An Australian anti-sexual exploitation group claims Instagram is a "predator's paradise", after its team compiled an alarming dossier of grooming-style behaviours on the popular social media platform.
Over several weeks, charity group Collective Shout discovered hundreds of predatory comments from men describing sexual acts they wanted to perform on underage girls, some as young as seven, whose photos had been shared on Instagram.
"These men were saying the most horrible things to these girls," Collective Shout campaigner Melinda Liszewski said.
She told nine.com.au how her team had observed predators using specific and cynical Instagram hashtags in comments underneath a photo to categorise and harvest images of young girls, often wearing swimsuits, gym kit and dance outfits.
Liszewski also warned that thousands of Instagram photographs of young girls were being "trafficked" into nefarious external websites, where they were sexualised by secret pedophile communities.
"They are writing very detailed erotic stories [on third party websites] about these girls," Liszewski said.
"Instagram is effectively enabling this exploitation," she claimed.
'Rampant' predatory behaviour
Collective Shout has joined a global campaign, #WakeUpInstagram, which is urging the $100 billion Silicon Valley giant to make technology changes so young girls are better protected and predators are weeded out.
Instagram needed to stop men posting sexualised, harassing and predatory comments on underage girls' posts, Liszewski said. This online behaviour was shockingly routine, she said. Typical comments her team found underneath photos included requests for nude images, references to rape, remarks about body parts and other sexually offensive material.
"It is rampant," she said.
Liszewski said Instagram should immediately stop strangers being allowed to direct message another person, a tactic often used by adult men who groom young girls. Even Instagram accounts set to private can still receive unsolicited direct messages from strangers.
A Facebook company spokesperson told nine.com.au that "any content that endangers or exploits children is unthinkable and has no place on Instagram".
The spokesperson said Instagram uses proactive technology to find and remove content that poses a risk to children.
Facebook would not divulge how many human moderators work in tandem with artificial intelligence programs to patrol Instagram and detect predatory behaviour.
Collective Shout questioned the effectiveness of Instagram's approach. The group claimed accounts and posts it had flagged to Instagram had been met with "inconsistent" action, or no action at all.
"They've got the capability to do more, if they want to do it," Liszewski said.
"We're asking why they won't apply that technology to care for these little girls."
Disturbing accounts sometimes operated by parents
Liszewski said her team had come across a number of disturbing Instagram accounts that seemed solely dedicated to posting images of minors, particularly images presented as modelling pictures. She urged Instagram to investigate these kinds of accounts to ensure they were not promoting sexualised photos or attracting sexualised comments.
Worryingly, these accounts sometimes appeared to be operated by someone who claims to be a parent of the minor, Liszewski said.
Instagram rules require users to be at least 13 years old before they can create an account. Accounts that represent someone under the age of 13 must clearly state in the account's bio the account is managed by a parent or manager.
Research from the Internet Watch Foundation (IWF) underlined the risk and prevalence of photos harvested from social media and video chat sites being posted to parasite websites.
Over a four-week period in 2012, IWF found that 88 percent of the 12,000 images and videos categorised as potential online child sexual abuse material reported to its hotline had originally come from social media.
The study's findings underlined "the extent to which control over self-generated content is lost once it has been circulated online."