A recent study by the Stanford Internet Observatory has shed light on the troubling presence of child sexual abuse material (CSAM) on Mastodon, the decentralized social network often positioned as an alternative to Twitter. In just two days, the researchers found 112 instances of known CSAM across 325,000 posts on the platform, with the first instance surfacing within five minutes of searching.
To conduct the research, the Internet Observatory examined the 25 most popular Mastodon instances, using Google’s SafeSearch API to identify explicit images and PhotoDNA, a Microsoft-developed tool that detects known, flagged CSAM. In the course of the search, the team found 554 pieces of content matching hashtags or keywords commonly used by online child sexual abuse groups, all of which Google SafeSearch identified as explicit with the “highest confidence.”
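For readers unfamiliar with these tools, Google’s SafeSearch signal is exposed through the Cloud Vision API, and a single image can be screened in a few lines of Python. The sketch below is illustrative only: it assumes the google-cloud-vision client library and valid Google Cloud credentials, and the file name is hypothetical; it is not the study’s actual pipeline.

```python
# Minimal sketch: classify one image with Cloud Vision's SafeSearch annotation.
# Requires the google-cloud-vision package and Google Cloud credentials.
from google.cloud import vision


def safesearch_likelihoods(image_bytes: bytes) -> dict:
    """Return SafeSearch likelihood labels (e.g. VERY_LIKELY) for one image."""
    client = vision.ImageAnnotatorClient()
    response = client.safe_search_detection(image=vision.Image(content=image_bytes))
    annotation = response.safe_search_annotation
    return {
        "adult": vision.Likelihood(annotation.adult).name,
        "racy": vision.Likelihood(annotation.racy).name,
        "violence": vision.Likelihood(annotation.violence).name,
    }


if __name__ == "__main__":
    with open("flagged_media.jpg", "rb") as f:  # hypothetical local file
        print(safesearch_likelihoods(f.read()))
```

The API returns graded likelihoods rather than a binary verdict; the study’s “highest confidence” wording presumably corresponds to the top of that scale.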
In addition to explicit images, the study noted that there were 713 instances of the top 20 CSAM-related hashtags being used across the Fediverse (the interconnected network of decentralized social media platforms like Mastodon) in posts that contained media. Furthermore, there were 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study concluded that the open posting of CSAM on Mastodon is alarmingly prevalent.
One incident cited in the study highlighted the consequences of CSAM being posted on Mastodon. The mastodon.xyz server experienced an extended outage caused by CSAM content. In a post addressing the incident, the server’s sole maintainer admitted that moderation is done in their spare time, which can lead to delays of up to a few days. Unlike large platforms with extensive teams, Mastodon’s moderation relies on the efforts of individual administrators.
In response to the CSAM content in question, the registrar of the mastodon.xyz domain suspended the domain, rendering the server inaccessible to users until the situation was resolved. The registrar later added the domain to a “false positive” list to prevent future takedowns. As the researchers point out, however, the takedown was not a false positive, underscoring the need for more effective tools for addressing child safety concerns.
David Thiel, one of the researchers involved in the study, emphasized the severity of the findings: “We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close.” Thiel attributed this to Mastodon’s lack of the robust child-safety tooling that centralized social media platforms rely on.
As decentralized networks like Mastodon gain popularity, concerns over safety have inevitably arisen. Unlike mainstream sites such as Facebook, Instagram, and Reddit, decentralized networks grant control over moderation to each individual instance, leading to inconsistency across the Fediverse. To combat this issue, the researchers recommend that networks like Mastodon implement more comprehensive tools for moderators, such as PhotoDNA integration and CyberTipline reporting.
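To make that recommendation concrete, here is a deliberately simplified sketch of where such tooling would sit in an instance’s upload path. PhotoDNA is a proprietary perceptual-hashing service licensed through Microsoft, and CyberTipline reports are filed through NCMEC’s own systems, so the SHA-256 digest, the KNOWN_ABUSE_HASHES set, and the report_to_cybertipline() stub below are placeholders rather than real integrations.

```python
# Simplified moderation hook: hash every upload and check it against a vetted
# list of known-abuse hashes before the post is published. A real deployment
# would use a perceptual hash (robust to resizing and re-encoding) such as
# PhotoDNA rather than SHA-256, and would report through NCMEC's CyberTipline.
import hashlib

KNOWN_ABUSE_HASHES: set[str] = set()  # placeholder; populated from a vetted hash list


def report_to_cybertipline(upload_id: str, digest: str) -> None:
    """Placeholder for filing a report with NCMEC's CyberTipline."""
    print(f"REPORT upload={upload_id} hash={digest}")


def screen_upload(upload_id: str, media: bytes) -> bool:
    """Return True if the upload may be published, False if it was blocked."""
    digest = hashlib.sha256(media).hexdigest()
    if digest in KNOWN_ABUSE_HASHES:
        report_to_cybertipline(upload_id, digest)
        return False  # quarantine the media instead of publishing it
    return True
```

The point of the sketch is the placement: matching and reporting happen automatically at upload time, rather than depending on a lone administrator reviewing flags in their spare time.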
In conclusion, the Stanford Internet Observatory’s study puts the presence of child sexual abuse material on Mastodon into stark relief. The number of instances uncovered in just two days raises serious questions about the platform’s safety and the adequacy of its moderation tools. As decentralized social networks continue to gain traction, stronger measures will be needed to address child safety and prevent the proliferation of CSAM.