Common Abuses on Mastodon: A Primer

Decentralized social networks may be the new model for social media, but their lack of a central moderation function makes it more difficult to combat online abuse.
  • Mastodon has gained popularity as a decentralized alternative to X (formerly Twitter). But as its user base grows, so do opportunities for abuse, including the proliferation of child sexual abuse material (CSAM), terrorist and extremist activity, spam and harassment, and data privacy concerns.
  • The lack of maturity and resources of decentralized social media sites makes them susceptible to abuses more easily dealt with by centralized platforms. 
  • The fediverse operates as a constellation of interconnected but individual servers, and tools to deal with these issues are currently limited. Users have the power to choose what kind of server (and community) they want to join, but are left with little protection against abuse beyond basic user tools and the prerogatives of their server’s admins and moderators. 
  • For a deeper look at abuses on Mastodon and how to address them, read SIO’s tips for running a Mastodon instance.

Mastodon, open-source social media software released in 2016 and popularized in the wake of Elon Musk’s acquisition of Twitter (now X), reached 10 million accounts earlier this year. Unlike X or Facebook, Mastodon is a decentralized social media platform in which individual servers, or “instances,” established around common themes communicate with one another. Users create an account on one specific instance, but aren’t stuck if they don’t like it – they can migrate to a new instance with their followers intact. Mastodon is part of the ever-evolving “fediverse,” an ecosystem powered by the decentralized networking protocol ActivityPub in which servers on different sites interact with each other. It joins other decentralized social media sites like Bluesky (powered by a rival protocol to ActivityPub and, as such, not part of the fediverse) as well as Meta-owned Threads, which Meta has announced will integrate with the fediverse.

Decentralized social networks have a lot of benefits. They’ve been heralded as a healthier alternative to centralized systems for several good reasons (including the inability to be bought by any individual). Because each instance has its own terms of service and content moderation rules, users can pick and choose instances under which to open an account, giving them more autonomy and control. And unlike in centralized systems, where moderation functions are controlled by the company, each instance administrator can publicly moderate as they see fit. In the fediverse, users have the power to choose what kind of community they want to be part of. 

But the lack of a central moderation function and dependence on individual action make it more difficult to combat online abuse, including child sexual exploitation material, terrorist and extremist content, and data and privacy breaches. Decentralized networks are, by design, takedown-resistant. The founder of Mastodon has stated he has no control over anything platform-wide; the fediverse doesn't work that way. Instance administrators can create rules holding users to a higher standard than centralized companies, or the opposite – and when the opposite occurs, there's no central authority users can ask to take down the material. 

Why Am I Seeing Harmful Content on My Instance?

The fediverse is set up in a way that mostly allows users to see content only from other instances already known to their instance. As a user, your Mastodon feeds and search bar will show content from your instance and the instances with which your server federates (save for hashtags and account usernames, which are fully searchable). According to Mastodon’s founder, being unable to search the whole fediverse via text on Mastodon is a feature, not a bug: it’s meant to keep you within your Mastodon community and prevent bad actors from searching for and targeting users who post about specific topics.

Mastodon users see several different feeds. Content from all accounts followed by other users on your instance will show up in your “Federated” timeline. This means that if a fellow user on your instance follows an account (on any instance) disseminating illegal or disturbing content, that content will also show up in your feed and in relevant search results. This can surface unwanted content in your feed. Worse, if you’re an instance administrator, that “bad” federated content is cached on your server for whatever period of time your storage service dictates.
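The instance-wide nature of the federated timeline can be illustrated with a toy model (the class names, accounts, and data structures below are invented for illustration; real Mastodon servers implement this logic in Ruby, server-side). The key point: the timeline is computed from every local user's follows, so one user's follow surfaces content for everyone on the instance.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    author: str      # e.g. "alice@mastodon.social"
    content: str

@dataclass
class Instance:
    domain: str
    # local username -> accounts they follow (local or remote)
    follows: dict = field(default_factory=dict)

def federated_timeline(instance: Instance, all_posts: list) -> list:
    """Posts from every account followed by *any* local user.

    This is why one user's follow can put content in front of
    everyone on the instance: the feed is computed instance-wide.
    """
    followed = set().union(*instance.follows.values()) if instance.follows else set()
    return [p for p in all_posts if p.author in followed]

# Bob's follow of a remote account puts its posts in the
# federated timeline of everyone on example.social.
instance = Instance("example.social", {
    "bob": {"stranger@other.instance"},
    "carol": set(),
})
posts = [Post("stranger@other.instance", "hello"), Post("nobody@elsewhere", "hi")]
print([p.content for p in federated_timeline(instance, posts)])  # ['hello']
```

Carol never followed anyone, yet the remote account's posts still reach her instance's federated feed, which is the dynamic the paragraph above describes.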

Child Sexual Abuse Material (CSAM)

Most social media sites would develop a major CSAM problem if left unmoderated. But Mastodon’s federated architecture makes finding and moderating CSAM more difficult, fueling the fediverse’s current CSAM problem.

While we don’t currently know the full prevalence of CSAM on Mastodon, precedent gives us good reason to worry. Japan-based servers, for example, have historically been major originators of CSAM on Mastodon. Japan changed its historically lax CSAM laws in 2014 after major international pressure. CSAM is now illegal to possess and distribute in Japan, but artificially created images and illustrations of it, known as “lolicon” and socially accepted in Japan, are not. Though the United States has similar laws – child sexual abuse material is illegal – past cases suggest that artificially created images or illustrations of CSAM (i.e. not involving an actual minor) are First Amendment-protected speech, unless considered “obscene.” CSAM is illegal across Europe, but EU member states differ on the legality of visual depictions. The ability to create insular communities with bespoke moderation drove Japanese users to Mastodon, where instances flourished that were organized around material that, while not necessarily illegal in Japan, might either be illegal or against instance terms of service (TOS) in the United States or Europe.

Mastodon offers a few tools to deal with harmful content like CSAM. On a broader scale, Mastodon takes policy into account when determining what instances to promote on its webpage. According to Mastodon’s 2019 Server Covenant, only instances actively moderating content in healthy ways will be featured on join.mastodon, the main webpage for those looking to create an account. But as in any decentralized system, most tools are only available at the individual instance level. Server administrators can “defederate” with another instance by blocking it entirely. Some instances publish their blocklists; others keep them hidden so nefarious actors looking for “bad” instances and content can’t find them. In light of stricter laws around CSAM in Europe and the United States, for example, European and U.S.-based instances often defederate with Japanese instances entirely to ensure media that is legal in Japan but possibly illegal elsewhere doesn’t find its way onto such servers. While large platforms with robust trust & safety teams are able to be more discerning in their moderation, the incentive to over-block in the fediverse is more compelling than the risk of being held liable for CSAM on your server. Two of the biggest instances on Mastodon, both Japan-based, are blocked by many “Western”-based instances for inappropriate content. Instance administrators can also use the “reject media” tool to ensure that media from a problematic instance is not cached locally on their server if they don’t want to block text content from the other server entirely.
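The difference between full defederation and “reject media” can be sketched as a minimal filtering policy (the domains, post structure, and function names below are invented for illustration; real Mastodon servers apply these rules in Ruby, server-side, and administrators configure them through the admin interface):

```python
from typing import Optional

# Policy per blocked domain: "suspend" drops posts entirely (full
# defederation); "reject_media" keeps text but never caches media.
BLOCKLIST = {
    "badactor.example": "suspend",
    "legal-elsewhere.example": "reject_media",
}

def apply_domain_policy(post: dict) -> Optional[dict]:
    """Return the post as local users should see it, or None if dropped."""
    domain = post["author"].split("@")[-1]
    policy = BLOCKLIST.get(domain)
    if policy == "suspend":
        return None                      # fully defederated: nothing shown or cached
    if policy == "reject_media":
        return {**post, "media": []}     # text federates, media is stripped
    return post                          # unlisted domains federate normally

incoming = [
    {"author": "u@badactor.example", "text": "spam", "media": ["a.png"]},
    {"author": "u@legal-elsewhere.example", "text": "hi", "media": ["b.png"]},
    {"author": "u@friendly.example", "text": "hello", "media": []},
]
visible = [q for q in (apply_domain_policy(p) for p in incoming) if q is not None]
print([(p["author"], p["media"]) for p in visible])
```

The suspended domain's post never reaches local users, while the “reject media” domain's text arrives with its attachment stripped, which is the trade-off the paragraph above describes for administrators who don't want to block another server's text entirely.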

Mastodon users probably aren’t aware of CSAM on the platform unless it leaks into their federated timelines. This can happen when a fellow user on their instance follows an account posting CSAM. Ways to handle this problem are few. Though users who follow CSAM-disseminating accounts can be suspended from an instance by administrators, they can easily set up a new account on another; the fediverse by design makes them hard to find. As EFF points out, unlike in a centralized system, it will take a great degree of coordination to meaningfully mitigate this phenomenon on Mastodon.

Any CSAM encountered on Mastodon should be reported by instance administrators in the United States to NCMEC’s CyberTipline as outlined in 18 U.S. Code § 2258A. Our starter guide for instance administrators has further details on this requirement, and what to do if the instance is UK or EU-based. Tools like Microsoft’s PhotoDNA or Thorn’s Safer that employ perceptual hashing are often voluntarily used by other platforms to proactively scan for and identify CSAM and report it to NCMEC, but currently no default integration between such services and the fediverse exists.
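PhotoDNA and Safer are proprietary, but the general idea behind perceptual hashing can be sketched with a toy difference hash (dHash) over a grayscale pixel grid. Everything below is illustrative only: real systems decode and resize the image themselves, use far more robust algorithms, and compare hashes against databases of known material rather than against each other ad hoc.

```python
def dhash(pixels: list) -> int:
    """Toy difference hash: one bit per horizontal neighbor comparison.

    `pixels` is a grayscale grid (values 0-255) already resized to
    9 columns x 8 rows, yielding a 64-bit hash.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits: small distance = visually similar."""
    return bin(a ^ b).count("1")

# Near-identical images produce nearby hashes even after small edits,
# which is what makes hash matching robust to re-encoding and resizing
# in a way that exact cryptographic hashes are not.
img = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 3   # a tiny perturbation, e.g. compression noise
print(hamming(dhash(img), dhash(tweaked)))  # → 1 (one bit flipped)
```

Because the hash encodes relative brightness rather than exact bytes, a re-encoded or lightly edited copy of a known image still lands within a small Hamming distance of the original's hash, allowing a match against a hash database without storing or viewing the image itself.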

Terrorism and Extremist Content

Mastodon once earned a reputation as a “Twitter without Nazis.” But that reputation was the Nazis’ choice, not Mastodon’s. In 2019, Gab, a social networking site known for hosting extremist content and users – particularly neo-Nazi material – adopted a version (or “fork”) of Mastodon’s open-source code using the ActivityPub protocol. Gab’s introduction to the fediverse meant Mastodon users could see Gab’s content, and vice versa. It became Mastodon’s largest instance at the time of migration. 

Reaction to Gab’s adoption of Mastodon source code was swift but largely uncoordinated. Mastodon’s founder released a statement opposing Gab’s “philosophy.” Individual instances blocked Gab, along with third-party Mastodon apps. (The Gab app had previously been banned from the Apple App Store and Google Play Store.) But these actions alone could not protect the wider Mastodon community, only the users whose instance administrators felt compelled to act. Months later, Gab defederated from Mastodon, as the majority of its users weren’t taking advantage of federation anyway, according to Gab’s CTO at the time. Nevertheless, the ordeal underscored that Mastodon’s open-source nature (much like its decentralized architecture) cuts both ways.

ISIS, historically very good at adopting new technologies and using social media, has also been experimenting with the fediverse. The terrorist group published a guide to Mastodon in November 2022. Calling it “better than Twitter,” ISIS lauded Mastodon for its ability to host varying degrees of moderation and allow terrorist content to evade detection. 

Data Privacy and Security 

Private messages on Mastodon are not end-to-end encrypted (E2EE), and instance administrators have access to all of their users’ private messages. This includes the administrator of the instance of whomever someone is privately chatting with.

Of course, while select employees of centralized social media companies can access private messages too, they are bound by strict access protocols that Mastodon administrators are not. Some companies are giving up that access entirely; in 2022, Meta began testing default E2EE for Facebook Messenger and Instagram direct messages in certain markets. But functionally, Mastodon’s DMs are less like private messages and more like tweeting at someone with the tweet’s visibility set to only you and the recipient. Integrating E2EE into this design has proven extremely difficult. A new effort to provide E2EE for Mastodon’s direct messages is underway after several failed attempts.

In theory, decentralized systems may also be less susceptible to the data breaches that centralized companies – where user data is housed by one entity – are vulnerable to. But the lack of a central data repository does not necessarily protect against data breaches. Several vulnerabilities have already been discovered in Mastodon’s code, and one of Mastodon’s biggest instances suffered its first major data breach in March 2023. An update was rolled out immediately to patch the vulnerability, but an incident of similar scope could easily happen again.

Mastodon is a volunteer-built platform, and it is much harder for a volunteer-run, distributed system to roll out protections like E2EE than it is for a centralized company. Responding to a data breach, or improving cyber hygiene, rests on nothing but the goodwill of individual instance administrators and the good Samaritans who pitch in to keep Mastodon’s code current.

Looking ahead 

These abuses are neither comprehensive nor fediverse-specific, but they manifest differently in decentralized spaces. Users and instance administrators are also likely to run into problems such as copyright infringement that implicate administrators’ potential legal liability for hosting certain content. A brief guide to understanding and addressing these abuses, aimed at instance administrators, can be found here.

Read More


Fake Profiles, Real Children

A Look at the Use of Stolen Child Imagery in Social Media Role-Playing Games

New Report: "Scaling Trust on the Web"

The report from the Task Force for a Trustworthy Web maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces

Addressing Child Exploitation on Federated Social Media

New report finds an increasingly decentralized social media landscape offers users more choice, but poses technical challenges for addressing child exploitation and other online abuse.