Sizing Up Self-Harm Policies

The Stanford Internet Observatory’s latest report compares online platforms’ policies on self-harm content.

A note: This post discusses suicide and other forms of self-harm in the context of online platform policies. In the United States, the National Suicide Prevention Lifeline is available 24/7 in English at 1-800-273-8255 and in Spanish at 1-888-628-9454. It offers Tele-Interpreter services in over 150 additional languages.

Download report

Online platforms — from search engines to social media sites to chat apps — can play a role in supporting individuals considering hurting themselves by providing resources and making space for recovery and support communities. Yet they can also host content glorifying or inciting self-harm.

What are platforms’ public-facing policies on suicide, self-injury and eating disorders? In this report we analyze the published policies for 39 online platforms, including search engines, social media networks, creator platforms, gaming platforms, dating apps and chat apps. Emulating similar Internet Observatory analyses of platform policies on election and vaccine misinformation, the report ranks platforms based on policy comprehensiveness across defined categories. 

Key Takeaways: 

  • The comprehensiveness of public-facing policies varies widely, from Facebook's detailed 675-word policy to platforms with one-line policies or no policy at all. 
  • Gaming platforms and search engines perform worst on our metrics. Creator platforms (TikTok, Twitch and YouTube) do best.
  • Policy information can be difficult to locate, particularly information about the resources platforms share with individuals and about enforcement actions. Platforms sometimes publish detailed updates on how they enforce community guidelines in blog posts without updating the official community guidelines themselves.
  • Developing policies around suicide, self-injury and eating disorders is complicated because these policies target both distressed and recovering communities.
  • Platforms can play a vital role in providing help resources to users. Rather than simply taking action on violative content, platforms can provide links to hotlines or chat rooms. When and what resources are provided is rarely documented in public-facing policies.
  • Platforms can do more to clarify when and how they downrank violative content. 
Table: Self-harm policy ratings across platforms

While the existence of a platform’s policy does not mean the policy cannot be improved, this report takes the first step of documenting whether there is a public-facing policy at all. In ongoing research, we are assessing how effectively platforms implement their stated policies. 

Overall, we find that many of the platforms’ policies intended to keep users safe have significant gaps. While policies need not be identical across platforms, each platform should address content about suicide, self-injury and eating disorders. When platforms provide clear policies, users know what to expect and can understand why content may be acted upon, and self-harm prevention groups can assess whether a platform’s policies implement best practices.

If you believe we made an error in describing your platform’s policies, or want to update us on new policies, please email us at

Read More


Online Consent Moderation

New Approaches to Preventing Proliferation of Non-Consensual Intimate Images

Suicide prevention content in multiple languages on major platforms

Google, Facebook, YouTube and Instagram feature suicide prevention resources on English-language searches in the US, but are those resources available to a broader audience? Performance varies across countries and languages: supportive content appears more frequently in wealthy European and East Asian countries and less frequently in other regions.

Comparing Platform Hate Speech Policies: Reddit's Inevitable Evolution

On Monday, June 29, 2020, Reddit updated its policy on hate speech. As part of research for a forthcoming book based on the Stanford Internet Observatory’s Trust and Safety Engineering course, we present a comparative assessment of platform policies and enforcement practices on hate speech, and discuss how Reddit fits into this framework.