The Stanford Internet Observatory conducts broad, interdisciplinary research that blends rigorous academic work with timely reports and commentary to inform decision makers on critical issues. Our research falls loosely into four categories, though our outputs often integrate elements from several. Our researchers, from students and postdocs to young professionals and experienced experts, have contributed significant findings on topics including the tactics behind online information sharing, the dynamics of alt-platforms, and content moderation tools. Our team has also continued to build technical tooling that facilitates better real-time online research.
Research Areas
Trust & Safety
Internet platforms and services are built for users, but are often used in unexpected ways that result in real human harm. Our research continues to investigate operational, design, engineering, and policy processes that address the potential misuse and abuse of platform features.
Technology companies and policymakers are increasingly addressing online harms in platform rules and public policy. SIO aims to ground government policy and regulation in technical reality while examining the impact and enforcement of platform policies and terms of service.
Information Interference
SIO research works toward a better understanding of how rumors become misleading narratives that spread across platforms and into the media. Our team studies the tactics and techniques that networks of social media accounts use to influence public opinion and the flow of information across online spaces and traditional media.
With an eye toward the future, we study the potential use and abuse of machine learning and artificial intelligence technologies for creating media, including the underlying technology and novel applications of deepfake video and Generative Pre-trained Transformer (GPT) automated writing tools.
A collaboration between the Stanford Internet Observatory and Thorn looks at the risks of sexual abuse material produced using machine learning image generators.
A new article in Social Media + Society uses three case studies to understand the participatory nature and dynamics of the online spread of misleading information.
A Stanford Internet Observatory investigation identified large networks of accounts, purportedly operated by minors, selling self-generated illicit sexual content. Platforms have updated safety measures based on the findings, but more work is needed.
Gab was founded in 2016 as an uncensored alternative to mainstream social media platforms. Stanford Internet Observatory’s latest report looks at behaviors and dynamics across the platform.
A disinformation researcher shares what she and her team watch for when analyzing social media posts and other online reports related to the Russian invasion of Ukraine (originally appeared in Stanford News).