The Stanford Internet Observatory conducts broad, interdisciplinary research that blends rigorous academic work with timely reports and commentary to inform decision makers on critical issues. Our research falls loosely into four categories, although our outputs often integrate elements from several. Our researchers, from students and postdocs to early-career professionals and experienced experts, have contributed significant findings on, among other topics, the tactics behind online information sharing, the dynamics of alt-platforms, and content moderation tools. Our team has also continued to build technical tooling that facilitates better real-time online research.
Research Areas
Trust & Safety
Internet platforms and services are built for users, but are often used in unexpected ways that result in real human harm. Our research continues to investigate operational, design, engineering, and policy processes that address the potential misuse and abuse of platform features.
Technology companies and policymakers are increasingly addressing online harms in platform rules and public policy. SIO aims to ground government policy and regulation in technical reality while examining the impact and enforcement of platform policies and terms of service.
Information Interference
SIO research works toward a better understanding of how rumors become misleading narratives that spread across platforms and into the media. Our team studies the tactics and techniques that networks of social media accounts use to influence public opinion and the flow of information across online spaces and traditional media.
With an eye toward the future, our research examines the potential use and abuse of machine learning and artificial intelligence technologies for creating media. This includes studying the underlying technology and novel applications of deepfake video tools and Generative Pre-trained Transformer (GPT) automated writing tools.
This brief presents the findings of an experiment that measures how persuasive AI-generated propaganda is compared to foreign propaganda articles written by humans.
The online child safety ecosystem has already witnessed several key improvements in the months following the April publication of a landmark Stanford Internet Observatory (SIO) report, writes Riana Pfefferkorn, formerly a research scholar at the SIO and now a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
A new Stanford Internet Observatory report examines how to improve the CyberTipline pipeline, drawing on dozens of interviews with tech companies, law enforcement, and the nonprofit that runs the U.S. online child abuse reporting system.
Partisanship and social media usage correlate with belief in COVID-19 misinformation, and that misinformation shapes citizens' willingness to get vaccinated. However, this evidence comes overwhelmingly from frequent internet users in rich, Western countries. We ran a panel survey early in the pandemic, leveraging a pre-pandemic sample of urban, middle-class Nigerians, many of whom do not use the internet.
Social media has been fully integrated into the lives of most adolescents in the U.S., raising concerns among parents, physicians, public health officials, and others about its effect on mental and physical health. Over the past year, an ad hoc committee of the National Academies of Sciences, Engineering, and Medicine examined the research and produced this detailed report exploring that effect and laying out recommendations for policymakers, regulators, industry, and others in an effort to maximize the good and minimize the bad. Focus areas include platform design, transparency and accountability, digital media literacy among young people and adults, online harassment, and supporting researchers.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
A collaboration between the Stanford Internet Observatory and Thorn looks at the risks of sexual abuse material produced using machine learning image generators.
A new article in Social Media + Society uses three case studies to understand the participatory nature and dynamics of the online spread of misleading information.
A Stanford Internet Observatory investigation identified large networks of accounts, purportedly operated by minors, selling self-generated illicit sexual content. Platforms have updated safety measures based on the findings, but more work is needed.