IO - Home

News

New teaching materials from the Trust and Safety Teaching Consortium

This brief presents the findings of an experiment measuring how persuasive AI-generated propaganda is compared with foreign propaganda articles written by humans.

Through the Policy Change Studio, students partner with international organizations to propose policy-driven solutions to new digital challenges.

Assessing Coordinated Inauthentic Behavior on X, TikTok and Telegram following Meta’s 2023 Q3 Adversarial Threat Report

How to improve the system for reporting child sexual abuse material online. Originally published in Lawfare.

A new Stanford Internet Observatory report examines how to improve the CyberTipline pipeline, drawing on dozens of interviews with tech companies, law enforcement and the nonprofit that runs the U.S. online child abuse reporting system.

Youth online health and safety leaders urged action, shared accountability and a nuanced approach to empowering and protecting young people online.

A new preprint paper looks at the ways Facebook Page operators are using AI image models to create surreal content and generate online engagement.

The Stanford Internet Observatory and Social Media Lab will hold a March 13 convening with the Biden-Harris Administration’s Kids Online Health & Safety Task Force and leading experts.

On December 26, 2023, Stanford University filed an amicus brief in the Supreme Court of the United States to correct false allegations made about the university and the Stanford Internet Observatory (SIO).

The seventh issue features four peer-reviewed articles and four commentaries.

From Tech Policy Press, by Dave Willner and Samidh Chakrabarti, both of the Program on Governance of Emerging Technologies at the CPC.

A new report identifies hundreds of instances of exploitative images of children in a public dataset used for AI text-to-image generation models.

New work in Nature Human Behaviour from SIO researchers and co-authors looks at how generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and may undermine public trust.