Generative Machine Learning and Online Sexual Exploitation
The Stanford Internet Observatory and Thorn find that rapid advances in generative machine learning are making it possible to create realistic imagery that facilitates child sexual exploitation.
The Safeguarding Democracy Project brings together in dialogue scholars, election administrators, legislators, lawyers, voting rights advocates, and concerned citizens to develop practical solutions to urgent problems.
Challenges to Democracy in the Digital Information Realm
Former U.S. President Barack Obama delivered a keynote address about how information is created and consumed, and the threat that disinformation poses to democracy.
Following election day, narrative after bad-faith narrative took aim at election officials, often culminating in months of personal threats against their lives and the lives of their family members.
The Journal of Online Trust and Safety is a no-fee, open-access journal with fast peer review. Authors may submit letters of inquiry to assess whether their manuscript is a good fit.
Moderated Content from Stanford Law School is a podcast about content moderation, moderated by assistant professor Evelyn Douek. The community standards of this podcast prohibit anything except the wonkiest conversations about the regulation—both public and private—of what you see, hear, and do online.
The Trust & Safety Teaching Consortium is a coalition of academic, industry, and non-profit experts in online trust and safety problems. Our goal is to create content that can be used to teach a variety of audiences about trust and safety issues.
Platformer Highlights Findings from Journal Commentary
A February 2024 Platformer article highlighted a Journal of Online Trust and Safety commentary titled: “Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work.”
Wall Street Journal Highlights Findings from Journal Article
A February 2024 article in the Wall Street Journal on talking to kids about sexting discussed a Journal of Online Trust and Safety article titled "American Parents’ Perceptions of Child Explicit Image Sharing."
A September 2023 article in the New York Times about fact checking discussed a Journal of Online Trust and Safety commentary titled "Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes."
This article examines the problem of statutory obsolescence in the regulation of rapidly evolving technologies, with a focus on GDPR and generative AI. It shows how core GDPR provisions on lawful processing, accuracy, and erasure prove difficult—if not impossible—to apply to AI systems, generating legal uncertainty and divergent national enforcement. The analysis highlights how comprehensive, principle-based instruments can quickly become inadequate in fast-moving technological domains. Drawing lessons from the GDPR, the article reflects on the need for more adaptive, flexible, and responsive regulatory approaches in the technological age.
Japan’s unique strategy – combining regulatory oversight, resource efficiency, and international partnership – offers a potential blueprint for the world. By Charles Mok and Athena Tong.
In an era where digital technology serves as both a tool for liberation and a threat to democracy, the term “digital authoritarianism” has emerged to describe the strategies employed by authoritarian regimes to exert control in the digital sphere. This chapter explores the defining characteristics of digital authoritarianism as exemplified by countries such as China and Russia, identifying three primary pillars: information control, mass surveillance, and the creation of a fragmented, isolated Internet. Furthermore, this chapter emphasizes that digital authoritarian practices are not confined to authoritarian regimes. Democratic governments and technologically advanced private corporations, especially the dominant tech companies shaping the modern Internet, are also capable of adopting authoritarian tactics. Finally, the chapter argues that the technology itself—through the omnipotence of code in cyberspace—may inherently foster a form of digital authoritarianism.
Existing Law and Extended Reality: An Edited Volume of the 2023 Symposium Proceedings, compiles and expands upon the ideas presented during the symposium. Edited by Brittan Heller, the collection includes contributions from symposium speakers and scholars who delve deeper into the regulatory gaps, ethical concerns, and societal impacts of XR and AI.
HAI and Stanford Cyber Policy Center,
August 6, 2024
A new report by Florence G'sell, visiting professor in the Program on Governance of Emerging Technologies at the Cyber Policy Center, addresses the urgent need for AI regulation.
The online child safety ecosystem has already witnessed several key improvements in the months following the April publication of a landmark Stanford Internet Observatory (SIO) report, writes Riana Pfefferkorn, formerly a research scholar at the SIO and now a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
The new law seeks to regulate critical infrastructure operators responsible for “continuous delivery of essential services” and “maintaining important societal and economic activities.”
A new Stanford Internet Observatory report, drawing on dozens of interviews with tech companies, law enforcement, and the nonprofit that runs the U.S. online child abuse reporting system, examines how to improve the CyberTipline pipeline.