Generative Machine Learning and Online Sexual Exploitation
The Stanford Internet Observatory and Thorn find that rapid advances in generative machine learning are making it possible to create realistic imagery that facilitates child sexual exploitation.
The Safeguarding Democracy Project brings together in dialogue scholars, election administrators, legislators, lawyers, voting rights advocates, and concerned citizens to develop practical solutions to urgent problems.
Challenges to Democracy in the Digital Information Realm
Former U.S. President Barack Obama delivered a keynote address about how information is created and consumed, and the threat that disinformation poses to democracy.
Following election day, narrative after bad-faith narrative took aim at election officials, often culminating in months of personal threats against their lives and the lives of their family members.
The Journal of Online Trust and Safety is a fee-free, open-access journal with fast peer review. Authors may submit letters of inquiry to assess whether their manuscript is a good fit.
Moderated Content from Stanford Law School is a podcast about content moderation, moderated by assistant professor Evelyn Douek. The community standards of this podcast prohibit anything except the wonkiest conversations about the regulation—both public and private—of what you see, hear, and do online.
The Trust & Safety Teaching Consortium is a coalition of academic, industry, and non-profit experts in online trust and safety problems. Our goal is to create content that can be used to teach a variety of audiences about trust and safety issues in a wide range of settings.
Platformer Highlights Findings from Journal Commentary
A February 2024 Platformer article highlighted a Journal of Online Trust and Safety commentary titled: “Burden of Proof: Lessons Learned for Regulators from the Oversight Board’s Implementation Work.”
Wall Street Journal Highlights Findings from Journal Article
A February 2024 article in the Wall Street Journal on talking to kids about sexting discussed a Journal of Online Trust and Safety article titled "American Parents’ Perceptions of Child Explicit Image Sharing."
A September 2023 article in the New York Times about fact checking discussed a Journal of Online Trust and Safety commentary titled "Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes."
Texas and Florida are telling the Supreme Court that their social media laws are like civil rights laws prohibiting discrimination against minority groups. They’re wrong.
As the U.S., EU and China are taking divergent leads in new AI regulations, a new framework for AI diplomacy is emerging, all under the shadow of strategic technological competition.
Renée DiResta is the technical research manager at Stanford Internet Observatory. Dave Willner is a Non-Resident Fellow in the Program on Governance of Emerging Technologies at Stanford Cyber Policy Center.
A collaboration between the Stanford Internet Observatory and Thorn looks at the risks of sexual abuse material produced using machine learning image generators.
This volume, edited by Marietje Schaake and Francis Fukuyama, provides perspectives on how digital technologies are used and perceived, and how they affect behavior, in a range of countries outside of North America and Europe. It should be seen as a modest first effort to gather comparative data on digital technology issues affecting ECs that will inform government policy, the platforms, and civil society around the world.