New Report on AI-Generated Child Sexual Abuse Material


Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims

In this report we aim to understand how educators, platform staff, law enforcement officers, U.S. legislators, and victims are thinking about and responding to AI-generated child sexual abuse material (CSAM). We interviewed 52 people, analyzed documents from four public school districts, and coded state legislation.

Our main findings are as follows. While the prevalence of student-on-student "nudify" app use in schools is unclear, schools are generally not addressing the risks of these apps with students, and some schools that have experienced a nudify incident have made missteps in their response. We also find that mainstream platforms report the CSAM they discover to the National Center for Missing & Exploited Children's (NCMEC) CyberTipline but, for various reasons, do not systematically try to discern and convey whether it is AI-generated. As a result, the task of identifying AI-generated material falls to NCMEC and law enforcement. Frontline platform staff, however, believe the prevalence of AI-generated CSAM on their platforms remains low. Finally, we find that legal risk is hindering CSAM red-teaming efforts at mainstream AI model-building companies.

This publication has been produced with financial support from Safe Online. However, the opinions, findings, conclusions, and recommendations expressed herein are those of the authors and do not necessarily reflect those of Safe Online.


News

Challenges in the Online Child Safety Ecosystem

How to improve the system for reporting child sex abuse material online. Originally published in Lawfare.
News

How Unmoderated Platforms Became the Frontline for Russian Propaganda

In an essay for Lawfare, Samantha Bradshaw, Renee DiResta, and Christopher Giles examine how Russian state war propaganda is increasingly prevalent on platforms that offer minimal-moderation virality as their value proposition.
Blogs

How to Fix the Online Child Exploitation Reporting System

A new Stanford Internet Observatory report examines how to improve the CyberTipline pipeline, drawing on dozens of interviews with tech companies, law enforcement, and the nonprofit that runs the U.S. online child abuse reporting system.