Investigation Finds AI Image Generation Models Trained on Child Abuse

A new report identifies hundreds of known exploitative images of children in a public dataset used to train AI text-to-image generation models.
  • An investigation found hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI image generation models, such as Stable Diffusion.
  • Models trained on this dataset, known as LAION-5B, are being used to create photorealistic AI-generated nude images, including CSAM.
  • It is challenging to clean a publicly distributed dataset or stop its circulation once it has been widely disseminated. Future datasets could use freely available detection tools to prevent the collection of known CSAM.

A Stanford Internet Observatory (SIO) investigation identified hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI text-to-image generation models, such as Stable Diffusion.

A previous SIO report with the nonprofit online child safety group Thorn found that rapid advances in generative machine learning make it possible to use open-source AI image generation models to create realistic imagery that facilitates child sexual exploitation. Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B. The dataset included known CSAM scraped from a wide array of sources, including mainstream social media websites and popular adult video sites.

Removal of the identified source material is currently in progress: researchers reported the image URLs to the National Center for Missing and Exploited Children (NCMEC) in the U.S. and the Canadian Centre for Child Protection (C3P). The study was conducted primarily with perceptual hashing tools such as PhotoDNA, which match a fingerprint of an image against databases maintained by nonprofits that receive and process reports of online child sexual exploitation and abuse. Researchers did not view abuse content; matches were reported to NCMEC and confirmed by C3P where possible.

There are methods to minimize CSAM in datasets used to train AI models, but open datasets are difficult to clean or recall because no central authority hosts the actual data. The report outlines safety recommendations for collecting datasets, training models, and hosting models trained on scraped datasets. Images collected for future datasets should be checked against known CSAM hash lists using detection tools such as Microsoft's PhotoDNA, or in partnership with child safety organizations such as NCMEC and C3P; a minimal sketch of that kind of hash-based filtering follows.
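For illustration only, the sketch below shows how a data-collection pipeline might screen crawled images against a list of known-bad perceptual hashes before they enter a training set. It uses the open-source ImageHash library and a hypothetical hash-list file (`known_csam_hashes.txt`); PhotoDNA itself is a proprietary Microsoft service accessed through vetted partner programs, so this is an assumed stand-in for that workflow, not the tooling used in the report.

```python
from pathlib import Path

from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

# Hypothetical file of known-bad perceptual hashes, one hex digest per line,
# as might be supplied under agreement with a child safety organization.
KNOWN_HASHES = {
    imagehash.hex_to_hash(line.strip())
    for line in Path("known_csam_hashes.txt").read_text().splitlines()
    if line.strip()
}

# Hamming-distance threshold for a "match"; the right value depends on the
# hash algorithm in use and is an assumption here.
MAX_DISTANCE = 4


def should_exclude(image_path: Path) -> bool:
    """Return True if the image's perceptual hash matches a known-bad hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)


# Example: filter a crawled image directory before adding it to a dataset.
clean_images = [p for p in Path("crawl/").glob("*.jpg") if not should_exclude(p)]
```

In practice, matches would be reported to NCMEC or an equivalent body rather than simply dropped, and filtering would be combined with the other safeguards the report recommends.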

Read More

Blogs

Fake Profiles, Real Children

A Look at the Use of Stolen Child Imagery in Social Media Role-Playing Games
Blogs

New Report: "Scaling Trust on the Web"

The report from the Task Force for a Trustworthy Web maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces.
Blogs

Addressing Child Exploitation on Federated Social Media

New report finds an increasingly decentralized social media landscape offers users more choice, but poses technical challenges for addressing child exploitation and other online abuse.