New report finds generative machine learning exacerbates online sexual exploitation

The Stanford Internet Observatory and Thorn find that rapid advances in generative machine learning make it possible to create realistic imagery that facilitates child sexual exploitation.
Photo illustration: three synthetically generated images of the same child, each less blurry than the previous, on a blue background showing the image code. The base image for this photo illustration was generated using machine learning; this person does not exist.

Generative machine learning (ML) models have been rapidly adopted by large numbers of users. Large language model (LLM) chatbots and generative media tools can create human-like responses or any imagined scene in increasingly short timeframes. In just the past few months, some of this text and imagery has become so realistic that it is difficult to distinguish from reality. The public release of these tools has also spawned a thriving open-source community dedicated to expanding their capabilities.

Tools for generating realistic images and video are advancing rapidly in the open-source ML community, driven by a combination of advances in generative ML technology and increasingly powerful processing hardware available to consumers. However, we have also seen this technology misused to create deceptive accounts and content for government propaganda, and to produce non-consensual explicit media used for harassment and extortion.

A new report from researchers at the Stanford Internet Observatory and Thorn, a nonprofit working to address the role of technology in facilitating child sexual exploitation, highlights how this rapidly advancing technology poses a threat of child sexual exploitation, namely the production of increasingly realistic computer-generated child sexual abuse material (CSAM). The report outlines a number of potential technical mitigations and areas for industry and policy collaboration on AI ethics and safety measures. However, the researchers warn that the use of generative ML tools for creating realistic non-consensual adult content and CSAM is growing and likely to worsen without intervention by a broad array of stakeholders.
