Addressing Child Exploitation on Federated Social Media
A new report finds that an increasingly decentralized social media landscape offers users more choice but poses technical challenges for addressing child exploitation and other online abuse.
Registration Open for the 2023 Trust and Safety Research Conference
Tickets are on sale for the Stanford Internet Observatory’s Trust and Safety Research Conference, to be held September 28-29, 2023. Lock in early bird prices by registering before August 1.
Assessing Political Motivations Behind Ransomware Attacks
Recent developments suggest possible links between some ransomware groups and the Russian government. We investigate this relationship by creating a dataset of ransomware victims and analyzing leaked communications from a major ransomware group.
Research on inauthentic behavior on TikTok, misinformation on Stanford's campus, Telegram activity in Belarus, health insurance scams that run advertisements on Google, and QAnon content on Tumblr.
The report is the culmination of work by Aspen Digital’s Commission on Information Disorder, with guidance from the Stanford Cyber Policy Center’s Renée DiResta, Alex Stamos, Daphne Keller, Nate Persily, and Herb Lin, and provides a framework for action with 15 recommendations to build trust and reduce harm.
This is the fourth in a series of pieces we have published on societies and elections at risk from online disinformation. Brazil’s politically fueled disinformation engine puts the country in the midst of an information crisis in the lead-up to its 2022 presidential election.
Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory and a member of the Global Encryption Coalition. This piece first appeared in Brookings TechStream.
In The Politics of Order in Informal Markets: How the State Shapes Private Governance, Grossman explores findings that challenge the conventional wisdom that good private governance in developing countries thrives when the government keeps its hands off private group affairs.
When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they never said or did.