News

Riana Pfefferkorn of SIO spoke with Wired on Meta's expansion of end-to-end encryption in Messenger.

Julie Owono, Executive Director of the Content Policy & Society Lab (CPSL) and a fellow of the Program on Democracy and the Internet (PDI) at Stanford University, writes on the issue of banning platforms in an essay for Just Security.

In an essay for Lawfare Blog, Samantha Bradshaw of American University and Shelby Grossman of the Stanford Internet Observatory explore whether two key platforms, Facebook and Twitter, were internally consistent in how they applied their labels during the 2020 presidential election.

Blogs

An Investigation into an Inauthentic Facebook and Instagram Network Linked to an Israeli Public Relations Firm

During a hearing titled "A Growing Threat: Foreign and Domestic Sources of Disinformation," DiResta offered expert testimony on influence operations and the spread of narratives across social and media networks.

A look at how user choice and transparency provide new ways of addressing content moderation and online safety policy.

Gab was founded in 2016 as an uncensored alternative to mainstream social media platforms. Stanford Internet Observatory’s latest report looks at behaviors and dynamics across the platform.

"We cannot live in a world where Facebook and Google know everything about us and we know next to nothing about them." – Nate Persily

During three panel discussions at the Cyber Policy Center, speakers discussed the challenges of disinformation, its often negative impact on democracy, and potential solutions.

News, highlights, publications, events and opportunities from our programs and scholars

The Stanford Internet Observatory and the Trust and Safety Foundation will host a two-day conference focusing on cutting-edge research in trust and safety for those in academia, industry, civil society, and government.

A primer on the predictive models used for automated content moderation, known as classifiers.

On March 4, Cyber Policy Center experts and industry experts gathered to discuss the propaganda battles related to the conflict, already in full force.


Shelby Grossman shares what she and her team watch for when analyzing social media posts and other online reports related to the Russian invasion of Ukraine. (Appeared first in Stanford News)

The Journal of Online Trust and Safety published its second issue on Tuesday, March 1.

The Virality Project final report finds recycled anti-vaccine narratives and viral content driven by recurring actors.

Narratives from overt propaganda, unattributed Telegram channels, and inauthentic social media accounts.

Research on inauthentic behavior on TikTok, misinformation on Stanford's campus, Telegram activity in Belarus, health insurance scams that run advertisements on Google, and QAnon content on Tumblr.

In February, the Consulate General of France and the Content Policy & Society Lab convened a seminar to discuss online content moderation and the ways forward in 2022.


The Stanford Internet Observatory's Matt Masterson and Alex Stamos spoke at a virtual hearing on the importance of policy work in securing American elections.

How well do platform reporting flows and context labels work with screen readers for the visually impaired?