Cyber Policy Center

LATEST NEWS FROM THE CPC


Charles Mok is an internet entrepreneur and IT advocate. He was formerly a member of the Hong Kong Legislative Council and founded the Hong Kong chapter of the Internet Society. He is currently a Visiting Scholar at the Global Digital Policy Incubator at Stanford University. This article appeared in OPTF.

A political cartoon encouraging Indians to boycott Chinese products
Blogs

An Analysis of a Pro-Indian Army Covert Influence Operation on Twitter

Pan’s research focuses on authoritarian politics, including how preferences and behaviors are shaped by political censorship, propaganda, and information manipulation.

Blogs

In our latest report, the Stanford Internet Observatory collaborated with Graphika to analyze a large network of accounts removed from Facebook, Instagram, and Twitter. This information operation likely originated in the United States and targeted a range of countries in the Middle East and Central Asia.

Twitter suspended a network of accounts that coordinated to promote narratives around the coronavirus pandemic, and to amplify a pro-Russian news site ahead of the invasion of Ukraine.

The Program on Platform Regulation's Daphne Keller worked with the ACLU to file this comment in the Meta Oversight Board's "UK Drill Music" case.

Following the success of The China Questions, a new volume of insights from top China specialists explains key issues shaping today’s United States–China relationship. Graham Webster of the DigiChina Project authored "What Is at Stake in the US–China Technological Relationship?" for the book.

In an essay for Lawfare Blog, Samantha Bradshaw, Renee DiResta and Christopher Giles look at how state war propaganda in Russia is increasingly prevalent on platforms that offer minimal-moderation virality as their value proposition.

Riana Pfefferkorn of SIO spoke with Wired on Meta's expansion of end-to-end encryption in Messenger.

Julie Owono, Executive Director of the Content Policy & Society Lab (CPSL) and a fellow of the Program on Democracy and the Internet (PDI) at Stanford University, writes on the issue of banning platforms in a piece authored for Just Security.

In an essay for Lawfare Blog, Samantha Bradshaw of American University and Shelby Grossman of the Stanford Internet Observatory explore whether two key platforms, Facebook and Twitter, were internally consistent in how they applied their labels during the 2020 presidential election.

A graphic depiction of a face falling towards the ground on a red background, overlaid with a black satellite dish and the word "takedown".
Blogs

An Investigation into an Inauthentic Facebook and Instagram Network Linked to an Israeli Public Relations Firm

During a hearing titled “A Growing Threat: Foreign And Domestic Sources Of Disinformation," DiResta offered expert testimony on influence operations and the spread of narratives across social and media networks.

A look at how user choice and transparency provide new ways of addressing content moderation and online safety policy.

Gab was founded in 2016 as an uncensored alternative to mainstream social media platforms. Stanford Internet Observatory’s latest report looks at behaviors and dynamics across the platform.

"We cannot live in a world where Facebook and Google know everything about us and we know next to nothing about them." – Nate Persily

During three panel discussions at the Cyber Policy Center, speakers discussed the challenges posed by disinformation, potential solutions, and its often negative impact on democracy.

News, highlights, publications, events and opportunities from our programs and scholars

The Stanford Internet Observatory and the Trust and Safety Foundation will host a two-day conference focusing on cutting-edge research in trust and safety for those in academia, industry, civil society, and government.

A primer on the predictive models used for automated content moderation, known as classifiers.
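As a rough illustration of the kind of predictive model the primer discusses, here is a minimal Python sketch of a text classifier scoring content against a policy category. The training examples, pipeline, and scoring threshold are illustrative assumptions for this sketch only, not the models platforms actually deploy.

# A minimal sketch of a content-moderation classifier: a predictive model
# that scores text against a hypothetical policy category.
# The labeled examples below are toy data, assumed for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = violates the (hypothetical) policy, 0 = benign.
texts = [
    "buy followers cheap click this link now",
    "limited offer click here to win a prize",
    "great discussion at the panel yesterday",
    "sharing our new research report on trust and safety",
]
labels = [1, 1, 0, 0]

# Fit a simple bag-of-words model; production classifiers are far larger.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; content above a chosen threshold would be routed
# for removal or human review.
score = model.predict_proba(["click this link to win followers"])[0][1]
print(f"policy-violation score: {score:.2f}")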

On March 4th, Cyber Policy Center experts and industry experts gathered to discuss the propaganda battles, already in full force, surrounding the conflict.