-

Out of the Rabbit Hole

PART OF THE FALL SEMINAR SERIES

Join us on October 5th at the Cyber Policy Center seminar from 12 PM - 1 PM PST featuring Becca Lewis, PhD Candidate in Communication at Stanford University. The session will be moderated by Kelly Born, Director of the Cyber Initiative at the William and Flora Hewlett Foundation. 

Conventional wisdom suggests that conspiracy theories and far-right propaganda thrive mainly at the end of algorithmic rabbit holes, in the deep, dark corners of the internet. This presentation argues the opposite: harmful ideas gain traction through the charisma and popularity of internet celebrities in mainstream social media contexts. Through her extensive research on far-right YouTubers, Becca Lewis argues that instead of merely focusing our responses on the threat of algorithmic rabbit holes, we must also understand the power of amplification through thriving alternative media systems both on- and offline.

  

REGISTER

 

Speaker Profile:

Becca Lewis is a Stanford Graduate Fellow and PhD candidate in Communication at Stanford University, as well as a research affiliate at the Data & Society Research Institute and the University of North Carolina’s Center for Information, Technology, and Public Life. Her research has been published in the journals Social Media + Society, Television & New Media, and American Behavioral Scientist, and her public writing has appeared in outlets including The Guardian, New York Magazine, and Columbia Journalism Review. She holds an MSc in Social Science from the Oxford Internet Institute.


 

-


PART OF THE FALL SEMINAR SERIES

Join us via Zoom on Tuesday, September 28th from 12 PM - 1 PM PST for a conversation with PDI Fellows Julie Owono and Dr. Niousha Roshani, moderated by Nathaniel Persily, Co-Director of the Cyber Policy Center, as they discuss the challenges of content policy and the solutions that a multistakeholder approach has to offer. This is part of the fall seminar series organized by the Stanford Cyber Policy Center’s Program on Democracy and the Internet and the William and Flora Hewlett Foundation’s Cyber Initiative.

  

REGISTER

 

The multistakeholder governance model is increasingly presented as a solution for addressing content governance issues online. While this inclusive and collaborative approach mirrors the foundational principle of an “open and free internet,” challenges particular to online content, notably its scale and speed, call for further experimentation.

The Content Policy & Society Lab (CPSL), a new project of the Program on Democracy and the Internet (PDI) at Stanford’s Cyber Policy Center (CPC), aims to be one such experiment by creating a safe space for a diverse array of stakeholders from government, the private sector, civil society, and academia to share knowledge and collaborate on solutions.

Moderator: Nathaniel Persily, Co-Director, Cyber Policy Center

Speakers: Julie Owono, PDI Fellow and Niousha Roshani, PDI Fellow

 

 

 

-

Technology and Geopolitics: EU Proposals for Regulating Rights, Security and Trade

The future of technology policy in Europe will be affected by growing nationalism and protectionism, cyber and national security threats, and great-power rivalries. The Program on Democracy and the Internet invites you to a technology policy discussion led by International Policy Director Marietje Schaake.

Join us on September 16th from 9 AM - 12 PM PST (6 PM - 9 PM CET) as we dive into conversations on EU legislative packages, digital trade rules, and cybersecurity and geopolitics. We hope to develop a more precise understanding of how the EU and its allies can collaborate to create compatible technology standards, build more resilient supply chains, and address the novel opportunities and risks presented by emerging technologies.

This event is organized by the Program on Democracy and the Internet (part of the Cyber Policy Center and the Center on Philanthropy and Civil Society) and co-sponsored by the Institute for Human-Centered Artificial Intelligence.


Whether the targets are local governments, hospital systems, or gas pipelines, ransomware attacks in which hackers lock down a computer network and demand money are a growing threat to critical infrastructure. The attack on Colonial Pipeline, a major supplier of fuel on the East Coast of the United States, is just one of the latest examples—there will likely be many more. Yet the federal government has so far failed to protect these organizations from cyberattacks, and even its actions since May, when Colonial Pipeline was attacked, fall short of what’s necessary.

Read more 

Commentary: Op-ed in Bulletin of the Atomic Scientists, by Gregory Falco and Sejal Jhawer
-

With the rise of national digital identity systems (Digital ID) across the world, there is a growing need to examine their impact on human rights. While these systems offer accountability and efficiency gains, they also pose risks of surveillance, exclusion, and discrimination. In several instances, national Digital ID programs started with a specific scope of use but have since been deployed for different applications and in different sectors. This raises the question of how to determine appropriate and inappropriate uses of Digital ID programs, which, given the personal data they collect, create an inherent power imbalance between the state and its residents.

On Wednesday, June 23rd @ 10:00 am Pacific Time, join Amber Sinha of India’s Center for Internet and Society (CIS), Anri van der Spuy of Research ICT Africa (RIA), and Dr. Tom Fischer of Privacy International in conversation with Kelly Born, Director of the Hewlett Foundation’s Cyber Initiative and fellow at Stanford’s Cyber Policy Center. They will discuss the challenges and opportunities posed by digital identity systems, a proposed framework for assessing trade-offs and ensuring that human rights are adequately protected, and experiences in translating and adapting the new digital ID assessment framework developed by CIS and RIA to different contexts and geographies.

Amber Sinha 
Anri van der Spuy
Dr. Tom Fischer 
Kelly Born
Riana Pfefferkorn

India’s information technology ministry recently finalized a set of rules that the government argues will make online service providers more accountable for their users’ bad behavior. Noncompliance may expose a provider to legal liability from which it is otherwise immune. Despite the rules’ apparently noble aim of incentivizing providers to better police their services, in reality, the changes pose a serious threat to Indians’ data security and reflect the Indian government’s increasingly authoritarian approach to internet governance.

The government of Prime Minister Narendra Modi has in recent years taken a distinctly illiberal approach to online speech. When India’s IT ministry released its original draft of the rules more than two years ago, civil society groups criticized the proposal as a grave threat to free speech and privacy rights. In the intervening years, threats to free speech have only grown. To quell dissent, Modi’s government has shut off the internet in multiple regions. Facing widespread protests led by the country’s farmers against his government, Modi has escalated his attacks on the press and pressured Twitter into taking down hundreds of accounts critical of the government’s protest response. The new rules represent the latest tightening of state control over online content, and as other backsliding democracies consider greater restrictions on online speech, the Modi government is providing a troubling model for how to do so. 

Beyond chilling digital rights, the new rules threaten to undermine computer security systems that Indian internet users rely on every day in order to grant the state increased power to police online content. The new rules require messaging services to be able to determine the origin of content and demand that online platforms develop automated tools to take down certain content deemed illegal. Taken together, the new rules pose threats to freedom of speech and the privacy and security of India’s internet users. 

The relevant provisions apply to “significant” “social media intermediaries” (which I’ll call SSMIs for short). “Significant” means the provider has hit a yet-to-be-defined number of registered Indian users. “Social media intermediary” broadly encompasses many kinds of services driven by user-generated content. A government press release calls out WhatsApp, YouTube, Facebook, Instagram, and Twitter specifically, but services as diverse as LinkedIn, Twitch, Medium, TikTok, and Reddit also fall within the definition.

Two provisions are of particular concern. Section 4(2) of the new rules requires SSMIs that are “primarily” messaging providers to be able to identify the “first originator” of content on the platform. Section 4(4) requires any SSMI (not limited to messaging) to “endeavour to deploy technology-based measures, including automated tools or other mechanisms to proactively identify” two categories of content: child sex abuse material and content identical to anything that’s been taken down before. I’ll call these the “traceability” and “filtering” provisions.

These provisions endanger the security of Indian internet users because they are incompatible with end-to-end encryption. End-to-end encryption, or E2EE, is a data security measure for protecting information by encoding it into an illegible scramble that no one but the sender and the intended recipient can decode. That way, the encrypted data remains private, and outsiders can’t alter it en route to the recipient. These features, confidentiality and integrity, are core underpinnings of data security. 

Not even the provider of an E2EE service can decrypt encrypted information. That’s why E2EE is incompatible with tracing and filtering content. Tracing the “originator” of information requires the ability to identify every instance when some user sent a given piece of information, which an intermediary can’t do if it can’t decode the encrypted information. The same problem applies to automatically filtering a service for certain content. 
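The incompatibility described above can be sketched in a few lines of Python. This is a deliberately toy stream cipher built from SHA-256, purely for illustration (real E2EE apps use vetted protocols such as the Signal protocol, and the message, key, and blocklist here are hypothetical). The point it demonstrates: the provider, holding only ciphertext, cannot hash-match traffic against a blocklist of known-bad content, while the intended recipient, holding the key, decrypts normally.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key + nonce + counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a keystream; prepend the random nonce."""
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Split off the nonce, regenerate the keystream, and XOR back."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

# Sender and recipient share a key; the provider never sees it.
shared_key = secrets.token_bytes(32)
message = b"meet at noon"
ciphertext = encrypt(shared_key, message)

# The provider's automated filter sees only ciphertext, so matching a
# hash of the traffic against a blocklist of known-bad content fails.
blocklist = {hashlib.sha256(b"meet at noon").hexdigest()}
provider_sees = hashlib.sha256(ciphertext[16:]).hexdigest()
print(provider_sees in blocklist)       # False: the filter is blind

# The intended recipient, who holds the key, recovers the plaintext.
print(decrypt(shared_key, ciphertext))  # b'meet at noon'
```

Note that the random nonce means even two identical messages produce different ciphertexts, which is also why tracing a “first originator” by matching ciphertexts across the network fails under E2EE.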

Put simply, SSMIs can’t provide end-to-end encryption and still comply with these two provisions. This is by design. Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.

The rules confront SSMIs with an untenable choice: either weaken their data security practices, or open themselves up to expensive litigation as the price of strong security. Intermediaries should not be penalized for choosing to protect users’ data. Indeed, the existing rules already require intermediaries to take “reasonable measures” to secure user data. If SSMIs weaken their encryption to comply with the new traceability and filtering provisions, will that violate the “reasonable data security” provision? This tension creates yet another quandary for intermediaries.

The new rules make a contradictory demand: Secure Indians’ data—but not too well. A nation of 1.3 billion people cannot afford half-measures. National, economic, and personal security have become indivisible from data security. Strong encryption is critical to protecting data, be it military communications, proprietary business information, medical information, or private conversations between loved ones. Good data security is even more vital since the COVID-19 pandemic shifted much of daily life online. Without adequate protective measures, sensitive information is ripe for privacy invasions, theft, espionage, and hacking.

Weakening intermediaries’ data security is a gift to those who seek to harm India and its people. Citing national security and privacy concerns, Indian authorities have moved to restrict the presence of Chinese apps in India, but these new rules risk exposing the country’s internet users. The rules affect all of an intermediary’s users, not just those using the platform for bad acts. Over 400 million Indians currently use WhatsApp, and Signal hopes to add 100-200 million Indian users in the next two years. Most of those half-billion people are not criminals. If intermediaries drop E2EE to comply with the new rules, that primarily jeopardizes the privacy and security of law-abiding people, in return for making it easier for police to monitor the small criminal minority. 

Such monitoring may prove less effective than the Indian government expects. If popular apps cease offering E2EE, many criminals will drop those apps and move to the dark web, where they’re harder to track down. Some might create their own encrypted apps, as Al-Qaeda did as far back as 2007. In short, India’s new rules may lead to a perverse outcome where outlaws have better security than the law-abiding people whom they target. 

Meanwhile, weakening encryption is not the only way for police to gather evidence. We live in a “golden age for surveillance” in which our activities, movements, and communications generate a wealth of digital information about us. Many sources of digital evidence, such as communications metadata, cloud backups, and email, are not typically end-to-end encrypted. That means they’re available from the service provider in readable form. If Indian police have difficulty acquiring such data (for example because the data and the company are located outside of India), it’s not due to encryption, and passing rules limiting encryption will do nothing to ameliorate the problem.

When intermediaries employ end-to-end encryption, that means stronger security for communities, businesses, government, the military, institutions, and individuals—all of which adds up to the security of the nation. But the new traceability and filtering requirements may put an end to end-to-end encryption in India. The revised intermediary rules put the whole country’s security at risk. Amid a global backsliding for internet freedom, the proposal may offer an example for other would-be authoritarians to follow. 

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

Facebook, Google, and Microsoft provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 

Read More


Q&A with Riana Pfefferkorn, Stanford Internet Observatory Research Scholar

Riana Pfefferkorn joined the Stanford Internet Observatory as a research scholar in December. She comes from Stanford’s Center for Internet and Society, where she was the Associate Director of Surveillance and Cybersecurity.

Analysis of February 2021 Twitter Takedowns

In this post and in the attached reports we investigate a Twitter network attributed to actors in Armenia, Iran, and Russia.

Analyzing a Twitter Takedown Originating in Saudi Arabia

Stanford Internet Observatory

On May 6, 2021, Facebook announced the takedown of 32 Pages, 46 Profiles, and six Instagram accounts operated by individuals in the Central African Republic (CAR) whose activities targeted audiences in CAR. Facebook shared this network with the Stanford Internet Observatory (SIO) on April 26, 2021. This network was suspended not due to the content of its posts, but rather for coordinated inauthentic behavior. SIO found significant indications both on and off platform that many of the assets removed in this takedown were aliases for the same entity. 

The suspended network exhibited strong ties to Harouna Douamba, a pseudonym for an allegedly Burkinabe individual who has gained notoriety in CAR for the information campaigns he wages on social media. Douamba claims to be the president of three non-governmental organizations (NGOs): Aimons Notre Afrique (ANA), Coalition Afrique Engagée (CAE), and Fédération Nationale des Ivoiriens d’Origine Étrangères (FENIOE). Facebook Pages for these organizations were included in the suspended network, in addition to Pages for several other NGOs and media companies with ties to Douamba. We also found some evidence that one of the suspended Profiles may be the individual behind the Harouna Douamba pseudonym. Facebook attributes the network to ANA.

NGOs and media outlets linked to Harouna Douamba

Suspended Pages consistently disparaged France’s involvement in CAR but praised President Faustin-Archange Touadéra and Russia. They also published slanted stories about other West and Central African countries.

We also investigated Douamba’s connections to a disinformation campaign that claimed four officials associated with the UN peacekeeping mission in CAR (the Multidimensional Integrated Stabilization Mission in the Central African Republic, known as MINUSCA) trafficked arms to rebels operating in a neighborhood in Bangui, the CAR capital. One of the suspended Pages was deeply involved in this effort and posted what might qualify as incitements to violence.

Key takeaways: 

  • The suspended network centered around the activities of Harouna Douamba. Nearly all of the suspended Pages have connections to Douamba and/or frequently published content featuring Douamba and the activities of his NGOs. Several of the suspended Profiles and Instagram accounts also appear to have direct ties to Douamba, his NGOs, or affiliated media companies. 

  • Many of the suspended Pages claimed to be NGOs that seek to advance Pan-African causes. However, these NGOs largely appear to be thinly veiled aliases for Douamba’s ANA and CAE NGOs. Pages for these organizations demonstrated significant coordinated behavior. For instance, they frequently shared duplicated content from ANA and CAE, usually within 10 to 15 minutes of the original posts. 

  • One of the suspended Pages was a coordinating force behind a 2020 disinformation campaign alleging that UN peacekeepers in CAR trafficked weapons to rebel groups and calling for revolt against the peacekeeping operation. This is strong evidence that Douamba is linked to that disinformation campaign.

  • Eighteen domains, largely French-language news sites covering Central and West Africa, were linked to the network. There is substantial evidence that the sites are linked to each other and to Douamba. The ANA website, for instance, lists nearly all of the news sites as part of its media group, ANA-COM.

  • Topically, the network largely pushed content critical of France and supportive of the Touadéra regime and Russia. It also published slanted stories about other West and Central African countries.

  • The network also attempted to build its audience across platforms. One post that was shared widely by suspended Pages called for Pan-Africanists to include their WhatsApp numbers in the comments. However, few users shared this information.

 

Read More


Staying Current

An Investigation Into a Suspended Facebook Network Supporting the Leader of the Palestinian Democratic Reform Current

Stoking Conflict by Keystroke

New Facebook takedowns expose networks of Russian-linked assets targeting Libya, Sudan, Syria, and the Central African Republic.
A Facebook takedown exposes a network of NGO and media entities linked to Harouna Douamba.

-


On Wednesday, May 26 at 10 AM Pacific Time, please join Andrew Grotto, Director of Stanford’s Program on Geopolitics, Technology and Governance, for a conversation with Nicole Perlroth, New York Times cybersecurity reporter, about the underground market for cyber-attack capabilities.

In her book “This Is How They Tell Me the World Ends: The Cyberweapons Arms Race,” Perlroth argues that the United States government became the world’s dominant hoarder of one of the most coveted tools in a spy’s arsenal, the zero-day vulnerability. After briefly cornering the market, in her account, the United States then lost control of its hoard and the market.

Perlroth and Grotto, a former Senior Director for Cybersecurity Policy at the White House in both the Obama and Trump Administrations, will talk about the development and evolution of this market, and what it portends about the future of conflict in cyberspace and beyond.

This event is co-sponsored by the Freeman Spogli Institute for International Studies and the Cyber Policy Center.

Praise for “This Is How They Tell Me the World Ends”: “Perlroth's terrifying revelation of how vulnerable American institutions and individuals are to clandestine cyberattacks by malicious hackers is possibly the most important book of the year . . . Perlroth's precise, lucid, and compelling presentation of mind-blowing disclosures about the underground arms race a must-read exposé.” —Booklist, starred review

-

End-to-end encrypted (E2EE) communications have been around for decades, but the deployment of default E2EE on billion-user platforms has new implications for user privacy and safety. The deployment comes with benefits to both individuals and society, but it also creates new risks, as long-existing patterns of messenger abuse can now flourish beyond the reach of automated or human review. New E2EE products raise the prospect of less understood risks by adding discoverability to encrypted platforms, allowing contact from strangers and increasing the risk of certain types of abuse. This workshop will focus in particular on platform benefits and risks that affect civil society organizations, especially in the Global South. Through a series of workshops and policy papers, the Stanford Internet Observatory is facilitating open and productive dialogue on this contentious topic to find common ground.

An important defining principle behind this workshop series is the explicit assumption that E2EE is here to stay. To that end, our workshops have set aside any discussion of exceptional access (a.k.a. “backdoor”) designs. That debate has raged among industry, academic cryptographers, and law enforcement for decades, and little progress has been made. We focus instead on interventions that can reduce the harm of E2EE communication products but have been less widely explored or implemented.

Submissions for working papers and requests to attend will be accepted up to 10 days before the event. Accepted submitters will be invited to present or attend our upcoming workshops. 

SUBMIT HERE
