Authors
Riana Pfefferkorn
News Type
Blogs

When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they haven’t.

This technology poses a particular threat to marginalized communities. If deepfakes cause society to move away from the current “seeing is believing” paradigm for video footage, that shift may negatively impact individuals whose stories society is already less likely to believe. The proliferation of video recording technology has fueled a reckoning with police violence in the United States, as incidents are captured by bystanders and police body-worn cameras. But in a world of pervasive, compelling deepfakes, the burden of proving a video’s authenticity may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development, but these technologies, though well-intentioned, could end up being used to discredit already marginalized voices.

(Content Note: Some of the links in this piece lead to graphic videos of incidents of police violence. Those links are denoted in bold.)

Recent police killings of Black Americans caught on camera have inspired massive protests that have filled U.S. streets in the past year. Those protests endured for months in Minneapolis, where former police officer Derek Chauvin was convicted this week in the murder of George Floyd, a Black man. During Chauvin’s trial, another police officer killed Daunte Wright just outside Minneapolis, prompting additional protests as well as the officer’s resignation and arrest on second-degree manslaughter charges. She supposedly mistook her gun for her Taser—the same mistake alleged in the fatal shooting of Oscar Grant in 2009, by an officer whom a jury later found guilty of involuntary manslaughter (but not guilty of a more serious charge). All three of these tragic deaths—George Floyd, Daunte Wright, Oscar Grant—were documented in videos that were later used (or, in Wright’s case, seem likely to be used) as evidence at the trials of the police officers responsible. Both Floyd’s and Wright’s deaths were captured by the respective officers’ body-worn cameras, and multiple bystanders with cell phones recorded the Floyd and Grant incidents. Some commentators credit a 17-year-old Black girl’s video recording of Floyd’s death with making Chauvin’s trial happen at all.

The growth of the movement for Black lives in the years since Grant’s death in 2009 owes much to the rise in the availability, quality, and virality of bystander videos documenting police violence, but this video evidence hasn’t always been enough to secure convictions. From Rodney King’s assailants in 1992 to Philando Castile’s shooter 25 years later, juries have often declined to convict police officers even in cases where wanton police violence or killings are documented on video. Despite their growing prevalence, police bodycams have had mixed results in deterring excessive force or impelling accountability. That said, bodycam videos do sometimes make a difference, helping to convict officers in the killings of Jordan Edwards in Texas and Laquan McDonald in Chicago. Chauvin’s defense team pitted bodycam footage against the bystander videos employed by the prosecution, and lost.

What makes video so powerful? Why does it spur crowds to take to the streets and lawyers to showcase it in trials? It’s because seeing is believing. Shot from angles that differ from the officers’ point of view, bystander footage paints a fuller picture of what happened. Two people (on a jury, say, or watching a viral video online) might interpret a video two different ways. But they’ve generally been able to take for granted that the footage is a true, accurate record of something that really happened.

That might not be the case for much longer. It’s now possible to use artificial intelligence to generate highly realistic “deepfake” videos showing real people saying and doing things they never said or did, such as the recent viral TikTok videos depicting an ersatz Tom Cruise. You can also find realistic headshots of people who don’t exist at all on the creatively named website thispersondoesnotexist.com. (There’s even a cat version.)

While using deepfake technology to invent cats or impersonate movie stars might be cute, the technology has more sinister uses as well. In March, the Federal Bureau of Investigation issued a warning that malicious actors are “almost certain” to use “synthetic content” in disinformation campaigns against the American public and in criminal schemes to defraud U.S. businesses. The breakneck pace of deepfake technology’s development has prompted concerns that techniques for detecting such imagery will be unable to keep up. If so, the high-tech cat-and-mouse game between creators and debunkers might end in a stalemate at best. 

If it becomes impossible to reliably prove that a fake video isn’t real, a more feasible alternative might be to focus instead on proving that a real video isn’t fake. So-called “verified at capture” or “controlled-capture” technologies attach additional metadata to imagery at the moment it’s taken, to verify when and where the footage was recorded and reveal any attempt to tamper with the data. The goal of these technologies, which are still in their infancy, is to ensure that an image’s integrity will stand up to scrutiny. 
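To make the underlying idea concrete, here is a minimal sketch, in Python, of how a controlled-capture scheme might work. This is an illustration of the general approach, not the design of any actual product, and every name in it is hypothetical: the capturing device signs the image bytes together with the capture metadata using a device-held private key, so that any later change to the pixels or the metadata invalidates the signature.

```python
# Illustrative sketch of a "verified at capture" scheme (hypothetical design,
# not any real product). The device signs the image bytes plus capture metadata
# with a private key; in practice that key would live in secure hardware.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # assumed provisioned per device
public_key = device_key.public_key()

def sign_capture(image_bytes: bytes, timestamp: str, gps: str) -> dict:
    """Bundle the image hash with capture metadata and sign the bundle."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "gps": gps,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": device_key.sign(payload)}

def verify_capture(image_bytes: bytes, bundle: dict) -> bool:
    """Recompute the hash and check the signature; any edit to the pixels
    or the metadata makes verification fail."""
    if hashlib.sha256(image_bytes).hexdigest() != bundle["record"]["sha256"]:
        return False
    payload = json.dumps(bundle["record"], sort_keys=True).encode()
    try:
        public_key.verify(bundle["signature"], payload)
        return True
    except InvalidSignature:
        return False

frame = b"...raw video frame bytes..."
bundle = sign_capture(frame, "2021-04-20T12:00:00Z", "44.98,-93.26")
assert verify_capture(frame, bundle)              # untouched footage verifies
assert not verify_capture(frame + b"x", bundle)   # tampered footage does not
```

Anyone holding the device’s public key can then check the footage. The hard parts in practice are protecting the signing key on a consumer device and establishing trust in the key itself, which is part of why these systems remain in their infancy.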

Photo and video verification technology holds promise for confirming what’s real in the age of “fake news.” But it’s also cause for concern. In a society where guilty verdicts for police officers remain elusive despite ample video evidence, is even more technology the answer? Or will it simply reinforce existing inequities? 

The “ambitious goal” of adding verification technology to smartphone chipsets necessarily entails increasing the cost of production. Once such phones start to come onto the market, they will be more expensive than lower-end devices that lack this functionality. And not everyone will be able to afford them. Black Americans and poor Americans have lower rates of smartphone ownership than whites and high earners, and are more likely to own a “dumb” cell phone. (The same pattern holds true with regard to educational attainment and urban versus rural residence.) Unless and until verification technology is baked into even the most affordable phones, it risks replicating existing disparities in digital access. 

That has implications for police accountability, and, by extension, for Black lives. Primed by societal concerns about deepfakes and “fake news,” juries may start expecting high-tech proof that a video is real. That might lead them to doubt the veracity of bystander videos of police brutality if they were captured on lower-end phones that lack verification technology. Extrapolating from current trends in phone ownership, such bystanders are more likely to be members of marginalized racial and socioeconomic groups. Those are the very people who, as witnesses in court, face an uphill battle in being afforded credibility by juries. That bias, which reared its ugly head again in the Chauvin trial, has long outlived the 19th-century rules that explicitly barred Black (and other non-white) people from testifying for or against white people on the grounds that their race rendered them inherently unreliable witnesses. 

In short, skepticism of “unverified” phone videos may compound existing prejudices against the owners of those phones. That may matter less in situations where a diverse group of numerous eyewitnesses record a police brutality incident on a range of devices. But if there is only a single bystander witness to the scene, the kind of phone they own could prove significant.

The advent of mobile devices empowered Black Americans to force a national reckoning with police brutality. Ubiquitous, pocket-sized video recorders allow average bystanders to document the pandemic of police violence. And because seeing is believing, those videos make it harder for others to continue denying the problem exists. Even with the evidence thrust under their noses, juries keep acquitting police officers who kill Black people. Chauvin’s conviction this week represents an exception to recent history: between 2005 and 2019, of the 104 law enforcement officers charged with murder or manslaughter in connection with a shooting while on duty, 35 were convicted.

The fight against fake videos will complicate the fight for Black lives. Unless it is equally available to everyone, video verification technology may not help the movement for police accountability, and could even set it back. Technological guarantees of videos’ trustworthiness will make little difference if they are accessible only to the privileged, whose stories society already tends to believe. We might be able to tech our way out of the deepfakes threat, but we can’t tech our way out of America’s systemic racism. 

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

Read More

Riana Pfefferkorn
News

Q&A with Riana Pfefferkorn, Stanford Internet Observatory Research Scholar

Riana Pfefferkorn joined the Stanford Internet Observatory as a research scholar in December. She comes from Stanford’s Center for Internet and Society, where she was the Associate Director of Surveillance and Cybersecurity.
A member of the All India Student Federation teaches farmers about social media and how to use such tools as part of ongoing protests against the government. (Pradeep Gaur / SOPA Images / Sipa via Reuters Connect)
Blogs

New Intermediary Rules Jeopardize the Security of Indian Internet Users

-

End-to-end encrypted (E2EE) communications have been around for decades, but the deployment of default E2EE on billion-user platforms has new impacts for user privacy and safety. The deployment comes with benefits to both individuals and society, but it also creates new risks, as long-existing patterns of messenger abuse can now flourish in an environment that automated or human review cannot reach. New E2EE products raise the prospect of less understood risks by adding discoverability to encrypted platforms, allowing contact from strangers and increasing the risk of certain types of abuse. This workshop will place particular focus on the platform benefits and risks that impact civil society organizations, especially in the global South. Through a series of workshops and policy papers, the Stanford Internet Observatory is facilitating open and productive dialogue on this contentious topic to find common ground.

An important defining principle behind this workshop series is the explicit assumption that E2EE is here to stay. To that end, our workshops have set aside any discussion of exceptional access (aka backdoor) designs: that debate has raged among industry, academic cryptographers, and law enforcement for decades, and little progress has been made. We focus instead on less widely explored or implemented interventions that can reduce the harms of E2EE communication products.

Submissions for working papers and requests to attend will be accepted up to 10 days before the event. Accepted submitters will be invited to present or attend our upcoming workshops. 

SUBMIT HERE

Webinar

Workshops
-

Join the Cyber Policy Center and moderator Daniel Bateyko in conversation with Karen Nershi for How Strong Are International Standards in Practice? Evidence from Cryptocurrency Transactions.

The rise of cryptocurrency (decentralized digital currency) presents challenges for state regulators given its connection to illegal activity and its pseudonymous nature, which has allowed both individuals and businesses to circumvent national laws through regulatory arbitrage. Karen Nershi assesses the degree to which states have managed to regulate cryptocurrency exchanges, providing a detailed study of international efforts to impose common regulatory standards for a new technology. To do so, she introduces a dataset of cryptocurrency transactions collected during a two-month period in 2020 from exchanges in countries around the world and employs bunching estimation to compare levels of unusual activity below a threshold at which exchanges must screen customers for money laundering risk. She finds that exchanges in some, but not all, countries show substantial unusual activity below the threshold; these findings suggest that while countries have made progress toward regulating cryptocurrency exchanges, gaps in enforcement across countries allow for regulatory arbitrage.
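For readers unfamiliar with the method, here is a minimal sketch, in Python, of the core idea behind bunching estimation. Everything in it is an illustrative assumption (synthetic lognormal transactions, a hypothetical $10,000 screening threshold, a fifth-degree polynomial counterfactual), not Nershi’s data or code: fit a smooth counterfactual distribution from bins away from the threshold, then measure the excess mass just below it.

```python
# Minimal bunching-estimation sketch on synthetic data (illustrative only;
# the threshold, distribution, and polynomial degree are assumptions).
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 10_000  # hypothetical customer-screening threshold

# Background transaction amounts, plus extra mass just below the threshold
# mimicking users who split transactions to avoid screening.
amounts = rng.lognormal(mean=8.5, sigma=1.0, size=100_000)
amounts = amounts[(amounts > 1_000) & (amounts < 20_000)]
bunchers = rng.uniform(9_000, 10_000, size=3_000)
amounts = np.concatenate([amounts, bunchers])

# Bin the empirical distribution.
edges = np.arange(1_000, 20_001, 250)
counts, _ = np.histogram(amounts, bins=edges)
centers = (edges[:-1] + edges[1:]) / 2

# Fit a polynomial counterfactual, excluding a window around the threshold
# so the fit is not contaminated by the bunching itself.
window = (centers > 8_500) & (centers < 11_000)
coef = np.polyfit(centers[~window], counts[~window], deg=5)
counterfactual = np.polyval(coef, centers)

# Excess mass just below the threshold = observed minus counterfactual counts.
below = (centers > 8_500) & (centers < THRESHOLD)
excess = counts[below].sum() - counterfactual[below].sum()
print(f"Excess transactions just below the threshold: {excess:,.0f}")
```

A substantial positive excess, relative to sampling noise, is the signature of strategic behavior around the threshold; the talk compares such unusual activity across countries.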

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Karen Nershi is a Postdoctoral Fellow at Stanford University's Stanford Internet Observatory and the Center for International Security and Cooperation (CISAC). In the summer of 2021, she completed her Ph.D. in political science at the University of Pennsylvania, specializing in the fields of international relations and comparative politics. Through an empirical lens, her research examines questions of international cooperation and regulation within international political economy, including challenges emerging from the adoption of decentralized digital currency and other new technologies.

Specific topics Dr. Nershi explores in her research include ransomware, cross-national regulation of the cryptocurrency sector, and international cooperation around anti-money laundering enforcement. Her research has been supported by the University of Pennsylvania GAPSA Provost Fellowship for Innovation and the Christopher H. Browne Center for International Politics. 

Before beginning her doctorate, Karen Nershi earned a B.A. in International Studies with honors at the University of Alabama. She lived and studied Arabic in Amman, Jordan, and Meknes, Morocco, as a Foreign Language and Area Studies Fellow and a Critical Language Scholarship recipient. She also lived and studied in Mannheim, Germany, in addition to interning at the U.S. Consulate General in Frankfurt, Germany.

Dan Bateyko is the Special Projects Manager at the Stanford Internet Observatory.

Dan worked previously as a Research Coordinator for The Center on Privacy & Technology at Georgetown Law, where he investigated Immigration and Customs Enforcement surveillance practices, co-authoring American Dragnet: Data-Driven Deportation in the 21st Century. He has also worked at the Berkman Klein Center for Internet & Society and the Dangerous Speech Project, and served as a research assistant for Amanda Levendowski, whom he assisted with legal scholarship on facial surveillance.

In 2016, he received a Thomas J. Watson Fellowship. He spent his fellowship year talking with people about digital surveillance and Internet infrastructure in South Korea, China, Malaysia, Germany, Ghana, Russia, and Iceland. His writing has appeared in Georgetown Tech Law Review, Columbia Journalism Review, Dazed Magazine, The Internet Health Report, Council on Foreign Relations' Net Politics, and Global Voices. He is a 2022 Internet Law & Policy Foundry Fellow.

Dan received his Master of Law & Technology from Georgetown University Law Center (where he received the IAPP Westin Scholar Book Award for excellence in Privacy Law), and his B.A. from Middlebury College.

Karen Nershi
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Alex Stamos in conversation with Ronald E. Robertson for Engagement Outweighs Exposure to Partisan and Unreliable News within Google Search.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

If popular online platforms systematically expose their users to partisan and unreliable news, they could potentially contribute to societal issues like rising political polarization. This concern is central to the echo chamber and filter bubble debates, which critique the roles that user choice and algorithmic curation play in guiding users to different online information sources. These roles can be measured in terms of exposure, the URLs seen while using an online platform, and engagement, the URLs selected while on that platform or browsing the web more generally. However, due to the challenges of obtaining ecologically valid exposure data (what real users saw during their regular platform use), studies in this vein often only examine engagement data, or estimate exposure via simulated behavior or inference. Despite their centrality to the contemporary information ecosystem, few such studies have focused on web search, and even fewer have examined both exposure and engagement on any platform. To address these gaps, we conducted a two-wave study pairing surveys with ecologically valid measures of exposure and engagement on Google Search during the 2018 and 2020 US elections. We found that participants' partisan identification had a small and inconsistent relationship with the amount of partisan and unreliable news they were exposed to on Google Search, a more consistent relationship with the search results they chose to follow, and the most consistent relationship with their overall engagement. That is, compared to the news sources our participants were exposed to on Google Search, we found more identity-congruent and unreliable news sources in their engagement choices, both within Google Search and overall. These results suggest that exposure and engagement with partisan or unreliable news on Google Search are not primarily driven by algorithmic curation, but by users' own choices.

Dr. Ronald E Robertson received his Ph.D. in Network Science from Northeastern University in 2021. He was advised by Christo Wilson, a computer scientist, and David Lazer, a political scientist. For his research, Dr. Robertson uses computational tools, behavioral experiments, and qualitative user studies to measure user activity, algorithmic personalization, and choice architecture in online platforms. By rooting his questions in findings and frameworks from the social, behavioral, and network sciences, his goal is to foster a deeper and more widespread understanding of how humans and algorithms interact in digital spaces. Prior to Northeastern, Dr. Robertson obtained a BA in Psychology from the University of California San Diego and worked with research psychologist Robert Epstein at the American Institute for Behavioral Research and Technology.

Alex Stamos
Ronald E. Robertson Research Scientist, Cyber Policy Center
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Andrew Grotto in conversation with L. Jean Camp for Create a Market for Safe, Secure Software.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Today the security market, particularly in embedded software and Internet of Things (IoT) devices, is a lemons market: buyers simply cannot distinguish between secure and insecure products. To enable the market for secure, high-quality products to thrive, buyers need to have some knowledge of the contents of these digital products. Once purchased, ensuring a product or software package remains safe requires knowing whether it includes publicly disclosed vulnerabilities. Again, this requires knowledge of the contents. When consumers do not know the contents of their digital products, they cannot know if they are at risk and need to take action.

The Software Bill of Materials (SBOM) is a proposal that was identified as a critical instrument for meeting these challenges and securing software supply chains in the Biden Administration's Executive Order on Improving the Nation's Cybersecurity (EO 14028). In this presentation, Camp will introduce SBOMs, provide examples, and explain the components that are needed in the marketplace for this initiative to meet its potential.
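As a toy illustration of why a machine-readable list of contents matters, consider the sketch below in Python. The product and component data are made up for the example (real SBOMs use standardized formats such as SPDX or CycloneDX, and real checks query a vulnerability database such as the NVD), but the logic is the point: with an SBOM in hand, a buyer can mechanically check a product against publicly disclosed vulnerabilities.

```python
# Toy illustration of an SBOM-driven vulnerability check (hypothetical data;
# real SBOMs use standard formats such as SPDX or CycloneDX, and real checks
# query a vulnerability database such as the NVD).
sbom = {
    "product": "example-iot-camera-firmware",
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "busybox", "version": "1.33.0"},
    ],
}

# Stand-in vulnerability feed mapping (component, version) to an advisory ID.
known_vulnerabilities = {
    ("openssl", "1.1.1k"): "CVE-2021-3711",
}

for component in sbom["components"]:
    key = (component["name"], component["version"])
    if key in known_vulnerabilities:
        print(f"{component['name']} {component['version']} is affected by "
              f"{known_vulnerabilities[key]}")
```

Without the contents list, this check has nothing to iterate over, which is the lemons-market problem in miniature.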

Jean Camp is a Professor at Indiana University with appointments in Informatics and Computer Science. She is a Fellow of the AAAS (2017), the IEEE (2018), and the ACM (2021). She joined Indiana after eight years at Harvard's Kennedy School. A year after earning her doctorate from Carnegie Mellon, she served as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station after a double major in electrical engineering and mathematics, followed by an MSEE in optoelectronics at the University of North Carolina at Charlotte.

L. Jean Camp Professor at Indiana University
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Daphne Keller in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

The Digital Services Act (DSA) is a landmark new piece of European Union legislation addressing illegal and harmful content online. Its main goals are to create a safer digital space and to enhance the protection of fundamental rights online. In this talk, Aleksandra Kuczerawy will discuss the core elements of the DSA, such as its layered system of due diligence obligations, content moderation rules, and enforcement framework, while providing the underlying policy context for a US audience.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She has been a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is an assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, which was responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Daphne Keller
Aleksandra Kuczerawy Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-

Please note: the event is now sold out, though a waitlist is available through the registration link above.

The Transatlantic Summit is where the worlds of cutting-edge research, industry, and policy come together to find answers on geopolitics, digital platforms, emerging tech, and digital sovereignty. Whether you're an industry leader, policy maker, or student, join the start of a new Transatlantic movement seeking synergies between technology and society and become part of the international conversation going forward.

About:

  • Creates a vibrant forum for dialogue between the US and Europe in Silicon Valley about the impact of digital technologies on business and society
  • Builds a strong network for German-American collaboration in digital innovation, business, and geopolitics
  • Excites, connects, and inspires: participants meet the movers and shakers of the digital future from business, academia, and politics

Topics:

  1. Digital Sovereignty
  2. Geopolitics of Emerging Technologies
  3. Digital Platforms and Misinformation

The conference, which is jointly organized by the German Federal Foreign Office, the Representatives of German Business (GAAC West), the German Consulate General of San Francisco, the Stanford German Student Association, and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, addresses current discussions about digital technologies, business, and society. Join us and be inspired by our series of speakers and networking sessions that bring together leaders, politicians, students, and changemakers.

Digital Sovereignty and Multilateral Collaboration

Digital sovereignty vs. cooperation: What should the future of the transatlantic partnership on digital policies look like, and how do we reach it?

Technology increasingly sits at the epicenter of geopolitics. In recent years, the notion of technological or digital sovereignty has emerged in Europe as a means of promoting European leadership and strategic autonomy in the digital field. On the other side of the Atlantic, the United States finds itself in an increasingly fierce race with China for global technology dominance. Against this backdrop, cooperation between the European Union and the United States may be more critical than ever. This raises important questions: What does Europe's move toward digital sovereignty and self-determination mean for the transatlantic partnership? And how should the US and EU balance sovereignty and cooperation in digital and technology policy? Our panel will explore tensions between sovereignty and cooperation and what the future of transatlantic policy may look like on issues from data protection to semiconductors, in light of the rising technological influence and ambitions of China.

John Zysman, Professor Emeritus, UC Berkeley
Maryam Cope, Head of Government Affairs, ASML U.S.
Hannah Bracken, Policy Advisor for Privacy Shield, U.S. Department of Commerce
Adriana Groh, Co-Founder, Sovereign Tech Fund

Agenda & Speakers

Transatlantic Summit: Sovereignty vs. Cooperation in the Digital Era
Thursday, Nov. 17th, 2022, 9:00am – 6:00pm PT
Vidalakis Dining Hall, Schwab Residential Center, Stanford, CA 94305

Conferences
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation only) and virtual attendance (open to the public) are available; registration is required.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She has been a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is an assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, which was responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Aleksandra Kuczerawy Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Chenyan Jia for The Evolving Role of AI in Political News Consumption: The Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation only) and virtual attendance (open to the public) are available; registration is required.

Chenyan Jia (Ph.D., The University of Texas at Austin) is a postdoctoral scholar in the Program on Democracy and the Internet (PDI) at Stanford University. In fall 2023, she will join Northeastern University as an Assistant Professor in the School of Journalism in the College of Arts, Media, and Design, with a joint appointment in the Khoury College of Computer Sciences. She worked as a research assistant in UT's Human–AI Interaction Lab.

Her research interests lie at the intersection of communication and human-computer interaction. Her work has examined (a) the influence of emerging media technologies such as automated journalism and misinformation detection algorithms on people’s political attitudes and news consumption behaviors; (b) political bias in news coverage, examined through NLP techniques; and (c) how to leverage AI technologies to reduce bias and promote democracy.

Her research has appeared in mass communication journals and top-tier AI and HCI venues including Human-Computer Interaction Journal (CSCW), Journal of Artificial Intelligence, International Journal of Communication, Media and Communication, ICLR, ICWSM, EMNLP, ACL, and AAAI. Her research has been awarded the Best Paper Award at AAAI 21. She was the recipient of the Harrington Dissertation Fellowship and the Dallas Morning News Graduate Fellowship for Journalism Innovation.

YOUTUBE RECORDING

Chenyan Jia Postdoctoral Scholar at the Program on Democracy and the Internet (PDI) 
Seminars
-
meicen sun headshot on blue background advertising seminar

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Meicen Sun for Internet Control as a Winning Strategy: How the Duality of Information Consolidates Autocratic Rule in the Digital Age.

This paper advances a new theory of how the Internet, as a digital technology, helps consolidate autocratic rule. Exploiting a major Internet control shock in China in 2014, the paper finds that Chinese data-intensive firms have gained from Internet control a 10% increase in revenue over other Chinese firms, and about 1-2% over their U.S. competitors. Meanwhile, the same Internet control has caused a reduction of up to 25% in research quality for Chinese scholars, conditional on the knowledge intensity of their discipline. This occurred specifically via reduced access to cutting-edge knowledge from the outside world. These findings suggest that while politically motivated restrictions on information flows take a toll on the country’s long-term capacity for innovation, they lend a short-term benefit to its data-intensive sectors. Conventional wisdom about the inherent limits of information control by autocracies overlooks this crucial protectionist benefit, which aids autocratic power consolidation in the digital age.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person and virtual attendance are available; registration is required.

Meicen Sun is a postdoctoral scholar with the Program on Democracy and the Internet at Stanford University. Her research examines the political economy of information and the effect of information policy on the future of innovation and state power. Her writings have appeared in academic and policy outlets including Foreign Policy Analysis, Harvard Business Review, the World Economic Forum, the Asian Development Bank Institute, and The Diplomat, among others. She previously conducted research at the Center for Strategic and International Studies and at Georgetown University in Washington, DC, and at the UN Regional Centre for Peace and Disarmament in Africa. Bilingual in English and Chinese, she has also written stories, plays, and music, and staged many of her works, in both languages, in China, Singapore, and the U.S. Sun has served as a Fellow on the World Economic Forum's Global Future Council on China and as a Research Affiliate with the MIT Initiative on the Digital Economy. She holds an A.B. with Honors from Princeton University, an A.M. with a Certificate in Law from the University of Pennsylvania, and a Ph.D. from the Massachusetts Institute of Technology.

Meicen Sun Postdoctoral scholar with the Program on Democracy and the Internet
Seminars