Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues, including nuclear nonproliferation and conflicts such as that between North and South Korea. But their research also examines new and emerging areas that transcend traditional borders – the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

-

COVID-19 is having a profound impact on our online systems – exposing both the essential role they can and do play in our modern society and the risks and vulnerabilities they represent. Substantial research is emerging on this topic, and its implications will have important consequences for both medium-term (e.g., the 2020 elections) and long-term cyber policies.

Welcome Remarks: Mike McFaul

  • Introduction to CPC and the center’s work on COVID from moderator Kelly Born
  • Alex Stamos of the Internet Observatory will discuss the Observatory’s work examining shifting narratives about the coronavirus in Chinese and Russian state media, early insights into COVID misinformation in other countries (e.g., Nigeria), and how tech companies are responding, as well as how Zoom and other platforms have been working to adapt policies and practices to meet growing demands and risks.
  • Nate Persily at PDI will discuss the challenges of running elections in the current environment, including necessary changes to state policies and practices.
  • Eileen Donahoe at GDPI will discuss geopolitical threats to the international human rights law framework stemming from ineffective responses to COVID by democratic governments, citing risks to five specific civil and political rights and recommending that democratic governments apply international human rights process principles in the COVID-19 context.
  • Marietje Schaake will discuss human-rights challenges (to privacy, freedom of association, and freedom of expression) that have arisen with various applications of AI in the COVID-19 context – e.g., contact tracing and content moderation – as well as emerging criteria for policymakers to consider when deploying tracing and related technologies.
  • Andy Grotto from GTG will discuss recent work on How to Report Responsibly on Hacks and Disinformation, and the implications for mainstream media’s coverage of COVID.
  • 30 min for Q&A
Seminars

A Research Agenda for Cyber Risk and Cyber Insurance | June 2019

Lead Author: Gregory Falco, Program on Geopolitics, Technology, and Governance at the Cyber Policy Center

Presented at the 2019 Workshop on the Economics of Information Security (Boston, June 3-4, 2019)

Cyber risk as a research topic has attracted considerable academic, industry and government attention over the past 15 years. Unfortunately, research progress has been modest and has not been sufficient to answer the “call to action” in many prestigious committee and agency reports. To date, industry and academic research on cyber risk in all its complexity has been piecemeal and uncoordinated – which is typical of emergent, pre-paradigmatic fields. Further complicating matters is the multidisciplinary character of cyber risk. To significantly advance the pace of research progress, a group of scholars, industry practitioners and policymakers from around the world present a research agenda for cyber risk and cyber insurance, which accounts for the variety of fields relevant to the problem space. We propose a cyber risk unified concept model that identifies where certain disciplines of study can add value. The concept model can also be used to identify collaboration opportunities across the major research questions. In this agenda, we unpack the major research questions into manageable projects and tactical questions that need to be addressed.

DOWNLOAD (PDF)

-

On May 20th, please join us for Perspectives on Science Communication, Misinformation, and the COVID-19 Infodemic, featuring University of Washington scholars Kate Starbird, Jevin West and Ryan Calo, in conversation with Cyber Policy Center Director Kelly Born, as they discuss a new project exploring how scientific findings and science credentials are mobilized in the spread of misinformation.

Kate Starbird and Jevin West will present emerging research into how scientific findings and science credentials are mobilized within the spread of false and misleading information about COVID-19. Ryan Calo will explore proposals to address COVID-19 through information technology—the subject of a recent Senate Commerce hearing at which he testified—with particular attention to the ways contact tracing apps could prove a vector for misinformation and disinformation. 


May 20, 10am-11am (PST)
Join via Zoom 

Kate Starbird is an Associate Professor in the Department of Human Centered Design & Engineering (HCDE) at the University of Washington and Director of the Emerging Capacities of Mass Participation (emCOMP) Laboratory. She is also adjunct faculty in the Paul G. Allen School of Computer Science & Engineering and the Information School, and a data science fellow at the eScience Institute.

Kate's research is situated within human-computer interaction (HCI) and the emerging field of crisis informatics — the study of how information-communication technologies (ICTs) are used during crisis events. Her research examines how people use social media to seek, share, and make sense of information after natural disasters (such as earthquakes and hurricanes) and man-made disasters (such as acts of terrorism and mass shooting events). More recently, her work has shifted to focus on the spread of disinformation in this context. 

Ryan Calo is the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law. In addition to co-founding the UW Center for an Informed Public, he is a faculty co-director (with Batya Friedman and Tadayoshi Kohno) of the UW Tech Policy Lab, a unique, interdisciplinary research unit spanning the School of Law, the Information School, and the Paul G. Allen School of Computer Science and Engineering, where Calo also holds courtesy appointments. Calo is widely published in the area of law and emerging technology.

 


Jevin West is an Associate Professor in the Information School at the University of Washington. He is the co-founder of the DataLab and the new Center for an Informed Public at UW. He holds an adjunct faculty position in the Paul G. Allen School of Computer Science & Engineering and is a Data Science Fellow at the eScience Institute. His research and teaching focus on misinformation in and about science. He develops methods for mining the scientific literature in order to study the origins of disciplines, the social and economic biases that drive these disciplines, and the impact the current publication system has on the health of science.

 

Kelly Born is the Executive Director of Stanford’s Cyber Policy Center. The center’s research and teaching focus on the governance of digital technology at the intersection of security, geopolitics and democracy. Born collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, and outreach to key decision-makers globally. Prior to joining Stanford, Born helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Kate Starbird
Ryan Calo
Jevin West
Seminars
-

A recording of this event is available on YouTube.

National AI Strategies and Human Rights: New Urgency in the Era of COVID-19 takes place Wednesday, May 6th, at 10am PST with Eileen Donahoe, the Executive Director of the Global Digital Policy Incubator (GDPi) at Stanford's Cyber Policy Center, and Megan Metzger, Associate Director for Research, also at GDPi. Joining them will be Mark Latonero, Senior Researcher at Data & Society, Richard Wingfield, from Global Partners Digital, and Gallit Dobner, Director of the Centre for International Digital Policy at Global Affairs Canada. The session will be moderated by Kelly Born, Executive Director of the Cyber Policy Center.

The seminar will focus on the recently published report, National Artificial Intelligence Strategies and Human Rights: A Review, produced by the Global Digital Policy Incubator at Stanford and Global Partners Digital, and will also provide an opportunity to look at how the COVID-19 crisis is impacting human rights and digital technology work more generally.

We will also be jointly hosting a webinar with the Freeman Spogli Institute for International Studies on May 8th at 1pm PST, with experts from around the center and institute discussing emerging research on COVID-19 and its implications for future cyber policies, as well as the upcoming elections. More information on the May 8th event can be found here.

May 6, 10am-11am (PST)  
Join via Zoom


Eileen Donahoe is the Executive Director of the Global Digital Policy Incubator (GDPi) at Stanford University’s FSI/Cyber Policy Center. GDPi is a global multi-stakeholder collaboration hub for the development of policies that reinforce human rights and democratic values in digitized society. Her current areas of research include AI and human rights, combating digital disinformation, and the governance of digital platforms. She served in the Obama administration as the first US Ambassador to the UN Human Rights Council in Geneva, at a time of significant institutional reform and innovation. After leaving government, she joined Human Rights Watch as Director of Global Affairs, where she represented the organization worldwide on human rights foreign policy, with special emphasis on digital rights, cybersecurity and internet governance. Earlier in her career, she was a technology litigator at Fenwick & West in Silicon Valley. Eileen serves on the National Endowment for Democracy Board of Directors; the Transatlantic Commission on Election Integrity; the World Economic Forum Future Council on the Digital Economy; the University of Essex Advisory Board on Human Rights, Big Data and Technology; the NDI Designing for Democracy Advisory Board; the Freedom Online Coalition Advisory Network; and the Dartmouth College Board of Trustees. Degrees: BA, Dartmouth; J.D., Stanford Law School; MA, East Asian Studies, Stanford; M.T.S., Harvard; and Ph.D., Ethics & Social Theory, GTU Cooperative Program with UC Berkeley. She is a member of the Council on Foreign Relations.


Megan Metzger is a Research Scholar and Associate Director for Research at the Global Digital Policy Incubator (GDPi). Her research focuses on how changes in technology affect how individuals and states access and use information, and how this in turn affects protest and other forms of political behavior. Her dissertation focused primarily on the role of social media during the EuroMaidan protests in Ukraine. She has also worked on projects about the Gezi Park protests in Turkey and has ongoing projects exploring Russian state strategies of information online. In addition to her academic background, Megan has spent a number of years studying and working in the post-communist world. Her scholarly work has been published in The Journal of Comparative Economics and Slavic Review. Her analysis has also been published in the Monkey Cage blog at The Washington Post, The Huffington Post and Al Jazeera English.


Mark Latonero  
Mark Latonero is a Senior Researcher at Data & Society focused on AI and human rights and a Fellow at Harvard Kennedy School’s Carr Center for Human Rights Policy. Previously he was a research director and research professor at USC where he led the Technology and Human Trafficking Initiative. He has also served as innovation consultant for the UN Office of the High Commissioner for Human Rights. Dr. Latonero works on the social and policy implications of emerging technology and examines the benefits, risks, and harms of digital technologies, particularly in human rights and humanitarian contexts. He has published a number of reports on the impact of data-centric and automated technologies in forced migration, refugee identity, and crisis response.  

Richard Wingfield  
Richard Wingfield provides legal and policy expertise across Global Partners Digital’s portfolio of programs. As Head of Legal, he provides legal and policy advice within GPD and to its partner organizations on human rights as they relate to the internet and digital policy, and develops legal analyses, policy briefings and other resources for stakeholders. Before joining GPD, Richard led policy development and advocacy at the Equal Rights Trust, an international human rights organization working to combat discrimination and inequality. He has also undertaken research for the Bar Human Rights Committee, the Commonwealth Lawyers Association and the Netherlands Institute of Human Rights, and provided support during the preparatory work for the Yogyakarta Principles.
Gallit Dobner  
Gallit Dobner is Director of the Centre for International Digital Policy at Global Affairs Canada, with responsibility for the G7 Rapid Response Mechanism to counter foreign threats to democracy as well as broader issues at the intersection of foreign policy and technology. She formerly served as Political Counsellor in The Hague, where she was responsible for bilateral relations and the international courts and tribunals (2015-19), and in Algiers (2010-12). Gallit has also served as Deputy Director at Global Affairs Canada for various international security files, including counterterrorism, the Middle East, and Afghanistan. Prior to this, Gallit was a Middle East analyst at Canada’s Privy Council Office. Gallit holds a master’s degree in Political Science from McGill University and Sciences Po.

Kelly Born  
Kelly Born is the Executive Director of Stanford’s Cyber Policy Center. The center’s research and teaching focus on the governance of digital technology at the intersection of security, geopolitics and democracy. Born collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, and outreach to key decision-makers globally. Prior to joining Stanford, Born helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Online, via Zoom

Eileen Donahoe Stanford University
Megan Metzger Stanford University
-

The Stanford Cyber Policy Center continues its online Zoom series: Digital Technology and Democracy, Security & Geopolitics in an Age of Coronavirus. These webinars will take place every other Wednesday at 10am PST. 

The next event, Improving Journalistic Coverage in the Digital Age: From Covid-19 to the 2020 Elections, will take place Wednesday, April 22, at 10am PST with Andrew Grotto, from the Cyber Policy Center's Program on Geopolitics, Technology, and Governance, Janine Zacharia, from Stanford's Department of Communication, and Joan Donovan, from Harvard University’s Shorenstein Center on Media, Politics and Public Policy at the Kennedy School of Government, in conversation with Kelly Born, Executive Director of the Cyber Policy Center.

Grotto and Zacharia will be discussing their recent report How to Report Responsibly on Hacks and Disinformation. Recognizing that reporters are targeted by foreign and domestic actors, especially during an election year, the report provides recommendations and actionable guidance, including a playbook and a repeatable, enterprise-wide process for implementation. Donovan will discuss health misinformation, COVID-19, and how this relates to disinformation around the 2020 elections, the US census and beyond.

Join us on April 22nd for the next talk in this enlightening series. You can also watch our April 8th seminar, Digital Disinformation and Health: From Vaccines to the Coronavirus.  

April 22, 10am-11am (PST) 
Join via Zoom link 
Janine Zacharia 
Janine Zacharia is the Carlos Kelly McClatchy Lecturer in Stanford’s Department of Communication. In addition to teaching journalism courses at Stanford, she researches and writes on the intersection between technology and national security, media trends and foreign policy. Earlier in her career, she reported extensively on the Middle East and U.S. foreign policy including stints as Jerusalem Bureau Chief for the Washington Post, State Department Correspondent for Bloomberg News, Washington Bureau Chief for the Jerusalem Post, and Jerusalem Correspondent for Reuters. 


Andrew Grotto 
Andrew Grotto is director of the Program on Geopolitics, Technology and Governance and William J. Perry International Security Fellow at Stanford’s Cyber Policy Center and teaches the gateway course for graduate students specializing in cyber policy in Stanford’s Ford Dorsey Master’s in International Policy program. He is also a Visiting Fellow at the Hoover Institution. He served as Senior Director for Cyber Policy on the National Security Council in both the Obama and the Trump White House. 



Dr. Joan Donovan

Dr. Joan Donovan is Director of the Technology and Social Change (TaSC) Research Project at the Shorenstein Center. She leads the field in examining internet and technology studies, online extremism, media manipulation, and disinformation campaigns. Her research and teaching interests focus on media manipulation, the effects of disinformation campaigns, and adversarial media movements. This fall, she will be teaching a graduate-level course on Media Manipulation and Disinformation Campaigns (DPI-622), with a focus on how social movements, political parties, governments, corporations, and other networked groups engage in active efforts to shape media narratives and disrupt social institutions.

Kelly Born 
Kelly Born is the Executive Director of Stanford’s Cyber Policy Center, where she collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, policy workshops and executive education. Prior to joining Stanford, she helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Andrew Grotto Director of the Program on Geopolitics, Technology and Governance Stanford University
Janine Zacharia Carlos Kelly McClatchy Lecturer in Stanford’s Department of Communication Stanford University
Joan Donovan Director of the Technology and Social Change (TaSC) Research Project Harvard University
-


The Stanford Cyber Policy Center continues its online Zoom series: Digital Technology and Democracy, Security & Geopolitics in an Age of Coronavirus. These webinars will take place every other Wednesday at 10am PST. 

The next event, Digital Disinformation and Health: From Vaccines to the Coronavirus, will take place Wednesday, April 8, at 10am PST with Kelly Born, Executive Director of the Cyber Policy Center, in conversation with Professor David Broniatowski, from George Washington University, Professor Kathleen M. Carley, from Carnegie Mellon University, and Professor Jacob N. Shapiro, from Princeton University. 

In particular, Professor Broniatowski will discuss the results of new studies regarding bots and trolls in the vaccine debate, as well as what makes messages go viral from the standpoint of Fuzzy Trace Theory. Professor Carley will explore how information moves from country to country, with a look at both the differences in who is broadcasting certain types of disinformation and the role bots play in the spread. Professor Shapiro will speak to trends and themes we are seeing in coronavirus disinformation narratives and in news reporting on COVID-related misinformation.


David Broniatowski 
Professor David Broniatowski conducts research in decision-making under risk, group decision-making, system architecture, and behavioral epidemiology. This research program draws upon a wide range of techniques including formal mathematical modeling, experimental design, automated text analysis and natural language processing, social and technical network analysis, and big data. Current projects include a text network analysis of transcripts from the US Food and Drug Administration's Circulatory Systems Advisory Panel meetings, a mathematical formalization of Fuzzy Trace Theory (a leading theory of decision-making under risk), derivation of metrics for flexibility and controllability of complex engineered socio-technical systems, and using Twitter data to conduct surveillance of influenza infection and the resulting social response.
Professor Kathleen M. Carley 
Professor Kathleen M. Carley is Director of the Center for Informed Democracy and Social-cybersecurity (IDeaS) and the director of the center for Computational Analysis of Social and Organizational Systems (CASOS). She specializes in network science, agent-based modeling, and text-mining within a complex socio-technical system, organizational and social theory framework. In her work, she examines how cognitive, social and institutional factors come together to impact individual, organizational and societal outcomes. Using this lens she has addressed a number of policy issues including counter-terrorism, human and narcotic trafficking, cyber and nuclear threat, organizational resilience and design, natural disaster preparedness, cyber threat in social media, and leadership.   
Professor Jacob N. Shapiro 
Professor Jacob N. Shapiro is Professor of Politics and International Affairs at Princeton University and directs the Empirical Studies of Conflict Project, a multi-university consortium that compiles and analyzes micro-level data on politically motivated violence in countries around the world. His research covers conflict, economic development, and security policy. He is author of The Terrorist’s Dilemma: Managing Violent Covert Organizations and co-author of Small Wars, Big Data: The Information Revolution in Modern Conflict. His research has been published in a broad range of academic and policy journals as well as a number of edited volumes. He has conducted field research and large-scale policy evaluations in Afghanistan, Colombia, India, and Pakistan.

Kelly Born is the Executive Director of Stanford’s Cyber Policy Center, where she collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, policy workshops and executive education. Prior to joining Stanford, she helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Online, via Zoom

Professor David Broniatowski George Washington University
Professor Kathleen M. Carley Carnegie Mellon University
Professor Jacob N. Shapiro Princeton University
Seminars

The run-up to the 2016 U.S. presidential election illustrated how vulnerable our most venerated journalistic outlets are to a new kind of information warfare. Reporters are targets of foreign and domestic actors who want to harm our democracy. To cope with this threat, especially in an election year, news organizations need to prepare for another wave of false, misleading, and hacked information. Often, the information will be newsworthy. Expecting reporters to refrain from covering news goes against core principles of American journalism and the practical business drivers that shape the intensely competitive media marketplace. In these cases, the question is not whether to report but how to do so most responsibly. Our goal is to give journalists actionable guidance.

Included in the report is the Newsroom Playbook for Propaganda Reporting and a helpful Implementing the Playbook flowchart. 

Read More > 


David Thiel was previously the Chief Technologist of the Stanford Internet Observatory. He performs research in the areas of coordinated disinformation campaigns, the dynamics of various "alt" platforms, decentralized social media, and issues affecting online child safety. He is also a managing editor of the Journal of Online Trust and Safety.

Prior to Stanford, David worked at Facebook, primarily focusing on security and safety for Facebook Connectivity, a collection of projects aimed at providing faster and less expensive internet connectivity to unconnected or underconnected communities. Projects included the Terragraph mesh networking system, the Magma open source mobile network platform, Express Wi-Fi and Facebook Lite.

Before Facebook, David was a VP at iSEC Partners and later NCC Group, managing the North American security consulting and research team, as well as producing original security research, coordinating vulnerability disclosure and performing security assessments and penetration testing for companies across a wide range of business sectors.

David has spoken at various industry conferences, including Black Hat, DEFCON, PacSec and the Crimes Against Children Conference. He is also the author of iOS Application Security (No Starch Press) and coauthor of Mobile Application Security (McGraw-Hill).

Former Chief Technologist, Stanford Internet Observatory
Big Data Architect
-

The research on misinformation generally and fake news specifically is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and share fake news, these behaviors might not signal belief or effect. There is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or disbelieve true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.

In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact-check five “fresh” articles (that have appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm for selecting the articles for fact checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact checking, including examining the tradeoffs between crafting a more accurate crowd vs. a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.

About the Speaker:

Joshua Tucker
Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, a co-Director of the new NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals. Originally a scholar of post-communist politics, he has more recently studied social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. His research has been funded by over $8 million in grants in the past three years, including a 2019 Knight Foundation “Research on the Future of an Informed Society” grant. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming edited volume Social Media and Democracy (Cambridge University Press, 2020). 


A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.

Stephen Stedman, a Senior Fellow at the Freeman Spogli Institute for International Studies (FSI) at Stanford, is the director of the Kofi Annan Commission on Elections and Democracy in the Digital Age, an initiative of the Kofi Annan Foundation. The Commission is focused on studying the effects of social media on electoral integrity and the measures needed to safeguard the democratic process.  

At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges faced by democracy today and makes a number of recommendations as to how best to tackle the threats posed by social media to free and fair elections. On Tuesday, February 25, professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12-1:15 PM. To learn more and to RSVP, visit the event page.

Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?

Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe that the problem is a diffuse one based on excess disinformation and a decline in the ability of citizens to agree on facts. We too would like the quality of deliberation in our democracy to improve, and we worry about how social media might degrade democratic debate, but if we are talking about existential threats to democracy, the problem is that digital technologies can be weaponized to undermine the integrity of elections.

When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust in fellow citizens, government institutions and traditional media; the decline of political parties; the degradation of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles in which we only interact with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.

What we found, instead, is a much more complex problem. Many of the pathologies that social media is said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.

Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media that digital technologies can intensify and amplify polarization.

Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate them or disenfranchise them. Winners have the expectation that they hold power temporarily, and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turn elections from being a one-shot game into a repeated game with a long shadow of the future.

In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can cause parties and followers to believe that the other side doesn’t believe in the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and use violence increases dramatically. 

Q: As far as political advertising, the report encourages platforms to provide more transparency about who is funding that advertising. But it also asks that platforms require candidates to make a pledge that they will avoid deceptive campaign practices when purchasing ads. It also goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labelled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?

SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. Many employees of the platforms are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to Mark Zuckerberg’s decision that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country when we should demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook puts itself in a contradictory position: it has hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India, but it is going to allow politicians and parties in the United States to buy ads that can use disinformation and hate speech.

Our recommendation gives Facebook an option that allows political advertising in a way that need not inflame polarization and destroy mutual security among candidates and followers: 1.) Require that candidates, groups or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; 2.) Then use the pledge’s standards to determine whether an ad meets them or not. If an ad uses deep fakes, if an ad grotesquely distorts the facts, if an ad out-and-out lies about what an opponent said or did, then Facebook would not accept the ad. Facebook can either help us raise our electoral politics out of the sewer or it can ensure that our politics drowns in it.

It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.

Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy? 

SS: Our report recommends digital literacy programs as a means to help build democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he puts forward warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad; and digital literacy in a highly polarized context raises the question of who will decide what counts as good and bad media. We say in passing that in addition to digital literacy, we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “Why am I predisposed to want to believe this piece of information?”

Q: Obviously access to data is critical for researchers and commissioners to do their work, analysis and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?

SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy over whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front pages of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube shares its data so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook to provide certified researchers with privacy-protected data through Social Science One. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.

Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?

SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the United States and Brexit elections of 2016, its predecessor company had manipulated elections in Asia, Africa and the Caribbean. There is now a transnational industry in election manipulation.

What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence. 

On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South. 

About the Commission on Elections and Democracy in the Digital Age

As one of his last major initiatives, Kofi Annan convened the Commission on Elections and Democracy in the Digital Age in 2018. The Commission includes members from civil society, government, the technology sector, academia and media; throughout 2019 they examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations on how new technologies, social media platforms and communication tools can be harnessed to engage, empower and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.
