-


The Stanford Cyber Policy Center continues its online Zoom series: Digital Technology and Democracy, Security & Geopolitics in an Age of Coronavirus. These webinars take place every other Wednesday at 10am Pacific Time.

The next event, Digital Disinformation and Health: From Vaccines to the Coronavirus, will take place Wednesday, April 8, at 10am Pacific Time, with Kelly Born, Executive Director of the Cyber Policy Center, in conversation with Professor David Broniatowski of George Washington University, Professor Kathleen M. Carley of Carnegie Mellon University, and Professor Jacob N. Shapiro of Princeton University.

In particular, Professor Broniatowski will discuss the results of new studies on bots and trolls in the vaccine debate, as well as what makes messages go viral from the standpoint of Fuzzy Trace Theory. Professor Carley will explore how information moves from country to country, looking at both the differences in who broadcasts certain types of disinformation and the role bots play in its spread. Professor Shapiro will speak to trends and themes in coronavirus disinformation narratives and in news reporting on COVID-related misinformation.


Professor David Broniatowski
Professor David Broniatowski conducts research in decision-making under risk, group decision-making, system architecture, and behavioral epidemiology. This research draws upon a wide range of techniques, including formal mathematical modeling, experimental design, automated text analysis and natural language processing, social and technical network analysis, and big data. Current projects include a text network analysis of transcripts from the US Food and Drug Administration's Circulatory Systems Advisory Panel meetings; a mathematical formalization of Fuzzy Trace Theory, a leading theory of decision-making under risk; derivation of metrics for flexibility and controllability in complex engineered socio-technical systems; and the use of Twitter data to conduct surveillance of influenza infection and the resulting social response.
Professor Kathleen M. Carley 
Professor Kathleen M. Carley is Director of the Center for Informed Democracy and Social-cybersecurity (IDeaS) and of the Center for Computational Analysis of Social and Organizational Systems (CASOS). She specializes in network science, agent-based modeling, and text mining within a complex socio-technical systems, organizational, and social theory framework. Her work examines how cognitive, social, and institutional factors come together to shape individual, organizational, and societal outcomes. Using this lens, she has addressed a number of policy issues, including counter-terrorism, human and narcotics trafficking, cyber and nuclear threats, organizational resilience and design, natural disaster preparedness, cyber threats in social media, and leadership.
Professor Jacob N. Shapiro 
Professor Jacob N. Shapiro is Professor of Politics and International Affairs at Princeton University and directs the Empirical Studies of Conflict Project, a multi-university consortium that compiles and analyzes micro-level data on politically motivated violence in countries around the world. His research covers conflict, economic development, and security policy. He is the author of The Terrorist’s Dilemma: Managing Violent Covert Organizations and co-author of Small Wars, Big Data: The Information Revolution in Modern Conflict. His research has been published in a broad range of academic and policy journals as well as a number of edited volumes. He has conducted field research and large-scale policy evaluations in Afghanistan, Colombia, India, and Pakistan.

Kelly Born
Kelly Born is the Executive Director of Stanford’s Cyber Policy Center, where she collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, policy workshops, and executive education. Prior to joining Stanford, she helped launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement, and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Online, via Zoom

Professor David Broniatowski, George Washington University
Professor Kathleen M. Carley, Carnegie Mellon University
Professor Jacob N. Shapiro, Princeton University

The run-up to the 2016 U.S. presidential election illustrated how vulnerable our most venerated journalistic outlets are to a new kind of information warfare. Reporters are a prime target of foreign and domestic actors who want to harm our democracy. To cope with this threat, especially in an election year, news organizations need to prepare for another wave of false, misleading, and hacked information. Often, the information will be newsworthy. Expecting reporters to refrain from covering news goes against core principles of American journalism and the practical business drivers that shape the intensely competitive media marketplace. In these cases, the question is not whether to report but how to do so most responsibly. Our goal is to give journalists actionable guidance.

Included in the report are the Newsroom Playbook for Propaganda Reporting and a helpful Implementing the Playbook flowchart.


Publication Type: White Papers
Authors: Andrew Grotto and Janine Zacharia
-

The research on misinformation generally, and fake news specifically, is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and to share fake news, these behaviors might not signal belief or effect. There is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or, conversely, to disbelieve true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.

In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet, designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact-check five “fresh” articles (ones that appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm to select the articles for fact-checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact-checking, including the tradeoffs between crafting a more accurate crowd and a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.
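
To make the “wisdom of the crowds” logic concrete, consider a hypothetical sketch (illustrative only, not the project’s actual method or code): under the classic Condorcet jury theorem assumption that each rater independently judges an article correctly with probability greater than one half, a simple majority vote grows more accurate as the crowd grows.

```python
# Hypothetical illustration of crowd fact-checking via majority vote;
# not the NYU/Stanford project's actual methodology. Assumes each rater
# is independently correct with probability p > 0.5 (Condorcet jury theorem).
import random

def majority_vote_accuracy(p: float, crowd_size: int, trials: int = 20_000) -> float:
    """Estimate how often a simple majority of `crowd_size` raters,
    each correct with probability `p`, reaches the right verdict."""
    correct = 0
    for _ in range(trials):
        right_votes = sum(random.random() < p for _ in range(crowd_size))
        if right_votes > crowd_size / 2:  # strict majority; ties count as wrong
            correct += 1
    return correct / trials

for n in (1, 25, 99):
    print(f"crowd of {n:3d}: ~{majority_vote_accuracy(p=0.6, crowd_size=n):.2f}")
# Accuracy climbs from 0.60 for a single rater to well above 0.9 for a crowd
# near 100, which is why the accuracy-vs-representativeness tradeoff matters:
# a more representative crowd may have a lower per-rater p.
```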

About the Speaker:

Joshua Tucker
Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, a co-Director of the new NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals. Originally a scholar of post-communist politics, he has more recently studied social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. His research has been funded by over $8 million in grants in the past three years, including a 2019 Knight Foundation “Research on the Future of an Informed Society” grant. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming edited volume Social Media and Democracy (Cambridge University Press, 2020). 


A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.

Stephen Stedman, a Senior Fellow at the Freeman Spogli Institute for International Studies (FSI) at Stanford, is the director of the Kofi Annan Commission on Elections and Democracy in the Digital Age, an initiative of the Kofi Annan Foundation. The Commission is focused on studying the effects of social media on electoral integrity and the measures needed to safeguard the democratic process.  

At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges facing democracy today and makes a number of recommendations on how best to tackle the threats posed by social media to free and fair elections. On Tuesday, February 25, Professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12 to 1:15 PM. To learn more and to RSVP, visit the event page.

Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?

Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe that the problem is a diffuse one, based on excess disinformation and a decline in citizens’ ability to agree on facts. We too would like the quality of deliberation in our democracy to improve, and we worry about how social media might degrade democratic debate. But if we are talking about existential threats to democracy, the problem is that digital technologies can be weaponized to undermine the integrity of elections.

When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust in fellow citizens, government institutions, and traditional media; the decline of political parties and of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles, where we only interact with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.

What we found, instead, is a much more complex problem. Many of the pathologies that social media is said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.

Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media that digital technologies can intensify and amplify polarization.

Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate them or disenfranchise them. Winners have the expectation that they hold power temporarily, and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turn elections from being a one-shot game into a repeated game with a long shadow of the future.

In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can cause parties and followers to believe that the other side doesn’t believe in the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and use violence increases dramatically. 

Q: On political advertising, the report encourages platforms to provide more transparency about who is funding that advertising. But it also asks platforms to require candidates to pledge that they will avoid deceptive campaign practices when purchasing ads, and it goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labelled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?

SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. A lot of employees at the platforms are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to Mark Zuckerberg’s decision that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country to demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook puts itself in the contradictory position of having hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India while allowing politicians and parties in the United States to buy ads that use disinformation and hate speech.

Our recommendation gives Facebook an option that allows political advertising in a way that need not inflame polarization and destroy mutual security among candidates and followers: (1) require that candidates, groups, or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; (2) then use those standards to determine whether an ad meets the pledge. If an ad uses deep fakes, if an ad grotesquely distorts the facts, if an ad flat-out lies about what an opponent said or did, then Facebook would not accept the ad. Facebook can either help us raise our electoral politics out of the sewer or it can ensure that our politics drowns in it.

It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible, partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.

Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy? 

SS: Our report recommends digital literacy programs as a means to help build democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he offers warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad, and digital literacy in a highly polarized context raises the question of who will decide what counts as good and bad media. We say in passing that in addition to digital literacy, we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “Why am I predisposed to want to believe this piece of information?”

Q: Obviously access to data is critical for researchers and commissioners to do their work, analysis and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?

SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy over whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front pages of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube shares its data, so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook, through Social Science One, to provide certified researchers with privacy-protected data. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.

Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?

SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the 2016 United States election and Brexit referendum, its predecessor company had manipulated elections in Asia, Africa, and the Caribbean. There is now a transnational industry in election manipulation.

What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence. 

On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South. 

About the Commission on Elections and Democracy in the Digital Age

As one of his last major initiatives, Kofi Annan convened the Commission on Elections and Democracy in the Digital Age in 2018. The Commission includes members from civil society, government, the technology sector, academia, and the media; across 2019, they examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations on how new technologies, social media platforms, and communication tools can be harnessed to engage, empower, and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.

-

Join Stephen Stedman, Nathaniel Persily, the Cyber Policy Center, and the Center on Democracy, Development and the Rule of Law (CDDRL) for an exploration of the recent report Protecting Electoral Integrity in the Digital Age, released by the Kofi Annan Commission on Elections and Democracy in the Digital Age. The discussion will be moderated by Kelly Born, Executive Director of the Cyber Policy Center.


Abstract:

New information and communication technologies (ICTs) pose difficult challenges for electoral integrity. In recent years, foreign governments have used social media and the Internet to interfere in elections around the globe. Disinformation has been weaponized to discredit democratic institutions, sow societal distrust, and attack political candidates. Social media has proved a useful tool for extremist groups to send messages of hate and to incite violence. Democratic governments strain to respond to a revolution in political advertising brought about by ICTs. Electoral integrity has been put at risk by attacks on the electoral process and on the quality of democratic deliberation.

The relationship between the Internet, social media, elections, and democracy is complex, systemic, and unfolding. Our ability to assess some of the most important claims about social media is constrained by the unwillingness of the major platforms to share data with researchers. Nonetheless, we are confident about several important findings.

About the Speakers

Stephen Stedman
Stephen Stedman is a senior fellow at the Freeman Spogli Institute for International Studies, professor, by courtesy, of political science, and deputy director of the Center on Democracy, Development and the Rule of Law. Professor Stedman currently serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age and is the principal drafter of the Commission’s report, “Protecting Electoral Integrity in the Digital Age.”

Professor Stedman served as a special adviser and assistant secretary-general of the United Nations, where he helped to create the United Nations Peacebuilding Commission, the UN’s Peacebuilding Support Office, the UN’s Mediation Support Office, the Secretary-General’s Policy Committee, and the UN’s counterterrorism strategy. During 2005, his office successfully negotiated General Assembly approval of the Responsibility to Protect. From 2010 to 2012, he directed the Global Commission on Elections, Democracy, and Security, an international body mandated to promote and protect the integrity of elections worldwide. Professor Stedman served as Chair of the Stanford Faculty Senate in 2018-2019. He and his wife, Corinne Thomas, are the Resident Fellows in Crothers, Stanford’s academic theme house for Global Citizenship. In 2018, Professor Stedman was awarded the Lloyd B. Dinkelspiel Award for outstanding service to undergraduate education at Stanford.

Nathaniel Persily

Nathaniel Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science, Communication and FSI.  Prior to joining Stanford, Professor Persily taught at Columbia and the University of Pennsylvania Law School, and as a visiting professor at Harvard, NYU, Princeton, the University of Amsterdam, and the University of Melbourne. Professor Persily’s scholarship and legal practice focus on American election law or what is sometimes called the “law of democracy,” which addresses issues such as voting rights, political parties, campaign finance, redistricting, and election administration. He has served as a special master or court-appointed expert to craft congressional or legislative districting plans for Georgia, Maryland, Connecticut, and New York, and as the Senior Research Director for the Presidential Commission on Election Administration.

Also among the report’s commissioners were FSI's Alex Stamos and Toomas Ilves.

Stephen Stedman
-

Abstract:

China’s cyberspace and technology regime is going through a period of change, but it’s taking a while. The U.S.–China economic and tech competition both influences Chinese government developments and awaits their outcomes, and the 2017 Cybersecurity Law set up a host of still-unresolved questions. Data governance, security standards, market access, compliance, and other questions saw only modest new clarity in 2019. But 2020 promises new laws on personal information protection and data security, and the Stanford-based DigiChina Project, in the Program on Geopolitics, Technology, and Governance, is devoted to monitoring, translating, and explaining these developments. From AI governance to the nexus of cybersecurity and supply chains, this talk will summarize recent Chinese policymaking and lay out expectations for the year to come.

About the Speaker:

Graham Webster is editor in chief of the Stanford–New America DigiChina Project at the Stanford University Cyber Policy Center and a China digital economy fellow at New America. He was previously a senior fellow and lecturer at Yale Law School, where he was responsible for the Paul Tsai China Center’s U.S.–China Track 2 and Track 1.5 dialogues for five years before leading programming on cyberspace and technology issues. In the past, he wrote a CNET News blog on technology and society from Beijing, worked at the Center for American Progress, and taught East Asian politics at NYU's Center for Global Affairs. Webster holds a master's degree in East Asian studies from Harvard University and a bachelor's degree in journalism from Northwestern University. Webster also writes the independent Transpacifica e-mail newsletter.

Graham Webster
-

Multilateral Negotiations on ICTs (information and communications technologies) and International Security: Process and Prospects for the UN Group of Governmental Experts and the UN Open-Ended Working Group

Abstract: The intent of this seminar is to provide an update on recent events at the UN relevant to international discussions of cybersecurity (and a primer of sorts on current UN processes for addressing this topic).

In 2018, UN Member States decided to establish two concurrent negotiations with nearly identical mandates on the international security dimension of ICTs: a sixth limited-membership UN Group of Governmental Experts (GGE) and an Open-Ended Working Group (OEWG) open to all governments. How did this happen? Are they competing or complementary endeavors? Is it likely that either will be able to bridge the longstanding divides on how international law applies to cyberspace, or to agree by consensus on additional norms of responsible State behavior? What would be a good outcome of each process? And how do these negotiations fit into the wider UN ecosystem, including the follow-up to the Secretary-General’s High Level Panel on Digital Cooperation?

About the Speaker: Kerstin Vignard is an international security policy professional with nearly 25 years’ experience at the United Nations and a particular interest in the nexus of international security policy and technology. Vignard is Deputy to the Director at UNIDIR, currently on temporary assignment leading UNIDIR’s team supporting the Chairmen of the latest Group of Governmental Experts (GGE) on cybersecurity and the Open-Ended Working Group. She has led UNIDIR’s teams supporting four previous cyber GGEs. From 2013 to 2018, she initiated and led UNIDIR’s work on the weaponization of increasingly autonomous technologies, and she is the co-Principal Investigator of a CIFAR AI & Society grant examining potential regulatory approaches for security and defence applications of AI.


Despite pressure from President Donald Trump and Attorney General William Barr, Apple continues to stand its ground and refuses to re-engineer iPhones so law enforcement can unlock the devices. Apple has maintained that it has done everything required by law and that creating a "backdoor" would undermine cybersecurity and privacy for iPhone users everywhere.

Apple is right to stand firm in its position that building a "backdoor" could put user data at risk.

At its most basic, encryption is the act of converting plaintext (like a credit card number) into unintelligible ciphertext using a very large, random number called a key. Anyone with the key can convert the ciphertext back to plaintext. Persons without the key cannot, meaning that even if they acquire the ciphertext, it should still be impossible for them to discover the meaning of the underlying plaintext.
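
A minimal sketch in Python of the symmetric encryption described above, using the widely available third-party cryptography package (the library choice is an assumption; the article names no specific tool):

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # a large random value; anyone holding it can decrypt
cipher = Fernet(key)

plaintext = b"4111 1111 1111 1111"     # e.g., a credit card number
ciphertext = cipher.encrypt(plaintext) # unintelligible without the key

# With the key, the ciphertext converts back to the original plaintext.
assert cipher.decrypt(ciphertext) == plaintext

# Without the key, someone who obtains the ciphertext cannot recover the
# plaintext; a "backdoor" would amount to a second key held outside the
# user's control, which is what Apple argues would put user data at risk.
```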

Full Text at CNN

Publication Type: Commentary
Author: Andrew Grotto
Tracy Navichoque

Tracy Navichoque is the Program Manager at the Global Digital Policy Incubator (GDPi). Before coming to Stanford, Tracy was the Membership and Education Manager at the Los Angeles World Affairs Council. She holds an MA in Public Diplomacy from USC and a BA in History and International Studies from Northwestern University. She was a Fulbright Scholar to Uruguay and worked in education and public affairs at the binational center in Montevideo. She serves as a Gilman International Scholarship Alumni Ambassador.

Program Manager at the Global Digital Policy Incubator (GDPi)
-

Abstract:

Considerable scholarship has established that algorithms are an increasingly important part of what information people encounter in everyday life. Much less work has focused on users’ experiences with, understandings of, and attitudes about how algorithms may influence what they see and do. The dearth of research on this topic may be due in part to the difficulty of studying a subject with no known ground truth, given that the details of algorithms are proprietary and rarely made public. In this talk, I will report on the methodological challenges of studying people’s algorithm skills, based on 83 in-person interviews conducted in five countries. I will also discuss the types of algorithm skills identified in our data. The talk will advocate for more such scholarship to accompany existing system-level analyses of algorithms’ social implications and will offer a blueprint for how to do this.

About the Speaker:

Eszter Hargittai is Professor and Chair of Internet Use and Society at the Institute of Communication and Media Research, University of Zurich. Previously, she was the Delaney Family Professor in the Communication Studies Department at Northwestern University. In 2019, she was elected Fellow of the International Communication Association and also received the William F. Ogburn Mid-Career Achievement Award from the American Sociological Association’s section on Communication, Information Technology and Media Sociology. For over two decades, she has been researching people’s Internet uses and skills, and how these relate to questions of social inequality.
