Governance

FSI's research on the origins, character and consequences of government institutions spans continents and academic disciplines. The institute’s senior fellows and their colleagues across Stanford examine the principles of public administration and implementation. Their work focuses on how maternal health care is delivered in rural China, how public action can create wealth and eliminate poverty, and why U.S. immigration reform keeps stalling. 

FSI’s work includes comparative studies of how institutions help resolve policy and societal issues. Scholars aim to clearly define and make sense of the rule of law, examining how it is invoked and applied around the world. 

FSI researchers also investigate government services – trying to understand and measure how they work, whom they serve and how good they are. They assess energy services aimed at helping the poorest people around the world and explore public opinion on torture policies. The Children in Crisis project addresses how child health interventions interact with political reform. Specific research on governance, organizations and security capitalizes on FSI's longstanding interests and looks at how governance and organizational issues affect a nation’s ability to address security and international cooperation.

-

Image
Avi Tuschman, Adam Berinsky, David Rand

Please join the Cyber Policy Center for Exploring Potential “Solutions” to Online Disinformation, hosted by the Cyber Policy Center's Kelly Born, with guests Adam Berinsky, Mitsui Professor of Political Science at MIT and Director of the MIT Political Experiments Research Lab (PERL); David Rand, Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences, and Director of the Human Cooperation Laboratory and the Applied Cooperation Team at MIT; and Avi Tuschman, Founder & CIO of Pinpoint Predictive. The session is open, but registration is required.

Adam Berinsky is the Mitsui Professor of Political Science at MIT and serves as the director of the MIT Political Experiments Research Lab (PERL). He is also a Faculty Affiliate at the Institute for Data, Systems, and Society (IDSS). Berinsky received his PhD from the University of Michigan in 2000. He is the author of "In Time of War: Understanding American Public Opinion from World War II to Iraq" (University of Chicago Press, 2009). He is also the author of "Silent Voices: Public Opinion and Political Participation in America" (Princeton University Press, 2004) and has published articles in many journals. He is currently the co-editor of the Chicago Studies in American Politics book series at the University of Chicago Press. He is also the recipient of multiple grants from the National Science Foundation and was a fellow at the Center for Advanced Study in the Behavioral Sciences.

David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. 

Avi Tuschman is a Stanford StartX entrepreneur and founder of Pinpoint Predictive, where he currently serves as Chief Innovation Officer and Board Director. He has spent the past five years developing the first psychometric-AI-powered data-enrichment platform, which ranks 260 million individuals for performance marketing and risk management applications. Tuschman is an expert on the science of heritable psychometric traits. His book and research on human political orientation have been covered in peer-reviewed and mainstream media from 25 countries. Prior to his career in tech, he advised current and former heads of state as well as multilateral development banks in the Western Hemisphere. Tuschman completed his undergraduate and doctoral degrees in evolutionary anthropology at Stanford.

News Type
Q&As
Date
Paragraphs

Image
Marietje Schaake

 

  

DOWNLOAD THE PAPER 

 

The European Union is often called a ‘super-regulator’, especially when it comes to data protection and privacy rules. Having seen European lawmaking from up close, in all its complexity, I have often considered this label exaggerated. Yes, the European Union frequently takes the first steps in ensuring that principles continue to be protected even as digitization disrupts. However, the gap between the speed at which technology evolves and the pace of democratic lawmaking leads to perpetual mismatches.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of artificial intelligence in particular poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as its Data Strategy, the European Commission references the common good, the public interest and societal needs, rather than emphasizing regulation of the digital market alone. These are welcome steps in acknowledging the depth and scope of technological impact and in defining harms not just in economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal to make concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and the data flows they process.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary but not sufficient foundation stone on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political and societal tradition of thinking about the public interest and the common good, and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental litigation such as the Urgenda Climate Case against the Dutch Government, which established that the government had a legal duty to prevent dangerous climate change, in the name of the public interest.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on voting rights, privacy and consumers, and through discrimination. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person’s, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these harms will continue to fuel polarization, discrimination on the basis of age, gender and many other aspects of our lives, and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These classifications may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and so provide grounds for redress. They may also not be, and instead sow the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of those data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted?

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation and to be constantly fighting yesterday’s wars. Our laws have not changed to reflect a technology that extracts value from collective data, and they need to catch up. A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.

An innovation- and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms, etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway negotiated between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. I helped found the Open Government Partnership ten years ago, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in the given year), and go local to identify exciting innovations. 

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base from which to influence policymakers and technologists and to get the people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as with governments and the international system. I remain hopeful.

 

 

Subtitle

Protecting the Individual Isn't Enough When the Harm Is Collective: A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.

-
Image
The rise of digital authoritarianism banner advertisement

There will be four events, with the first on September 29th; all dates listed below

REGISTER

  • September 29th, 9-11am PST
  • October 1st, 9-11am PST
  • October 6th, 9-11am PST
  • October 9th, 9-11am PST

 

 

The Rise of Digital Authoritarianism: China, AI and Human Rights

Day 1 - September 29, 2020

Welcome Remarks

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Glenn Tiffert | Research Fellow, Hoover Institution

Jenny Wang | Strategic Advisor, Human Rights Foundation

Opening Remarks

Condoleezza Rice | Director, Hoover Institution, Former U.S. Secretary of State, Denning Professor in Global Business at the Graduate School of Business

 

Panel 1: How AI is powering China's Domestic Surveillance State - How is AI exacerbating surveillance risks and enabling digital authoritarianism? This session will examine both state-sponsored applications and Chinese commercial services.

Panelists

Bethany Allen-Ebrahimian | China Reporter, Axios

Paul Mozur | Asia Technology Correspondent, New York Times

Glenn Tiffert | Research Fellow, Hoover Institution

Xiao Qiang | UC Berkeley & Editor-in-Chief, China Digital Times

Moderator

Melissa Chan | Foreign Affairs Reporter, Deutsche Welle Asia

 

 

Day 2 - October 1, 2020

Panel 2: The Ethics of Doing Business with China and Chinese Companies

Eric Schmidt | Former Executive Chairman and CEO, Google; Co-Founder, Schmidt Futures

Conversant: Eileen Donahoe, Executive Director of GDPi

 

Panel 2: The Ethics of Doing Business with China and Chinese Companies - What dynamics are at play in China's effort to establish market dominance for Chinese companies, both domestically and globally? What demands are placed on non-Chinese technology companies to participate in the Chinese marketplace? What framework should U.S.-based companies use to evaluate the risks and opportunities for collaboration and market entry in China? To what extent are Chinese companies (e.g., TikTok) competing in Western markets required to comply with Chinese government instructions or demands for access to data?

Panelists

Mary Hui | Hong Kong-based Technology and Business Reporter, Quartz

Megha Rajagopalan | International Correspondent and Former China Bureau Chief, BuzzFeed News

Alex Stamos | Director, Stanford Internet Observatory & Former Chief Security Officer, Facebook

Moderator

Casey Newton | Silicon Valley Editor, The Verge

 

 

Day 3 - October 6, 2020

Panel 3: China as an Emerging Global AI Superpower

Keynote & Conversation

Competing in the Superpower Marathon with China

Mike Brown | Director, Defense Innovation Unit

Conversant: Larry Diamond, Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Panel 3: China as an Emerging Global AI Superpower - How should we think about China's growing influence in the realm of AI and the attendant geopolitical risks and implications? This session will explore China’s bid through Huawei to build and control the world's 5G networks, and what that implies for human rights and national sovereignty and security; China's export of surveillance technology to authoritarian regimes around the world; China's global partnerships to research and develop AI; and the problem of illicit technology transfer and theft.

Panelists

Steven Feldstein | Senior Fellow, Carnegie Endowment for International Peace 

Lindsay Gorman | Fellow for Emerging Technologies, Alliance for Securing Democracy, German Marshall Fund 

Maya Wang | China Senior Researcher, Human Rights Watch

Moderator

Dominic Ziegler | Senior Asia Correspondent and Banyan Columnist, The Economist

 

 

Day 4 - October 9, 2020

Panel 4: How Democracies Should Respond to China’s Emergence as an AI Superpower

Keynote

Digital Social Innovation: Taiwan Can Help

Audrey Tang | Digital Minister, Taiwan

Panel 4: How Democracies Should Respond to China's Emergence as an AI Superpower - How should the rest of the world, and especially the world's democracies, react to China's bid to harness AI for ill as well as good? How do we strike the right balance between vigilance in defense of human rights and national security and xenophobic overreaction?

Panelists

Christopher Balding | Associate Professor, Fulbright University Vietnam

Anja Manuel | Co-Founder, Rice, Hadley, Gates & Manuel

Chris Meserole | Deputy Director of the Artificial Intelligence and Emerging Technology Initiative, Brookings Institution

Moderator

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

 

 

Closing Keynote & Conversation

Strengthening Human-Centered Artificial Intelligence

Fei-Fei Li | Co-Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Conversant: Eileen Donahoe, Executive Director of GDPi

Closing Remarks: Alex Gladstein & Eileen Donahoe

Seminars
Date Label
-

Image
towards cyber peace

Please join the Cyber Policy Center for Towards Cyber Peace: Closing the Accountability Gap, hosted by the Cyber Policy Center's Marietje Schaake, along with guests Stéphane Duguin, CEO of the CyberPeace Institute, and Camille François, Chief Innovation Officer of Graphika and Mozilla Fellow. The discussion will focus on the challenges to cyber peace and the work being done to chart a path forward. The session is open to the public, but registration is required.

Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was also named President of the CyberPeace Institute. Between 2009 and 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch newspaper NRC.

Camille François works on cyber conflict and digital rights online. She is the Chief Innovation Officer at Graphika, where she leads the company’s work to detect and mitigate disinformation, media manipulation and harassment. Camille was previously the Principal Researcher at Jigsaw, an innovation unit at Google that builds technology to address global security challenges and protect vulnerable users. Camille has advised governments and parliamentary committees on both sides of the Atlantic on policy issues related to cybersecurity and digital rights. She served as a special advisor to the Chief Technology Officer of France in the Prime Minister’s office, working on France’s first Open Government roadmap. Camille is a Mozilla Fellow, a Berkman Klein Center affiliate, and a Fulbright scholar. She holds a master’s degree in human rights from the French Institute of Political Sciences (Sciences Po) and a master’s degree in international security from the School of International and Public Affairs (SIPA) at Columbia University. François’ work has been featured in various publications, including the New York Times, WIRED, Washington Post, Bloomberg Businessweek, Globo and Le Monde.

Stéphane Duguin is the Chief Executive Officer of the CyberPeace Institute. His mission is to coordinate a collective response to decrease the frequency, impact, and scale of cyberattacks by sophisticated actors. Building on his hands-on experience in countering and analyzing cyber operations and information operations that impact civilians and civilian infrastructure, he leads the Institute with the aim of holding malicious actors to account for the harms they cause. Prior to this position, Stéphane Duguin was a senior manager and innovation coordinator at Europol. He led key operational projects to counter both cybercrime and online terrorism, including the setup of the European Cybercrime Centre (EC3), the Europol Innovation Lab, and the European Internet Referral Unit (EU IRU). A leader in digital transformation, his work focused on implementing innovative responses to large-scale abuse of cyberspace, notably on the convergence of disruptive technologies and public-private partnerships.

 

News Type
Commentary
Date
Subtitle

On May 28th, President Trump signed an executive order threatening to revoke CDA 230 protections, which would expose social media companies to increased liability for content that is posted on their sites. The Cyber Policy Center team responded on June 1 in a public webinar. The event was recorded.

-

On Thursday, President Trump signed an executive order threatening to revoke CDA 230 protections, which would expose social media companies to increased liability for content that is posted on their sites. This comes on the heels of Twitter last week fact-checking two misleading tweets from the president about mail-in voting. Critics of the executive order say the White House is overstepping its authority and cannot limit the legal protections that social media companies currently hold under federal law.
 
Join the Stanford Cyber Policy Center's team Monday, June 1 at 8am PDT for President Trump’s Executive Order on Platforms and Online Speech: Stanford’s Cyber Policy Center Responds, with Nate Persily, Faculty Co-Director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Daphne Keller, Director of the Program on Platform Regulation and former Associate General Counsel for Google; Alex Stamos, Director of the Cyber Center’s Internet Observatory and former Chief Security Officer at Facebook; Marietje Schaake, Policy Director for the Cyber Policy Center and former Member of the European Parliament; and Eileen Donahoe, Executive Director of the Global Digital Policy Incubator and former U.S. Ambassador to the UN Human Rights Council, in conversation with Cyber Center Director Kelly Born.

Monday, June 1st
8am PDT
Join via Zoom

Panel Discussions
-

This event is co-sponsored with the Cyber Policy Center and the Center for a New American Security.

* Please note all CISAC events are scheduled using the Pacific Time Zone

 

Seminar Recording: https://youtu.be/KaydMdIVtGc

 

About the Event: The United States is steadily losing ground in the race against China to pioneer the most important technologies of the 21st century. With technology a critical determinant of future military advantage, a key driver of economic prosperity, and a potent tool for the promotion of different models of governance, the stakes could not be higher. To compete, China is leveraging its formidable scale—whether measured in terms of research and development expenditures, data sets, scientists and engineers, venture capital, or the reach of its leading technology companies. The only way for the United States to tip the scale back in its favor is to deepen cooperation with allies. The global diffusion of innovation also places a premium on aligning U.S. and ally efforts to protect technology. Unless coordinated with allies, tougher U.S. investment screening and export control policies will feature major seams that Beijing can exploit.

In early June, join Stanford's Center for International Security and Cooperation (CISAC) and the Center for a New American Security (CNAS) for a unique virtual event featuring three policy experts advancing concrete ideas for how the United States can enhance cooperation with allies around technology innovation and protection.

This webinar will be on-the-record, and include time for audience Q&A.

 

About the Speakers: 

Anja Manuel, Stanford Research Affiliate, CNAS Adjunct Senior Fellow, Partner at Rice, Hadley, Gates & Manuel LLC, and author with Pav Singh of Compete, Contest and Collaborate: How to Win the Technology Race with China.

 

Daniel Kliman, Senior Fellow and Director, CNAS Asia-Pacific Security Program, and co-author of a recent report, Forging an Alliance Innovation Base.

 

Martijn Rasser, Senior Fellow, CNAS Technology and National Security Program, and lead researcher on the Technology Alliance Project.

Virtual Seminar

Anja Manuel, Daniel Kliman, and Martijn Rasser
Seminars
-
Image
event advertisement graphic

COVID-19 is having a profound impact on our online systems - exposing both the essential role they can and do play in modern society, and the risks and vulnerabilities they represent. Substantial research is emerging on this topic, and its implications will have important consequences for both medium-term (e.g., the 2020 elections) and long-term cyber policies.

Welcome Remarks: Mike McFaul

  • Introduction to CPC and the center’s work on COVID from moderator Kelly Born
  • Alex Stamos of the Internet Observatory will discuss their work examining shifting narratives about the coronavirus in Chinese and Russian state media, early insights into COVID misinformation in other countries (e.g., Nigeria), and how tech companies are responding, as well as how Zoom and other platforms have been working to adapt policies and practices to meet growing demands and risks.
  • Nate Persily at PDI will discuss the challenges of running elections in the current environment, including necessary changes to state policies and practices.
  • Eileen Donahoe at GDPI will discuss geopolitical threats to the international human rights law framework arising from ineffective responses to COVID by democratic governments; identify risks to five specific substantive civil and political rights; and recommend that democratic governments apply international human rights process principles in the COVID-19 context.
  • Marietje Schaake will discuss human-rights challenges (to privacy, freedom of association, and freedom of expression) that have arisen with various applications of AI in the COVID-19 context - e.g., contact tracing and content moderation - as well as emerging criteria for policymakers to consider when deploying tracing and related technologies.
  • Andy Grotto from GTG will discuss recent work on How to Report Responsibly on Hacks and Disinformation, and the implications for mainstream media’s coverage of COVID.
  • 30 min for Q&A
Seminars
-

Links to Event Materials:

 

The Stanford Cyber Policy Center continues its online Zoom series: Digital Technology and Democracy, Security & Geopolitics in an Age of Coronavirus. These webinars will take place every other Wednesday at 10am PST. 

The next event, Digital Disinformation and Health: From Vaccines to the Coronavirus, will take place Wednesday, April 8, at 10am PST with Kelly Born, Executive Director of the Cyber Policy Center, in conversation with Professor David Broniatowski, from George Washington University, Professor Kathleen M. Carley, from Carnegie Mellon University, and Professor Jacob N. Shapiro, from Princeton University. 

In particular, Professor Broniatowski will discuss the results of new studies regarding bots and trolls in the vaccine debate, as well as what makes messages go viral from the standpoint of Fuzzy Trace Theory. Professor Carley will explore how information moves from country to country, with a look at both the differences in who is broadcasting certain types of disinformation and the role bots play in its spread. Professor Shapiro will speak to trends and themes we are seeing in coronavirus disinformation narratives and in news reporting on COVID-related misinformation.


David Broniatowski 
Professor David Broniatowski conducts research in decision-making under risk, group decision-making, system architecture, and behavioral epidemiology. This research program draws upon a wide range of techniques including formal mathematical modeling, experimental design, automated text analysis and natural language processing, social and technical network analysis, and big data. Current projects include a text network analysis of transcripts from the US Food and Drug Administration's Circulatory Systems Advisory Panel meetings, a mathematical formalization of Fuzzy Trace Theory -- a leading theory of decision-making under risk, derivation of metrics for flexibility and controllability for complex engineered socio-technical systems, and using Twitter data to conduct surveillance of influenza infection and the resulting social response. 
Professor Kathleen M. Carley 
Professor Kathleen M. Carley is Director of the Center for Informed Democracy and Social-cybersecurity (IDeaS) and Director of the Center for Computational Analysis of Social and Organizational Systems (CASOS). She specializes in network science, agent-based modeling, and text mining within a framework of complex socio-technical systems and organizational and social theory. In her work, she examines how cognitive, social and institutional factors come together to affect individual, organizational and societal outcomes. Using this lens she has addressed a number of policy issues including counter-terrorism, human and narcotics trafficking, cyber and nuclear threats, organizational resilience and design, natural disaster preparedness, cyber threats in social media, and leadership.
Professor Jacob N. Shapiro 
Professor Jacob N. Shapiro is Professor of Politics and International Affairs at Princeton University and directs the Empirical Studies of Conflict Project, a multi-university consortium that compiles and analyzes micro-level data on politically motivated violence in countries around the world. His research covers conflict, economic development, and security policy. He is author of The Terrorist’s Dilemma: Managing Violent Covert Organizations and co-author of Small Wars, Big Data: The Information Revolution in Modern Conflict. His research has been published in a broad range of academic and policy journals as well as a number of edited volumes. He has conducted field research and large-scale policy evaluations in Afghanistan, Colombia, India, and Pakistan.

Kelly Born is the Executive Director of Stanford’s Cyber Policy Center, where she collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, policy workshops and executive education. Prior to joining Stanford, she helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.

Online, via Zoom: REGISTER

Professor David Broniatowski George Washington University
Professor Kathleen M. Carley Carnegie Mellon University
Professor Jacob N. Shapiro Princeton University
Seminars
-

The research on misinformation generally and fake news specifically is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and share fake news, exposure and sharing do not necessarily indicate belief or impact. By contrast, there is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or, conversely, disbelieve true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.

In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact check five “fresh” articles (those that have appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm for selecting the articles for fact checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact checking, including the tradeoffs between crafting a more accurate crowd vs. a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.

About the Speaker:

Joshua Tucker
Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, a co-Director of the new NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals. Originally a scholar of post-communist politics, he has more recently studied social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. His research has been funded by over $8 million in grants in the past three years, including a 2019 Knight Foundation “Research on the Future of an Informed Society” grant. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming edited volume Social Media and Democracy (Cambridge University Press, 2020). 
