Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies for institutions and society. They study longstanding issues, including nuclear nonproliferation and conflicts such as that between North and South Korea. But their research also examines new and emerging threats that transcend traditional borders, such as the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare, with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors such as hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

-

December 10: Digital Tech, Social Media, and the 2020 Election

Since the 2016 election, great attention has been paid to the impact of digital technologies on democracy in the United States and around the world. Foreign intervention in the U.S. campaign through social media and online advertising, including the rise of "fake news" and computational propaganda, exacerbated concerns that new technologies pose a substantial threat to the normal workings of the U.S. electoral process. These concerns remain for 2020, alongside new threats related to the COVID-19 pandemic. With in-person campaigning, voter mobilization, and even voting itself hindered by the pandemic, digital technologies promise to play an even more important role in 2020.

On December 10, 2020, the Stanford Cyber Policy Center will bring together scholars, tech platforms, principals from the digital campaigns, journalists, and other key experts to explore the effect of digital technologies on the 2020 election. The conference will examine the role of digital technologies in election administration, campaign tactics, political advertising, the news media, foreign propaganda efforts, and the broader campaign information ecosystem. It will also consider how changes in platform policies affected the campaign and information environment, and whether lessons learned in the 2020 elections suggest that further changes are warranted.

9:00   Introduction: Kelly Born, Stanford Cyber Policy Center

9:10   Findings from the Stanford/MIT Healthy Elections Project: Nate Persily, Stanford Law School

9:30   Trends in 2020 Political Advertising: Erika Franklin Fowler, Wesleyan Media Project

9:50   Online Political Transparency: Laura Edelson, New York University Political Ads Project

10:10  BREAK

10:20  Center for Technology and Civic Life’s Elections Project: Tiana Epps-Johnson, CTCL

10:40  Platform Speech Policies and the Elections: Daniel Kreiss and Bridget Barrett, University of North Carolina at Chapel Hill

11:00  Findings from the Election Integrity Project: Alex Stamos and Renee DiResta, Stanford Internet Observatory

11:20  BREAK

11:30  Experiences from the Platforms

  • Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook
  • Clement Wolf, Global Public Policy Lead for Information Integrity at Google
  • Yoel Roth, Director of Site Integrity at Twitter
  • Eric Han, Head of Safety at TikTok

12:30  Full Panel Q&A/Discussion

1:00   Close


THE 2020 ELECTION IN THE UNITED STATES will take place on November 3 in the midst of a global pandemic, economic downturn, social unrest, political polarization, and a sudden shift in the balance of power in the U.S. Supreme Court. On top of these issues, the technological layer impacting the public debate, as well as the electoral process itself, may well determine the election outcome. The eight-week Stanford University course, “Technology and the 2020 Election: How Silicon Valley Technologies Affect Elections and Shape Democracy,” examines the influence of technology on America’s democratic process, revealing how digital technologies are shaping the public debate and the election.


 

Policy Briefs by Marietje Schaake
-

Election Debrief

The U.S. 2020 elections have been fraught with challenges, including the rise of "fake news" and threats of foreign intervention that emerged after 2016, ongoing concerns about racially targeted disinformation, and new threats related to the COVID-19 pandemic. Digital technologies will have played a more important role in the 2020 elections than ever before.

On November 4th at 10 a.m. PST, join the team at the Stanford Cyber Policy Center, in collaboration with the Freeman Spogli Institute, Stanford’s Institute for Human-Centered Artificial Intelligence, and the Stanford Center on Philanthropy and Civil Society, for a day-after discussion of the role of digital technologies in the 2020 elections. Speakers will include Nathaniel Persily, faculty co-director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Marietje Schaake, the Center’s International Policy Director and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence; Alex Stamos, Director of the Cyber Center’s Internet Observatory and former Chief Security Officer at Facebook and Yahoo; Renee DiResta, Research Manager at the Internet Observatory; and Andrew Grotto, Director of the Center’s Program on Geopolitics, Technology, and Governance; with Rob Reich, Faculty Director of the Center for Ethics in Society, in conversation with Kelly Born, the Center’s Executive Director.

Please note that we will also have a YouTube livestream available for potential overflow or for anyone having issues connecting via Zoom: https://youtu.be/H2k62-JCAgE

 


Renée DiResta is the former Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée's writing and speeches on her website, www.reneediresta.com, or follow her @noupside.

 

Former Research Manager, Stanford Internet Observatory

Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.


 

Non-Resident Fellow, Cyber Policy Center
Fellow, Institute for Human-Centered Artificial Intelligence
-

Digital Trade Wars

Please join the Cyber Policy Center on Wednesday, October 21, from 10 a.m. to 11 a.m. Pacific time. Host Marietje Schaake, International Policy Director of the Cyber Policy Center, will be in conversation with Dmitry Grozoubinski, founder of ExplainTrade.com and visiting professor at the University of Strathclyde, and Anu Bradford, Henry L. Moses Professor of Law and International Organizations at Columbia Law School and author of The Brussels Effect: How the European Union Rules the World, for a discussion and exploration of the digital trade war.

This event is free and open to the public, but registration is required.

 

Date Label
-

Recent public outcries over facial recognition technology, police and state usage of automated surveillance tools, and racially motivated disinformation on social media have underscored the ways in which new digital technologies threaten to exacerbate existing racial and social cleavages.  What is known about how digital technologies are contributing to racial tensions, what key questions remain unanswered, and what policy changes, by government or tech platforms, might help?

On Wednesday, September 23rd, from 10 a.m. to 11 a.m. Pacific time, please join us for Race and Technology, with Kelly Born, Executive Director of the Stanford Cyber Policy Center, in conversation with Julie Owono, Executive Director of Internet Sans Frontières, a digital rights advocacy organization based in France, an affiliate of the Berkman Klein Center for Internet and Society at Harvard and of Stanford’s Digital Civil Society Lab, and a member of Facebook’s Oversight Board; Mutale Nkonde, CEO of AI for the People, a member of the recently formed TikTok Content Advisory Council, and a fellow at Stanford’s Digital Civil Society Lab; and Safiya Noble, Associate Professor at UCLA in the Departments of Information Studies and African American Studies, and author of Algorithms of Oppression.

The event is open to the public, but registration is required.

Postdoctoral Fellow, Stanford Internet Observatory (2021-2022)
Predoctoral Fellow, Stanford Internet Observatory (2020-2021)

Josh A. Goldstein is a former postdoctoral scholar at the Stanford Internet Observatory. He received his PhD in International Relations from the University of Oxford, where he studied as a Clarendon Scholar. At the Stanford Internet Observatory, Dr. Goldstein investigated covert influence operations on social media platforms, studied the effects of foreign interference on democratic societies, and explored how emerging technologies will impact the future of propaganda campaigns. He has given briefings to the Department of Defense, the State Department, and senior technology journalists based on this work, and published in outlets including Brookings, Lawfare, and Foreign Policy.

Prior to joining SIO, Dr. Goldstein received an MPhil in International Relations at Oxford with distinction and a BA in Government from Harvard College, summa cum laude. He also assisted with research and writing related to international security at the Belfer Center, Brookings Institution, House Foreign Affairs Committee, and Department of Defense.


Daphne Keller leads the newly launched Program on Platform Regulation, a program designed to offer lawmakers, academics, and civil society groups ground-breaking analysis and research to support wise governance of Internet platforms.

Q: Facebook, YouTube and Twitter rely on algorithms and artificial intelligence to provide services for their users. Could AI also help in protecting free speech and policing hate speech and disinformation?   

DK: Platforms increasingly rely on artificial intelligence and other algorithmic means to automate the process of assessing – and sometimes deleting – online speech. But tools like AI can’t really “understand” what we are saying, and automated tools for content moderation make mistakes all the time. We should worry about platforms’ reliance on automation, and worry even more about legal proposals that would make such automated filters mandatory. Constitutional and human rights law give us a legal framework to push back on such proposals, and to craft smarter rules about the use of AI. I wrote about these issues in this New York Times op-ed and in some very wonky academic analysis in the Journal of European and International IP Law.

Q: Can you explain the potential impacts on citizens’ rights when the platforms have global reach but governments do not?

DK: On one hand, people worry about platforms displacing the legitimate power of democratic governments. On the other hand, platforms can actually expand state power in troubling ways. One way they do that is by enforcing a particular country’s speech rules everywhere else in the world. Historically that meant a net export of U.S. speech law and values, as American companies applied those rules to their global platforms. More recently, we’ve seen that trend reversed, with things like European and Indian courts requiring Facebook to take user posts down globally – even if the users’ speech would be legally protected in other countries. Governments can also use soft power, or economic leverage based on their control of access to lucrative markets, to convince platforms to “voluntarily” globally enforce that country’s preferred speech rules. That’s particularly troubling, since the state influence may be invisible to the users whose rights are affected.

There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it... We have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms.
Daphne Keller
Director, Program on Platform Regulation, Cyber Policy Center; Lecturer, Stanford Law School

Q: Are there other ways that platforms can expand state power? 

DK: Yes, platforms can let states bypass democratic accountability and constitutional limits by using private platforms as proxies for their own agenda. States that want to engage in surveillance or censorship are constrained by the U.S. Constitution, and by human rights laws around the world. But platforms aren’t. If you’re a state and you want to do something that would violate the law if you did it yourself, it’s awfully tempting to coerce or persuade a platform to do it for you. This issue of platforms being proxies for other actors isn’t limited to governments – anyone with leverage over a platform, including business partners, can potentially play a hidden role like this.

I wrote about this complicated nexus of state and private power in Who Do You Sue? for the Hoover Institution.    

Q: What inspired you to create the Program on Platform Regulation at the Cyber Policy Center right now?

DK: There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it. At the Cyber Policy Center, there’s an amazing group of experts, like Marietje Schaake, Eileen Donahoe, Alex Stamos and Nate Persily, who are working on overlapping issues. We can address different aspects of the same issues and build on each other’s work to do much more together than we could individually.

The program really benefits from being at Stanford and in Silicon Valley because we have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms. 

The Cyber Policy Center is part of the Freeman Spogli Institute for International Studies at Stanford University.

Q&A with Daphne Keller

Keller explains some of the issues currently surrounding platform regulation

-

Join the Cyber Policy Center on August 26th at 10 a.m. Pacific time for a look at how governments around the world are pushing to ban strong encryption. The talk will feature speakers Sam Woolley, Riana Pfefferkorn, and Matthew Baum as they explore the policy arguments governments use to justify these agendas. This event is open to the public, but registration is required.

REGISTER


Matthew A. Baum (Ph.D., UC San Diego, 2000) is the Marvin Kalb Professor of Global Communications and Professor of Public Policy at Harvard University's John F. Kennedy School of Government and Department of Government. His research focuses on delineating the effects of domestic politics on international conflict and cooperation in general and American foreign policy in particular, as well as on the role of the mass media and public opinion in contemporary American politics. His research has appeared in over a dozen leading scholarly journals, such as the American Political Science Review, the American Journal of Political Science, and the Journal of Politics. His books include Soft News Goes to War: Public Opinion and American Foreign Policy in the New Media Age (2003, Princeton University Press), War Stories: The Causes and Consequences of Public Views of War (2009, Princeton University Press, co-authored with Tim Groeling), and War and Democratic Constraint: How the Public Influences Foreign Policy (2015, Princeton University Press, co-authored with Phil Potter). He has also contributed op-ed articles to a variety of newspapers, magazines, and blog sites in the United States and abroad. Before coming to Harvard, Baum was an associate professor of political science and communication studies at UCLA.

Riana Pfefferkorn is the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society. Her work focuses on investigating and analyzing the U.S. government's policy and practices for forcing decryption and/or influencing crypto-related design of online platforms and services, devices, and products, both via technical means and through the courts and legislatures. Riana also researches the benefits and detriments of strong encryption on free expression, political engagement, economic development, and other public interests.

Dr. Samuel Woolley is a writer and researcher. He is an assistant professor in the School of Journalism and in the School of Information (by courtesy) at the University of Texas at Austin. He is the program director of propaganda research at the Center for Media Engagement at UT. Woolley's work focuses on the ways in which emerging technologies are leveraged for both democracy and control. He is the author of the book "The Reality Game: How the Next Wave of Technology Will Break the Truth" (PublicAffairs), an exploration of how tools from artificial intelligence to virtual reality are being used in efforts to manipulate public opinion, and of what society can do to respond. He is the co-editor (with Dr. Philip N. Howard) of the book "Computational Propaganda" (Oxford University Press), a series of country-based case studies on social media and digital information operations.


Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.

Commentary by Daphne Keller

Daphne Keller's work focuses on platform regulation and Internet users' rights. She has testified before legislatures, courts, and regulatory bodies around the world, and published both academically and in the popular press on topics including platform content moderation practices, constitutional and human rights law, copyright, data protection, and national courts' global takedown orders. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2020, Daphne was the Director of Intermediary Liability at Stanford's Center for Internet and Society. She also served until 2015 as Associate General Counsel for Google, where she had primary responsibility for the company’s search products. Daphne has taught Internet law at Stanford, Berkeley, and Duke law schools. She is a graduate of Yale Law School, Brown University, and Head Start.

Daphne blogs about platform regulation and Internet users' rights.

Blog posts by Daphne Keller