
Multilateral Negotiations on ICTs (information and communications technologies) and International Security: Process and Prospects for the UN Group of Government Experts and the UN Open-Ended Working Group

Abstract: The intent of this seminar is to provide an update on recent events at the UN relevant to international discussions of cybersecurity (and a primer of sorts on current UN processes for addressing this topic).

In 2018, UN Member States decided to establish two concurrent negotiations with nearly identical mandates on the international security dimension of ICTs—a sixth limited membership UN Group of Governmental Experts (GGE) and an Open-Ended Working Group (OEWG) open to all governments. How did this happen? Are they competing or complementary endeavors? Is it likely that one will be able to bridge the longstanding divides on how international law applies to cyberspace or agree by consensus to additional norms of responsible State behavior? What would be a good outcome of each process? And how do these negotiations fit into the wider UN ecosystem, including the follow-up to the Secretary-General’s High Level Panel on Digital Cooperation?

About the Speaker: Kerstin Vignard is an international security policy professional with nearly 25 years’ experience at the United Nations, with a particular interest in the nexus of international security policy and technology. Vignard is Deputy to the Director at UNIDIR, currently on temporary assignment leading UNIDIR’s team supporting the Chairmen of the latest Group of Governmental Experts (GGE) on Cyber Security and the Open-Ended Working Group. She has led UNIDIR’s team supporting four previous cyber GGEs. From 2013 to 2018, she initiated and led UNIDIR’s work on the weaponization of increasingly autonomous technologies, and is the co-Principal Investigator of a CIFAR AI & Society grant examining potential regulatory approaches for security and defence applications of AI.


Despite pressure from President Donald Trump and Attorney General William Barr, Apple continues to stand its ground and refuses to re-engineer iPhones so law enforcement can unlock the devices. Apple has maintained that it has done everything required by law and that creating a "backdoor" would undermine cybersecurity and privacy for iPhone users everywhere.

Apple is right to stand firm in its position that building a "backdoor" could put user data at risk.

At its most basic, encryption is the act of converting plaintext (like a credit card number) into unintelligible ciphertext using a very large, random number called a key. Anyone with the key can convert the ciphertext back to plaintext. Persons without the key cannot, meaning that even if they acquire the ciphertext, it should still be impossible for them to discover the meaning of the underlying plaintext.
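The plaintext-to-ciphertext mapping described above can be sketched with a one-time pad, the simplest symmetric scheme, where the key is a string of random bytes XORed with the plaintext. This is a minimal illustration of the role the key plays, not the encryption scheme Apple actually uses on iPhones:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the matching key byte.
    The same function encrypts and decrypts, since XOR is its own inverse."""
    if len(key) != len(data):
        raise ValueError("one-time pad key must match the data length")
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"4111 1111 1111 1111"         # e.g. a credit card number
key = secrets.token_bytes(len(plaintext))  # large, random key

ciphertext = xor_cipher(plaintext, key)    # unintelligible without the key
recovered = xor_cipher(ciphertext, key)    # anyone with the key recovers it
assert recovered == plaintext
```

Real systems rely on vetted ciphers such as AES rather than raw XOR; the point here is only that possession of the key is what separates those who can read the data from those who cannot.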

Full Text at CNN


Publication Type: Commentary

Abstract:

Considerable scholarship has established that algorithms are an increasingly important part of what information people encounter in everyday life. Much less work has focused on studying users’ experiences with, understandings of, and attitudes about how algorithms may influence what they see and do. The dearth of research on this topic may be in part due to the difficulty in studying a subject about which there is no known ground truth, given that details about algorithms are proprietary and rarely made public. In this talk, I will report on the methodological challenges of studying people’s algorithm skills based on 83 in-person interviews conducted in five countries. I will also discuss the types of algorithm skills identified from our data. The talk will advocate for more such scholarship to accompany existing system-level analyses of algorithms’ social implications and offer a blueprint for how to do this.

About the Speaker:

Eszter Hargittai is Professor and Chair of Internet Use and Society at the Institute of Communication and Media Research, University of Zurich. Previously, she was the Delaney Family Professor in the Communication Studies Department at Northwestern University. In 2019, she was elected Fellow of the International Communication Association and also received the William F. Ogburn Mid-Career Achievement Award from the American Sociological Association’s section on Communication, Information Technology and Media Sociology. For over two decades, she has been researching people’s Internet uses and skills, and how these relate to questions of social inequality.


Protecting Electoral Integrity in the Digital Age | The Report of the Kofi Annan Commission on Elections and Democracy in the Digital Age

New information and communication technologies (ICTs) pose difficult challenges for electoral integrity. In recent years foreign governments have used social media and the Internet to interfere in elections around the globe. Disinformation has been weaponized to discredit democratic institutions, sow societal distrust, and attack political candidates. Social media has proved a useful tool for extremist groups to send messages of hate and to incite violence. Democratic governments strain to respond to a revolution in political advertising brought about by ICTs. Electoral integrity has been at risk from attacks on the electoral process, and on the quality of democratic deliberation.

The relationship between the Internet, social media, elections, and democracy is complex, systemic, and unfolding. Our ability to assess some of the most important claims about social media is constrained by the unwillingness of the major platforms to share data with researchers. Nonetheless, we are confident about several important findings.

Publication Type: Annual Reports
Authors: Alex Stamos

On January 11, 2020, Taiwan held its presidential and legislative elections. Many observers expected the People’s Republic of China (PRC) to run an online disinformation campaign during the lead-up to the election in support of their preferred candidate, Han Kuo-yu, who was challenging incumbent Tsai Ing-wen. Such concerns were heightened by demonstrated PRC online disinformation targeting the Hong Kong protests, and by claims by an alleged PRC spy that he had led disinformation efforts targeting Taiwan during the 2018 elections.

In this talk, we delve into case studies that highlight the role social media plays in disinformation in the Taiwanese information environment at large. We show that while fears of disinformation were generally not realized, we did find evidence of coordinated inauthentic behavior on Facebook, in particular on fan Pages and Groups for the two candidates. Our findings hold implications for researchers trying to distinguish authentic hyper-partisan domestic activism from coordinated disinformation.


Carly Miller is a social science researcher at the Stanford Internet Observatory. In addition to covering the Taiwanese election, she assists the team with other digital forensic research and with examining how researchers external to social media platforms approach disinformation campaigns and concepts such as attribution. Before coming to Stanford, Carly was a Team Lead at the Human Rights Investigations Lab at Berkeley Law School, where she worked to unearth patterns in various bad actors’ media campaigns. Carly received her BA with honors in political science from the University of California, Berkeley in May 2019.

 


Vanessa Molter is a Research Assistant at SIO and a Master in International Policy candidate at Stanford University, where she focuses on international security in East Asia. At SIO, she monitors and writes on the Taiwanese social media environment. Previously, she studied Taiwanese security affairs at the Institute for National Defense and Security Research in Taipei, Taiwan, a government-affiliated defense think tank. Vanessa is fluent in Mandarin and holds a B.S. in International Business and East Asian Studies from Tübingen University, Germany.

 


Abstract: A Supply and Demand Framework for YouTube Politics (with Joseph Phillips)

YouTube is the most used social network in the United States. However, for a combination of sociological and technical reasons, there exists little quantitative social science research on political content on YouTube, in spite of widespread concern about the growth of extremist YouTube content. An emerging journalistic consensus theorizes the central role played by the video "recommendation engine," but we believe that this is premature. Instead, we propose the "Supply and Demand" framework for analyzing politics on YouTube. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for alternative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.

Kevin Munger is Assistant Professor of Political Science and Social Data Analytics at Penn State University (Ph.D., New York University, 2018). His research looks at how social media and other contemporary internet technologies have changed political communication. He has published research on the subject using a variety of methodologies, including textual analysis, field experiments, longitudinal surveys and qualitative theory. His research has appeared in leading journals such as the American Journal of Political Science, Political Behavior, Political Communication, and Political Science Research & Methods. His present interests include cohort conflict in American politics and developing new methods for social science in a rapidly changing world.

News Type: Q&As

The science of cyber risk looks at a broad spectrum of risks across a variety of digital platforms. Often though, the work done within the field is limited by a failure to explore the knowledge of other fields, such as behavioral science, economics, law, management science, and political science. In a new Science Magazine article, “Cyber Risk Research Impeded by Disciplinary Barriers,” cyber risk experts and researchers at Stanford University make a compelling case for the importance of a cross-disciplinary approach. Gregory Falco, security researcher at the Program on Geopolitics, Technology, and Governance, and lead author of the paper, talked recently with the Cyber Policy Center about the need for a holistic approach, both within the study of cyber risk, and at a company level when an attack occurs.

CPC: Your recent perspective paper in Science Magazine highlights the issue of terminology when it comes to how organizations and institutions define a cyber attack. Why is it so important to have consistent naming when we are talking about cyber risk?

Falco: With any scientific discipline or field, there is a language for engaging with other experts. If there’s no consistent language or at least dialect for communication around cyber risk, it’s difficult to engage with scholars from different disciplines. For example: The phrase “cyber event” is contested and the threshold for what an organization considers to be a cyber event varies substantially. Some organizations consider someone pinging their network as a cyber event, others only consider something a cyber event once an intrusion has been publicly disclosed. So there’s a disparity when comparing metrics of cyber events from organization to organization because of the different thresholds of what’s considered an event.

CPC: We’ve all been sent one of those emails letting us know our data may have been compromised and your paper points out it’s nearly impossible to put foolproof protections into place; attacks are inevitable. Given that, how should companies weigh the various ways they can protect themselves?

Falco: The first exercise each organization should go through when it decides to be serious about cyber risk is to prioritize its assets. What is business critical? What is safety critical? Then, like all other risks, a cost-benefit analysis must be done for each asset based on its priority. If the asset is safety-critical, then resources should be allocated to help protect that asset or at least ensure its resilience. Trade-offs are inevitable; no company has unlimited resources. But starting with an understanding of where the priorities are is critical.

CPC: In companies, cyber security often falls entirely to the Chief Information Security Officer (CISO). Your paper argues that’s shortsighted. What is gained when a company takes a more holistic approach?

Falco: Distributing responsibility across the organization catalyzes a security culture. A security culture is one where there is a constant vigilance or at least broad awareness of cybersecurity concerns throughout the organization. Fostering a security culture is often suggested as a mechanism to help reduce cyber risk in organizations. The problem with not distributing responsibility is that when something happens, it’s too easy to resort to finger-pointing at the CISO, and that’s counterproductive. Efforts after an attack should be on responding and being resilient, not finding the scapegoat.

CPC: Cyber risk largely focuses on prevention, but your paper argues that it’s what happens after an attack that needs greater attention. Why is that?

Falco: Every organization will be attacked. However organizations can differentiate themselves from a cyber risk standpoint by appropriately managing the situation after an attack. Some of the most significant damages to organizations can be reputational if communication after an attack is unclear or botched. Poor communication after an attack can result in major regulatory fines or valuation adjustments as seen in cases like Yahoo and that can have major business implications. Communications aren’t the only important element of post-attack response. A thorough post-mortem of the organization’s response to the attack can be an important learning experience and a way to plan for future attacks.

CPC: Protecting against cyber attacks and the losses that go with them can obviously be costly for companies. You make a case for collaboration among different fields, say among data scientists and economists. How can that be encouraged?

Falco: We argue that cross-disciplinary collaboration rarely happens organically. Therefore, we call on funding agencies like the NSF or DARPA to specify a preference for cross-disciplinary research when funding cyber risk projects. Typically, this isn’t a feature of current calls for proposals, but for cyber risk programs it should be. We encourage researchers to explore cyber risk questions at the margins of their discipline. Those questions may lend themselves to overlap with other disciplines and offer a starting point for cross-disciplinary collaboration.

For more on these topics, see a full list of recent publications from the Cyber Policy Center and the Program on Geopolitics, Technology, and Governance.

Gregory Falco (photo: Rod Searcey)

Abstract:

While the Internet has revolutionized many aspects of our lives, there are still no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe algorithmic and market-inspired approaches towards large scale decision making that our research group is exploring. We will start with a model of opinion dynamics that can potentially lead to polarization, and relate that to commonly used recommendation algorithms. We will then describe the algorithms behind Stanford's participatory budgeting platform, and the lessons that we learnt from deploying this platform in over 70 civic elections. We will use this to motivate the need for a modern theory of social choice that goes beyond voting on candidates. We will then describe ongoing practical work on an automated moderator bot for civic deliberation (in collaboration with Jim Fishkin's group), and ongoing theoretical work on deliberative approaches to decision making. We will conclude with a summary of open directions, focusing in particular on fair advertising. 

Ashish Goel, Professor of Management Science and Engineering
  • E207, Encina Hall
  • 616 Jane Stanford Way, Stanford, CA 94305

The Program on Democracy and the Internet supports the work of the Kofi Annan Commission on Elections and Democracy in the Digital Age, which will produce guidelines to support democracies, particularly those of the global south.

In the span of just two years, the widely shared utopian vision of the internet’s impact on governance has turned decidedly pessimistic.  The original promise of digital technologies was unapologetically democratic: empowering the voiceless, breaking down borders to build cross-national communities, and eliminating elite referees who restricted political discourse. 

That promise has been undercut by concern that the most democratic features of the internet are, in fact, endangering democracy itself.  Democracies pay a price for internet freedom, under this view, in the form of disinformation, hate speech, incitement, and foreign interference in elections.  They also become captive to the economic power of certain platforms, with all the accompanying challenges to privacy and speech regulation that these new, powerful information monopolies have posed.

As it forges ahead in its mandate, the Kofi Annan Commission on Elections and Democracy in the Digital Age must consider these many challenges, as well as the opportunities they present. Professor Nathaniel Persily, a member of the Kofi Annan Commission, has produced a framing paper for its work, available for download.

Publication Type: Journal Articles
News Type: News

Marietje Schaake, an outgoing Member of the European Parliament who initiated the net neutrality law now in effect throughout Europe, will be the Cyber Policy Center’s international policy director, and an international policy fellow at the university’s Institute for Human-Centered Artificial Intelligence. 

 

Marietje Schaake standing on train platform

The Freeman Spogli Institute for International Studies (FSI) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are pleased to announce that Marietje Schaake has been named to international policy roles in each of their organizations.

At FSI, Schaake will serve as the first international policy director of the Cyber Policy Center. With a focus on cybersecurity, disinformation, digital democracy and election security, the Cyber Policy Center’s research, teaching and policy engagement aims to bring new insights and solutions to national governments, international institutions and industry.

Schaake will also be an international policy fellow at Stanford HAI, which seeks to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition. The university-wide institute is committed to working with industry, governments and civil society organizations that share the goal of a better future for humanity through AI. 

Connecting Cyber Research with the World

As international policy director at the Cyber Policy Center, Schaake will conduct policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she will represent the center to governments, NGOs and the technology industry. 

“Over the course of her career in the European Parliament, Marietje Schaake has distinguished herself as someone who not only has a deep understanding of cyber policy issues, but knows how to enact the appropriate policy-related measures in the real world,” said Nathaniel Persily, the center’s faculty co-director, and the James B. McClatchy Professor of Law at Stanford Law School. “She is a fantastic addition to our growing team of researchers and practitioners from across disciplines, and I can’t wait to welcome her to campus in the fall.” 

In addition to research and policy outreach, Schaake will teach courses on cyber policy, particularly from an international perspective, and bring leaders to Stanford from around the world to discuss cyber policy.  

“Marietje’s extensive experience in politics, with a special focus on cyber policy, will bring a critical perspective to our classrooms,” said Michael McFaul, director of FSI. “Her stellar reputation and track record as a policymaker will be key in building connections between Stanford’s community of students, scholars and relevant policy influencers around the world.” 

At the Forefront of AI Policy and Scholarship

As the inaugural international policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Schaake will work with faculty to translate research into practical and implementable policy recommendations, and support the institute’s work to partner with AI leaders across sectors.

“AI is a technology that will affect every dimension of human life, and to ensure that its development and deployment is broadly beneficial for humans and society, we need to incorporate global perspectives into our work,” said Rob Reich, HAI associate director and professor of political science. “Marietje played a leading role in establishing the field of cyber policy in Europe, and will contribute enormously to the creation of a community of research, policy and practice focused on addressing the real-world impact of AI. And through her writing and teaching, she can help to shape the future generation of leaders across academia, government, industry and civil society.”

A Career of Policy Impact

Prior to joining Stanford, Marietje Schaake led an active career in politics and civic service. She was a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in European Parliament, where she was first elected in 2009. 

In the European Parliament, Schaake focused on trade, foreign policy and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.

“It is an honor to be joining the talented and dedicated teams at FSI and HAI on the Stanford campus,” said Schaake. “I look forward to researching and developing sensible cyber policy recommendations and to continue to bridge the gaps between governments and the technology sector around the world.”

###

About the Cyber Policy Center

The digital age has exposed countries to new security threats and sovereignty challenges that policymakers have only begun to address. In addition, social media and network technologies increasingly strain the balance between protecting freedom of expression and preventing foreign actors from influencing elections. To date, technological advancement in this domain has outpaced government policies, doctrines or regulations. The Cyber Policy Center at the Freeman Spogli Institute for International Studies at Stanford University aims to address this need through research, policy advocacy and teaching. Program areas address topics including cybersecurity, election security, misinformation, digital democracy and human rights, and emerging technologies. Through research, policy engagement and teaching, the Cyber Policy Center brings cutting-edge insights and solutions to national governments, international institutions and industry.

About the Institute for Human-Centered Artificial Intelligence

At Stanford HAI, our vision for the future is led by our commitment to studying, guiding and developing human-centered AI technologies and applications. We believe AI should be collaborative, augmentative, and enhancing to human productivity and quality of life. Our mission is to advance AI research, education, policy, and practice to improve the human condition. Stanford HAI leverages the university’s strength across all disciplines, including business, economics, education, genomics, law, literature, medicine, neuroscience, philosophy and more. These complement Stanford's tradition of leadership in AI, computer science, engineering and robotics.

Marietje Schaake can be reached by email at mschaake@stanford.edu. Her website is www.marietjeschaake.eu.

Media Inquiries: Mike Sellitto, Deputy Director, Stanford Institute for Human-Centered Artificial Intelligence, shai-press@stanford.edu

 