-

Abstract:

China’s cyberspace and technology regime is going through a period of change, albeit a slow one. The U.S.–China economic and tech competition both influences Chinese government developments and awaits their outcomes, and the 2017 Cybersecurity Law set up a host of still-unresolved questions. Data governance, security standards, market access, compliance, and other questions saw only modest new clarity in 2019. But 2020 promises new laws on personal information protection and data security, and the Stanford-based DigiChina Project in the Program on Geopolitics, Technology, and Governance is devoted to monitoring, translating, and explaining these developments. From AI governance to the nexus of cybersecurity and supply chains, this talk will summarize recent Chinese policymaking and lay out expectations for the year to come.

About the Speaker:

Graham Webster is editor in chief of the Stanford–New America DigiChina Project at the Stanford University Cyber Policy Center and a China digital economy fellow at New America. He was previously a senior fellow and lecturer at Yale Law School, where he was responsible for the Paul Tsai China Center’s U.S.–China Track 2 and Track 1.5 dialogues for five years before leading programming on cyberspace and technology issues. In the past, he wrote a CNET News blog on technology and society from Beijing, worked at the Center for American Progress, and taught East Asian politics at NYU's Center for Global Affairs. Webster holds a master's degree in East Asian studies from Harvard University and a bachelor's degree in journalism from Northwestern University. Webster also writes the independent Transpacifica e-mail newsletter.

-

Multilateral Negotiations on ICTs (information and communications technologies) and International Security: Process and Prospects for the UN Group of Government Experts and the UN Open-Ended Working Group

Abstract: The intent of this seminar is to provide an update on recent events at the UN relevant to international discussions of cybersecurity (and a primer of sorts on current UN processes for addressing this topic).

In 2018, UN Member States decided to establish two concurrent negotiations with nearly identical mandates on the international security dimension of ICTs—a sixth limited-membership UN Group of Governmental Experts (GGE) and an Open-Ended Working Group (OEWG) open to all governments. How did this happen? Are they competing or complementary endeavors? Is it likely that one will be able to bridge the longstanding divides on how international law applies to cyberspace or agree by consensus to additional norms of responsible State behavior? What would be a good outcome of each process? And how do these negotiations fit into the wider UN ecosystem, including the follow-up to the Secretary-General’s High-Level Panel on Digital Cooperation?

About the Speaker: Kerstin Vignard is an international security policy professional with nearly 25 years’ experience at the United Nations, with a particular interest in the nexus of international security policy and technology. Vignard is Deputy to the Director at UNIDIR, currently on temporary assignment leading UNIDIR’s team supporting the Chairmen of the latest Group of Governmental Experts (GGE) on Cyber Security and the Open-Ended Working Group. She has led UNIDIR’s team supporting four previous cyber GGEs. From 2013 to 2018, she initiated and led UNIDIR’s work on the weaponization of increasingly autonomous technologies, and she is the co-Principal Investigator of a CIFAR AI & Society grant examining potential regulatory approaches for security and defence applications of AI.


Despite pressure from President Donald Trump and Attorney General William Barr, Apple continues to stand its ground and refuses to re-engineer iPhones so law enforcement can unlock the devices. Apple has maintained that it has done everything required by law and that creating a "backdoor" would undermine cybersecurity and privacy for iPhone users everywhere.

Apple is right to stand firm in its position that building a "backdoor" could put user data at risk.

At its most basic, encryption is the act of converting plaintext (like a credit card number) into unintelligible ciphertext using a very large, random number called a key. Anyone with the key can convert the ciphertext back to plaintext. Persons without the key cannot, meaning that even if they acquire the ciphertext, it should still be impossible for them to discover the meaning of the underlying plaintext.
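The key/ciphertext relationship described above can be sketched in a few lines of Python. This is an illustrative toy (a one-time-pad XOR), not the encryption scheme Apple actually uses on iPhones:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"4111 1111 1111 1111"          # e.g., a credit card number
key = secrets.token_bytes(len(plaintext))   # a large, random key

ciphertext = xor_bytes(plaintext, key)      # unintelligible without the key
recovered = xor_bytes(ciphertext, key)      # anyone with the key can reverse it

assert recovered == plaintext
```

Real systems such as iOS rely on vetted algorithms like AES rather than a one-time pad, but the principle is the same: without the key, the ciphertext reveals nothing useful about the underlying plaintext.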

Full Text at CNN

 

 

 

 

Publication Type: Commentary
-

Abstract:

Considerable scholarship has established that algorithms are an increasingly important part of the information people encounter in everyday life. Much less work has focused on studying users’ experiences with, understandings of, and attitudes about how algorithms may influence what they see and do. The dearth of research on this topic may be due in part to the difficulty of studying a subject for which there is no known ground truth, given that details about algorithms are proprietary and rarely made public. In this talk, I will report on the methodological challenges of studying people’s algorithm skills based on 83 in-person interviews conducted in five countries. I will also discuss the types of algorithm skills identified in our data. The talk will advocate for more such scholarship to accompany existing system-level analyses of algorithms’ social implications and will offer a blueprint for how to do this.

About the Speaker:

Eszter Hargittai is Professor and Chair of Internet Use and Society at the Institute of Communication and Media Research, University of Zurich. Previously, she was the Delaney Family Professor in the Communication Studies Department at Northwestern University. In 2019, she was elected Fellow of the International Communication Association and also received the William F. Ogburn Mid-Career Achievement Award from the American Sociological Association’s section on Communication, Information Technology and Media Sociology. For over two decades, she has been researching people’s Internet uses and skills, and how these relate to questions of social inequality.

 


Protecting Electoral Integrity in the Digital Age | The Report of the Kofi Annan Commission on Elections and Democracy in the Digital Age

New information and communication technologies (ICTs) pose difficult challenges for electoral integrity. In recent years foreign governments have used social media and the Internet to interfere in elections around the globe. Disinformation has been weaponized to discredit democratic institutions, sow societal distrust, and attack political candidates. Social media has proved a useful tool for extremist groups to send messages of hate and to incite violence. Democratic governments strain to respond to a revolution in political advertising brought about by ICTs. Electoral integrity has been at risk from attacks on the electoral process and on the quality of democratic deliberation.

The relationship between the Internet, social media, elections, and democracy is complex, systemic, and unfolding. Our ability to assess some of the most important claims about social media is constrained by the unwillingness of the major platforms to share data with researchers. Nonetheless, we are confident about several important findings.

Publication Type: Annual Reports
Authors: Alex Stamos
-

On January 11, 2020, Taiwan held its presidential and legislative elections. Many observers expected the People’s Republic of China (PRC) to run an online disinformation campaign during the lead-up to the election in support of its preferred candidate, Han Kuo-yu, who was challenging incumbent Tsai Ing-wen. Such concerns were heightened by demonstrated PRC online disinformation targeting the Hong Kong protests and by claims from an alleged PRC spy that he led disinformation efforts targeting Taiwan during the 2018 elections.

In this talk, we delve into case studies that highlight the role social media plays in disinformation in the Taiwanese information environment. We show that while fears of disinformation were generally not realized, we did find evidence of coordinated inauthentic behavior on Facebook, in particular on fan Pages and Groups for the two candidates. Our findings hold implications for researchers trying to distinguish authentic hyper-partisan domestic activism from coordinated disinformation.


Carly Miller is a social science researcher at the Stanford Internet Observatory. In addition to covering the Taiwanese election, she assists the team with other digital forensic research and with examining how researchers external to social media platforms approach disinformation campaigns and concepts such as attribution. Before coming to Stanford, Carly was a Team Lead at the Human Rights Investigations Lab at Berkeley Law School, where she worked to unearth patterns in various bad actors’ media campaigns. Carly received her BA with honors in political science from the University of California, Berkeley in May 2019.

 


Vanessa Molter is a Research Assistant at SIO and a Master in International Policy candidate at Stanford University, where she focuses on International Security in East Asia. At SIO, she monitors and writes on the Taiwanese social media environment. Previously, she has studied Taiwanese security affairs at the Institute for National Defense and Security Research in Taipei, Taiwan, a government-affiliated defense think-tank. Vanessa is fluent in Mandarin and holds a B.S. in International Business and East Asian studies from Tubingen University, Germany.

 

-

Abstract: A Supply and Demand Framework for YouTube Politics (with Joseph Phillips)

YouTube is the most used social network in the United States. However, for a combination of sociological and technical reasons, there exists little quantitative social science research on political content on YouTube, in spite of widespread concern about the growth of extremist YouTube content. An emerging journalistic consensus theorizes a central role for the video "recommendation engine," but we believe this conclusion is premature. Instead, we propose the "Supply and Demand" framework for analyzing politics on YouTube. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive the supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for alternative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.

Kevin Munger is Assistant Professor of Political Science and Social Data Analytics at Penn State University; he received his Ph.D. from New York University in 2018. His research looks at how social media and other contemporary internet technologies have changed political communication. He has published research on the subject using a variety of methodologies, including textual analysis, field experiments, longitudinal surveys, and qualitative theory. His research has appeared in leading journals such as the American Journal of Political Science, Political Behavior, Political Communication, and Political Science Research & Methods. His present interests include cohort conflict in American politics and developing new methods for social science in a rapidly changing world.

News Type: Q&As

The science of cyber risk looks at a broad spectrum of risks across a variety of digital platforms. Often, though, work in the field is limited by a failure to draw on the knowledge of other fields, such as behavioral science, economics, law, management science, and political science. In a new Science Magazine article, “Cyber Risk Research Impeded by Disciplinary Barriers,” cyber risk experts and researchers at Stanford University make a compelling case for the importance of a cross-disciplinary approach. Gregory Falco, security researcher at the Program on Geopolitics, Technology, and Governance and lead author of the paper, talked recently with the Cyber Policy Center about the need for a holistic approach, both within the study of cyber risk and at a company level when an attack occurs.

CPC: Your recent perspective paper in Science Magazine highlights the issue of terminology when it comes to how organizations and institutions define a cyber attack. Why is it so important to have consistent naming when we are talking about cyber risk?

Falco: With any scientific discipline or field, there is a language for engaging with other experts. If there’s no consistent language or at least dialect for communication around cyber risk, it’s difficult to engage with scholars from different disciplines. For example: The phrase “cyber event” is contested and the threshold for what an organization considers to be a cyber event varies substantially. Some organizations consider someone pinging their network as a cyber event, others only consider something a cyber event once an intrusion has been publicly disclosed. So there’s a disparity when comparing metrics of cyber events from organization to organization because of the different thresholds of what’s considered an event.

CPC: We’ve all been sent one of those emails letting us know our data may have been compromised, and your paper points out it’s nearly impossible to put foolproof protections into place; attacks are inevitable. Given that, how should companies weigh the various ways they can protect themselves?

Falco: The first exercise each organization should go through when it decides to get serious about cyber risk is to prioritize its assets. What is business critical? What is safety critical? Then, like all other risks, a cost-benefit analysis must be done for each asset based on its priority. If an asset is safety critical, then resources should be allocated to help protect it or at least ensure its resilience. Trade-offs are inevitable; no company has unlimited resources. But starting with an understanding of where the priorities are is critical.

CPC: In companies, cyber security often falls entirely to the Chief Information Security Officer (CISO). Your paper argues that’s shortsighted. What is gained when a company takes a more holistic approach?

Falco: Distributing responsibility across the organization catalyzes a security culture. A security culture is one where there is a constant vigilance or at least broad awareness of cybersecurity concerns throughout the organization. Fostering a security culture is often suggested as a mechanism to help reduce cyber risk in organizations. The problem with not distributing responsibility is that when something happens, it’s too easy to resort to finger-pointing at the CISO, and that’s counterproductive. Efforts after an attack should be on responding and being resilient, not finding the scapegoat.

CPC: Cyber risk largely focuses on prevention, but your paper argues that it’s what happens after an attack that needs greater attention. Why is that?

Falco: Every organization will be attacked. However, organizations can differentiate themselves from a cyber risk standpoint by appropriately managing the situation after an attack. Some of the most significant damage to organizations can be reputational if communication after an attack is unclear or botched. Poor communication after an attack can result in major regulatory fines or valuation adjustments, as seen in cases like Yahoo, and that can have major business implications. Communications aren’t the only important element of post-attack response. A thorough post-mortem of the organization’s response to the attack can be an important learning experience and a way to plan for future attacks.

CPC: Protecting against cyber attacks and the losses that go with them can obviously be costly for companies. You make a case for collaboration among different fields, say among data scientists and economists. How can that be encouraged?

Falco: We argue that cross-disciplinary collaboration rarely happens organically. Therefore, we call on funding agencies like the NSF or DARPA to specify a preference for cross-disciplinary research when funding cyber risk projects. Typically, this isn’t a feature of calls for proposals, but for cyber risk programs it should be. We encourage researchers to explore cyber risk questions at the margins of their discipline. Those questions may lend themselves to overlap with other disciplines and offer a starting point for cross-disciplinary collaboration.

For more on these topics, see a full list of recent publications from the Cyber Policy Center and the Program on Geopolitics, Technology, and Governance.

Gregory Falco (photo: Rod Searcey)
-

Abstract:

While the Internet has revolutionized many aspects of our lives, there are still no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe algorithmic and market-inspired approaches towards large scale decision making that our research group is exploring. We will start with a model of opinion dynamics that can potentially lead to polarization, and relate that to commonly used recommendation algorithms. We will then describe the algorithms behind Stanford's participatory budgeting platform, and the lessons that we learnt from deploying this platform in over 70 civic elections. We will use this to motivate the need for a modern theory of social choice that goes beyond voting on candidates. We will then describe ongoing practical work on an automated moderator bot for civic deliberation (in collaboration with Jim Fishkin's group), and ongoing theoretical work on deliberative approaches to decision making. We will conclude with a summary of open directions, focusing in particular on fair advertising. 

Ashish Goel Bio

Lunch Seminar Series Flyer
  • E207, Encina Hall
  • 616 Jane Stanford Way, Stanford, CA 94305
 
Ashish Goel, Professor of Management Science and Engineering
Seminars

The Program on Democracy and the Internet runs the work of the Kofi Annan Commission on Elections and Democracy in the Digital Age, which will produce guidelines to support democracies, particularly those of the Global South.

In the span of just two years, the widely shared utopian vision of the internet’s impact on governance has turned decidedly pessimistic.  The original promise of digital technologies was unapologetically democratic: empowering the voiceless, breaking down borders to build cross-national communities, and eliminating elite referees who restricted political discourse. 

That promise has been undercut by concern that the most democratic features of the internet are, in fact, endangering democracy itself.  Democracies pay a price for internet freedom, under this view, in the form of disinformation, hate speech, incitement, and foreign interference in elections.  They also become captive to the economic power of certain platforms, with all the accompanying challenges to privacy and speech regulation that these new, powerful information monopolies have posed.

As it forges ahead in its mandate, the Kofi Annan Commission on Elections and Democracy in the Digital Age must consider these many challenges, as well as the opportunities they present. Professor Nathaniel Persily, a member of the Kofi Annan Commission, has produced a framing paper for its work, available for download.

Publication Type: Journal Articles