Society

FSI researchers work to understand continuity and change in societies as they confront their problems and opportunities. This includes the implications of migration and human trafficking. What happens to a society when young girls exit the sex trade? How do groups moving between locations impact societies, economies, self-identity and citizenship? What are the ethnic challenges faced by an increasingly diverse European Union? From a policy perspective, scholars also work to investigate the consequences of security-related measures for society and its values.

The Europe Center reflects much of FSI’s agenda of investigating societies, serving as a forum for experts to research the cultures, religions and people of Europe. The Center sponsors several seminars and lectures, as well as visiting scholars.

Societal research also addresses issues of demography and aging, such as the social and economic challenges of providing health care for an aging population. How do older adults make decisions, and what societal tools need to be in place to ensure the resulting decisions are well-informed? FSI regularly brings in international scholars to look at these issues. They discuss how adults care for their older parents in rural China as well as the economic aspects of aging populations in China and India.

-

On January 11, 2020, Taiwan held its presidential and legislative elections. Many observers expected the People's Republic of China (PRC) to run an online disinformation campaign in the lead-up to the election in support of its preferred candidate, Han Kuo-yu, who was challenging incumbent Tsai Ing-wen. Such concerns were heightened by demonstrated PRC online disinformation targeting the Hong Kong protests, as well as claims by an alleged PRC spy that he had led disinformation efforts targeting Taiwan during the 2018 elections.

In this talk, we delve into case studies that highlight the role social media plays in disinformation within the broader Taiwanese information environment. We show that while fears of a PRC disinformation campaign were generally not realized, we did find evidence of coordinated inauthentic behavior on Facebook, particularly on fan Pages and Groups for the two candidates. Our findings hold implications for researchers trying to distinguish authentic hyper-partisan domestic activism from coordinated disinformation.

Carly Miller

Carly Miller is a social science researcher at the Stanford Internet Observatory. In addition to covering the Taiwanese election, she assists the team with other digital forensic research and with examining how researchers outside social media platforms approach disinformation campaigns and concepts such as attribution. Before coming to Stanford, Carly was a Team Lead at the Human Rights Investigations Lab at Berkeley Law School, where she worked to unearth patterns in various bad actors' media campaigns. Carly received her BA with honors in political science from the University of California, Berkeley in May 2019.

 

Vanessa Molter

 

Vanessa Molter is a Research Assistant at SIO and a Master in International Policy candidate at Stanford University, where she focuses on International Security in East Asia. At SIO, she monitors and writes on the Taiwanese social media environment. Previously, she studied Taiwanese security affairs at the Institute for National Defense and Security Research in Taipei, Taiwan, a government-affiliated defense think tank. Vanessa is fluent in Mandarin and holds a B.S. in International Business and East Asian Studies from Tübingen University, Germany.

 

-

Abstract: A Supply and Demand Framework for YouTube Politics (with Joseph Phillips)

YouTube is the most used social network in the United States. However, for a combination of sociological and technical reasons, there is little quantitative social science research on the political content on YouTube, in spite of widespread concern about the growth of extremist YouTube content. An emerging journalistic consensus assigns a central role to the video "recommendation engine," but we believe this conclusion is premature. Instead, we propose the "Supply and Demand" framework for analyzing politics on YouTube. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive the supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for alternative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.

Kevin Munger
Kevin Munger is an Assistant Professor of Political Science and Social Data Analytics at Penn State University (Ph.D., New York University, 2018). His research looks at how social media and other contemporary internet technologies have changed political communication. He has published research on the subject using a variety of methodologies, including textual analysis, field experiments, longitudinal surveys and qualitative theory. His research has appeared in leading journals like the American Journal of Political Science, Political Behavior, Political Communication, and Political Science Research & Methods. His present interests include cohort conflict in American politics and developing new methods for social science in a rapidly changing world.

-

Abstract: The problem of online disinformation is only getting worse. Social media may well play a role in the US 2020 presidential election and other major political events. But that doesn’t even begin to describe what future propaganda will look like. As Samuel Woolley shows, we will soon be navigating new technologies such as human-like automated voice systems, machine learning, ‘deep-fake’ AI-edited videos and images, interactive memes, virtual reality and augmented reality. In stories both deeply researched and compellingly written, Woolley describes this future, and explains how the technology can be manipulated, who might control it and its impact on political strategy. Finally, Woolley proposes strategic responses to this threat with the ultimate goal of empowering activists and pushing technology builders to design for democracy.

Samuel Woolley
Samuel Woolley is a researcher with a focus on emerging media technologies, propaganda and politics. His work looks at how automation, algorithms and AI are leveraged for both democracy and control. His forthcoming book, The Reality Game: How the Next Wave of Technology Will Break the Truth, will be released in January of 2020 by PublicAffairs/Hachette. It explores the future of digital disinformation and provides a pragmatic roadmap for how society can respond.

Woolley is an assistant professor in the School of Journalism at the Moody College of Communication at the University of Texas at Austin. He is the program director of disinformation research at the Center for Media Engagement (CME) at UT. He holds a PhD from the University of Washington in Seattle. His academic work has appeared in the Journal of Information Technology and Politics, the International Journal of Communication, the Routledge Handbook of Media, Conflict and Security, A Networked Self: Platforms, Stories, Connections, and The Political Economy of Robots. He is one of the founders of the Computational Propaganda Research Project, now based at the Oxford Internet Institute, University of Oxford. Woolley is also the founder of the Digital Intelligence Lab at the Institute for the Future (IFTF), a 50-year-old think tank based in Palo Alto, CA.

-

The science of cyber risk looks at a broad spectrum of risks across a variety of digital platforms. Often, though, work in the field is limited by a failure to draw on the knowledge of other fields, such as behavioral science, economics, law, management science, and political science. In a new Science Magazine article, "Cyber Risk Research Impeded by Disciplinary Barriers," cyber risk experts and researchers at Stanford University make a compelling case for the importance of a cross-disciplinary approach. Gregory Falco, security researcher at the Program on Geopolitics, Technology, and Governance, and lead author of the paper, talked recently with the Cyber Policy Center about the need for a holistic approach, both within the study of cyber risk and at a company level when an attack occurs.

CPC: Your recent perspective paper in Science Magazine highlights the issue of terminology when it comes to how organizations and institutions define a cyber attack. Why is it so important to have consistent naming when we are talking about cyber risk?

Falco: With any scientific discipline or field, there is a language for engaging with other experts. If there's no consistent language, or at least dialect, for communication around cyber risk, it's difficult to engage with scholars from different disciplines. For example: the phrase "cyber event" is contested, and the threshold for what an organization considers to be a cyber event varies substantially. Some organizations consider someone pinging their network a cyber event, while others only consider something a cyber event once an intrusion has been publicly disclosed. So there's a disparity when comparing metrics of cyber events from organization to organization because of the different thresholds for what's considered an event.

CPC: We've all been sent one of those emails letting us know our data may have been compromised, and your paper points out that it's nearly impossible to put foolproof protections into place; attacks are inevitable. Given that, how should companies weigh the various ways they can protect themselves?

Falco: The first exercise each organization should go through when it decides to be serious about cyber risk is to prioritize its assets. What is business critical? What is safety critical? Then, like all other risks, a cost-benefit analysis must be done for each asset based on its priority. If the asset is safety-critical, then resources should be allocated to help protect that asset or at least ensure its resilience. Trade-offs are inevitable; no company has unlimited resources. But starting with an understanding of where the priorities lie is critical.
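
To make this kind of cost-benefit triage concrete, here is a minimal Python sketch that ranks a few hypothetical assets first by criticality class and then by estimated loss avoided per dollar of protection. The asset names, loss estimates, and protection costs are invented for illustration and are not drawn from the paper.

```python
# Illustrative sketch only: toy cost-benefit triage of hypothetical assets.
# All names and dollar figures are invented, not taken from the Science paper.

assets = [
    # name, criticality class, estimated loss if compromised ($), cost to protect ($)
    {"name": "plant control system", "criticality": "safety",   "est_loss": 5_000_000, "protect_cost": 400_000},
    {"name": "customer database",    "criticality": "business", "est_loss": 2_000_000, "protect_cost": 250_000},
    {"name": "marketing site",       "criticality": "support",  "est_loss": 100_000,   "protect_cost": 150_000},
]

CRITICALITY_RANK = {"safety": 0, "business": 1, "support": 2}  # lower rank = higher priority

def priority(asset):
    """Rank safety-critical assets first, then by estimated loss avoided per dollar spent."""
    benefit_per_dollar = asset["est_loss"] / asset["protect_cost"]
    return (CRITICALITY_RANK[asset["criticality"]], -benefit_per_dollar)

for asset in sorted(assets, key=priority):
    ratio = asset["est_loss"] / asset["protect_cost"]
    print(f"{asset['name']}: {asset['criticality']}, loss avoided per protection dollar = {ratio:.1f}")
```

Any real prioritization would of course use the organization's own asset inventory and loss estimates; the point of the sketch is only that the ranking follows from priorities plus a simple benefit-to-cost comparison.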

CPC: In companies, cyber security often falls entirely to the Chief Information Security Officer (CISO). Your paper argues that’s shortsighted. What is gained when a company takes a more holistic approach?

Falco: Distributing responsibility across the organization catalyzes a security culture: one in which there is constant vigilance, or at least broad awareness, of cybersecurity concerns throughout the organization. Fostering a security culture is often suggested as a mechanism to help reduce cyber risk in organizations. The problem with not distributing responsibility is that when something happens, it's too easy to resort to finger-pointing at the CISO, and that's counterproductive. Efforts after an attack should focus on responding and being resilient, not finding a scapegoat.

CPC: Cyber risk largely focuses on prevention, but your paper argues that it's what happens after an attack that needs greater attention. Why is that?

Falco: Every organization will be attacked. However, organizations can differentiate themselves from a cyber risk standpoint by appropriately managing the situation after an attack. Some of the most significant damages to organizations can be reputational if communication after an attack is unclear or botched. Poor communication after an attack can result in major regulatory fines or valuation adjustments, as seen in cases like Yahoo, and that can have major business implications. Communications aren't the only important element of post-attack response. A thorough post-mortem of the organization's response to the attack can be an important learning experience and a way to plan for future attacks.

CPC: Protecting against cyber attacks and the losses that go with them can obviously be costly for companies. You make a case for collaboration among different fields, say among data scientists and economists. How can that be encouraged?

Falco: We argue that cross-disciplinary collaboration rarely happens organically. Therefore, we call on funding agencies like the NSF or DARPA to specify a preference for cross-disciplinary research when funding cyber risk projects. This isn't currently a typical feature of calls for proposals, but for cyber risk programs it should be. We encourage researchers to explore cyber risk questions at the margins of their discipline. Those questions may lend themselves to overlap with other disciplines and provide a starting point for cross-disciplinary collaboration.

For more on these topics, see a full list of recent publications from the Cyber Policy Center and the Program on Geopolitics, Technology, and Governance.

Gregory Falco (photo: Rod Searcey)
-

Ashish Goel
Abstract:

While the Internet has revolutionized many aspects of our lives, there are still no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe algorithmic and market-inspired approaches towards large scale decision making that our research group is exploring. We will start with a model of opinion dynamics that can potentially lead to polarization, and relate that to commonly used recommendation algorithms. We will then describe the algorithms behind Stanford's participatory budgeting platform, and the lessons that we learnt from deploying this platform in over 70 civic elections. We will use this to motivate the need for a modern theory of social choice that goes beyond voting on candidates. We will then describe ongoing practical work on an automated moderator bot for civic deliberation (in collaboration with Jim Fishkin's group), and ongoing theoretical work on deliberative approaches to decision making. We will conclude with a summary of open directions, focusing in particular on fair advertising. 
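
As a rough illustration of the kind of algorithmic aggregation participatory budgeting involves, the sketch below implements a simple greedy, votes-per-dollar heuristic. It is not the algorithm behind Stanford's platform, and the project names, costs, and vote counts are hypothetical.

```python
# Minimal sketch of one greedy participatory-budgeting rule: fund projects in
# order of approval votes per dollar until the budget is exhausted.
# Illustrative only; not the algorithm used by Stanford's platform.

def greedy_budget(projects, budget):
    """projects: list of (name, cost, votes) tuples; returns the funded project names."""
    funded, remaining = [], budget
    for name, cost, votes in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

# Hypothetical example: a $1,000,000 budget and four proposed projects.
projects = [
    ("bike lanes",       300_000, 1200),
    ("library upgrade",  500_000, 1500),
    ("park lighting",    200_000,  900),
    ("community center", 800_000, 1800),
]
print(greedy_budget(projects, 1_000_000))  # ['park lighting', 'bike lanes', 'library upgrade']
```

Even this toy rule shows why voting on candidates is not enough for budgeting decisions: outcomes depend on how costs, budgets, and preferences are aggregated, which is exactly the kind of question a modern theory of social choice has to address.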

Ashish Goel Bio

Lunch Seminar Series Flyer
  • E207, Encina Hall
  • 616 Jane Stanford Way, Stanford, CA 94305
 
Ashish Goel, Professor of Management Science and Engineering
-

The Program on Democracy and the Internet runs the work of the Kofi Annan Commission on Elections and Democracy in the Digital Age, which will produce guidelines to support democracies, particularly those of the Global South.

In the span of just two years, the widely shared utopian vision of the internet’s impact on governance has turned decidedly pessimistic.  The original promise of digital technologies was unapologetically democratic: empowering the voiceless, breaking down borders to build cross-national communities, and eliminating elite referees who restricted political discourse. 

That promise has been undercut by concern that the most democratic features of the internet are, in fact, endangering democracy itself.  Democracies pay a price for internet freedom, under this view, in the form of disinformation, hate speech, incitement, and foreign interference in elections.  They also become captive to the economic power of certain platforms, with all the accompanying challenges to privacy and speech regulation that these new, powerful information monopolies have posed.

As it forges ahead in its mandate, the Kofi Annan Commission on Elections and Democracy in the Digital Age must consider these many challenges, as well as the opportunities they present. Professor Nathaniel Persily, a member of the Kofi Annan Commission, has produced a framing paper for its work, available for download.

-

When China's government announced its ambitions for the country's theoretical, technological, and applied artificial intelligence development to reach a "world-leading level" by 2030, governments and markets worldwide took notice. So did DigiChina. The New Generation Artificial Intelligence Development Plan (AIDP), drafted by experts across China's bureaucracy and issued by the State Council in July 2017, was one of this nascent project's first major translations. Our team of four translators then split up to provide three different views of its significance: as a legacy of central planning, "not a moonshot"; as a bureaucratic maneuver by its authors, but one with an "uncommonly foresighted approach" to AI governance challenges; and as a detailed plan that could portend "surpassing the United States." Since 2017, Chinese officials, businesspeople, and researchers have mobilized remarkable efforts, even if the AIDP's authors might acknowledge that their specific dozen-year targets were educated guesses.

-

Jeff Hancock

 

Abstract:

A new trust framework is emerging, fueled by social, economic and technological forces that will profoundly alter how we trust not only what we see and read online, but also one another. At the same time, technology is influencing how we behave and relate to one another, with AI starting to mediate human-to-human communication. In this talk we will discuss how principles from psychology and communication can help us understand and predict trust dynamics in a world in which fake news is salient and uncertainty about AI is rampant. We will discuss several studies that reveal key principles to guide how we think about truth and trust on the internet.

Jeff Hancock Bio

Downloadable Flyer

-

Kate Starbird
Abstract:

This talk describes the disinformation campaign targeting the Syria Civil Defense (or "White Helmets"), a humanitarian response group that works in rebel-held areas of Syria. The White Helmets provide medical aid and search and rescue to people affected by the civil war in Syria. They also document the impacts of atrocities, including airstrikes and chemical weapons attacks, perpetrated by the Syrian regime and its Russian allies. For several years, the White Helmets have been the target of a campaign to undermine and delegitimize their work. In this talk, I describe a multi-study research effort that reveals how this multi-dimensional, cross-platform campaign "works": the media ecosystems that support the campaign, the networks of actors who collaborate to produce and spread its narratives (including government agents and "unwitting crowds" of online activists), and the "work" these actors participate in, using the affordances of social media platforms to connect, recruit, organize, promote their messages, attack opposing messages, and otherwise advance the goals of their campaign.

Kate Starbird Bio

 

 
