Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues, including nuclear nonproliferation and conflicts such as that between North and South Korea. But their research also examines new and emerging areas that transcend traditional borders, such as the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 


Image: Riot at the U.S. Capitol

Social media and digital technologies have come under fire for their contribution to the development of the groups that ultimately stormed the U.S. Capitol on January 6. Following the insurrection attempt, Facebook, Twitter, Google, and other major platforms banned or suspended President Trump’s accounts. Google and Apple removed Parler from their app stores, and Amazon dropped the site from its cloud hosting service, taking Parler offline indefinitely. This panel will discuss the role of social media during the Trump presidency, including the role of platform policies in fomenting or responding to the recent violence, the benefits and risks posed by the steps subsequently taken, and what this means for the future of speech online.

Panelists include:

  • Nate Persily, faculty co-director of the Stanford Cyber Policy Center, director of the Center’s Program on Democracy and the Internet, and Professor at Stanford Law School
  • Daphne Keller, Director of the Cyber Policy Center’s Program on Platform Regulation
  • Alex Stamos, Director of the Cyber Policy Center’s Internet Observatory
  • Renee DiResta, Research Manager at the Cyber Policy Center’s Internet Observatory
  • Moderated by Kelly Born, Executive Director of the Cyber Policy Center

 


Renée DiResta is the former Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée's writing and speeches on her website, www.reneediresta.com, or follow her @noupside.

 

Former Research Manager, Stanford Internet Observatory

Daphne Keller's work focuses on platform regulation and Internet users' rights. She has testified before legislatures, courts, and regulatory bodies around the world, and published both academically and in the popular press on topics including platform content moderation practices, constitutional and human rights law, copyright, data protection, and national courts' global takedown orders. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2020, Daphne was the Director of Intermediary Liability at Stanford's Center for Internet and Society. She also served until 2015 as Associate General Counsel for Google, where she had primary responsibility for the company’s search products. Daphne has taught Internet law at Stanford, Berkeley, and Duke law schools. She is a graduate of Yale Law School, Brown University, and Head Start.

Other Affiliations and Roles:


Director of Program on Platform Regulation, Cyber Policy Center
Lecturer, Stanford Law School

Appeared originally in Lawfare, November 2020

Code is law. Lawrence Lessig’s 1999 assertion was that in a digital world, programmers were scripting a values system into their technology, often in a fit of absent-mindedness. Twenty years later, the U.S. and Europe are living in the geopolitical landscape those early pioneers created. One-time plucky startups have grown into supergiants vacuuming up ever more data and market share. Artificial intelligence (AI) is becoming both an enabler for social well-being and an instrument of authoritarian control. Emerging technologies are transforming militaries, creating new battlefields and changing the nature of warfare. U.S. and Chinese officials crisscross the world in a geostrategic great game for 5G dominance. And social media has become a vector for bad actors—including illiberal states like Russia and China—to disrupt and degrade democracies. In 2020, code is power.

The coronavirus has accelerated these trends. The pandemic has fueled data processing in contact-tracing apps; exposed vulnerabilities in supply chains; created new dependencies in classrooms and boardrooms on video communications technologies; and powered a spike in anti-vaxxer disinformation, QAnon conspiracy theories and radicalization.

Journal article by Marietje Schaake and Tyson Barker

THE IMMEDIATE AFTERMATH OF ELECTION DAY 2020 offers lessons about the state of American democracy as well as technology’s influence on voters, the voting process, and democratic institutions. Whatever the outcome of an election, and despite a polarized society, it is clear that all Americans share a common stake in protecting the integrity and independence of the administration of elections, the declaration of winners and losers, and a peaceful transition of power. Yet other questions persist: How have disinformation campaigns, whether domestic or foreign, affected the electoral process? And what does the future hold in terms of a tech agenda?

By Marietje Schaake

ANTITRUST AND PRIVACY CONCERNS are two of the most high-profile topics on the tech policy agenda. Checks and balances to counteract the power of companies such as Google, Amazon, and Facebook are under consideration in Congress, though a polarized political environment is a hindrance. But a domestic approach to tech policy will be insufficient, as the users of the large American tech companies are predominantly outside the United States. We need to point the way toward a transnational policy effort that puts democratic principles and basic human rights above the commercial interests of these private companies.

These issues are central to the eight-week Stanford University course, “Technology and the 2020 Election: How Silicon Valley Technologies Affect Elections and Shape Democracy.” The joint class for Stanford students and Stanford’s Continuing Studies Community enrolls a cross-generational population of more than 400 students from around the world.

White paper by Marietje Schaake

Blog post by the Stanford Internet Observatory

On November 5, 2020, Facebook announced the takedown of two networks:

  • 25 Pages, 31 profiles, and 2 Instagram accounts affiliated with the Muslim Brotherhood. According to Facebook, the operation originated in Egypt, Turkey, and Morocco. The network targeted audiences both directly in Egypt and across the Middle East and North Africa.

  • 11 Pages, 6 Groups, 33 profiles, and 47 Instagram accounts that originated in Afghanistan and Iran and targeted Farsi/Dari speakers in Afghanistan.

Facebook suspended these networks not due to the content of their posts, but for coordinated inauthentic behavior: the Facebook Pages and Groups were managed by fake accounts. A summary of the two networks is below, and full reports are linked at the top of this post. Facebook's announcement is here.

Muslim Brotherhood-Linked Takedown

Many social media disinformation campaigns—and associated takedowns—have been linked to Saudi Arabia, the UAE, and Egypt. But we believe this is the first takedown linked to the opposing side: pro-Muslim Brotherhood actors. Interestingly, this network appears markedly similar to networks from anti-Muslim Brotherhood disinformation campaigns on Facebook. Both sides create professional branding for Pages and share polished, original videos. We conjecture that, like anti-Muslim Brotherhood operations, this network may be linked to a digital marketing firm in Egypt. These firms have a particular signature.

Key takeaways:

  • This was a complex cross-platform operation with a substantial audience. The Facebook Pages the Stanford Internet Observatory analyzed had nearly 1.5 million followers. The operation was also linked to many Twitter accounts, YouTube channels, and Telegram channels, many of which had large followings. 
  • The network created and shared hundreds of original videos and dozens of original songs. 
  • While most of the profiles linked to this operation were stub accounts, one of the profiles ran a social media advertising agency in Egypt.
  • Central messages included:
    • Praise for the Muslim Brotherhood-supporting governments of Turkey and Qatar.
    • Criticism of the Saudi Arabian, Egyptian, and UAE governments.
    • Accusations that the Egyptian government had imprisoned and killed Muslim Brotherhood supporters, and that Muslim Brotherhood supporters were being detained in multiple countries.
  • The Facebook Page names were direct and unsubtle. Examples include Tunisia Against the UAE, Hearts with Qatar, and YemenAgainstKSAUEA [sic].

 

Image: The People and Hearts with Qatar Facebook Page

Takedown of Accounts Originating in Afghanistan and Iran

 

This operation produced content oriented towards women, including promoting women's rights. It also promoted the narrative that Iran is a good ally for Afghanistan, highlighted the brutality of the Taliban, and criticized Pakistani and American intervention in Afghanistan. 

Key takeaways:

  • The network aimed to appeal to women. Fifty-three percent of the Instagram accounts had profile photos of women (compared to 11 percent with photos of men), and the network shared stories about the educational success of women. It is possible the intent was to undermine the peace negotiations between the Afghan government and the Taliban; the Taliban is known for restricting women’s rights.

  • The network shared messaging that criticized Pakistan, the Taliban, and the U.S. Content about the U.S. criticized U.S. President Donald Trump in general, and specifically claimed that Trump was colluding with the Taliban. The network praised the role Iran could play in Afghan peace negotiations.

  • Posts from accounts purporting to be in Afghanistan used the term Farsi to describe their language, instead of Dari, often explicitly saying they were proud to use the term Farsi. The two languages are very similar; Iran uses the term Farsi and Afghanistan uses the term Dari.

  • The Facebook profiles and Instagram accounts were as actively involved in pushing particular narratives as the Pages and Groups, and in many cases had larger followings.

  • We identified five Telegram channels linked to this Facebook/Instagram operation. 

 

Image: A post from the Afghanistan My Passion Instagram account using a fabricated photo. The Taliban are shown praying for their “partner” Trump.

Read More

Analysis of an October 2020 Facebook Takedown Linked to the Islamic Movement in Nigeria

In this post and in the attached report we investigate an operation that called for the release from prison of Sheikh Ibrahim El-Zakzaky.

The Ministry of Made-Up Pages: Yemen-Based Actors Impersonate Government Agencies to Spread Anti-Houthi Content

We analyzed a now-suspended network of Facebook Pages, Groups, and profiles linked to individuals in Yemen. We found accounts that impersonated government ministries in Saudi Arabia, posts that linked to anti-Houthi websites, and pro-Turkish Pages and Groups.

Analysis of April 2020 Twitter takedowns linked to Saudi Arabia, the UAE, Egypt, Honduras, Serbia, and Indonesia

POPULAR CULTURE HAS ENVISIONED SOCIETIES of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content such as video, images, or audio relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities that are capable of generating more dynamic fakes than anything seen before.

In our paper “How Relevant is the Turing Test in the Age of Sophisbots,” we argue that society is on the brink of AI-driven technology that can simulate many of the most important hallmarks of human behavior. As the variety and scale of these so-called “deepfakes” expands, they will likely simulate human behavior so effectively, and operate in such a dynamic manner, that they will increasingly pass Turing’s test.

Policy brief by Patrick McDaniel and Nicolas Papernot

IN 2016 WE LEARNED ABOUT EFFORTS BY FOREIGN ACTORS to interfere in the U.S. election by injecting misinformation and disinformation into public discourse on social media. False events and personas added to the polarization and manipulation of voters.

The 2020 election is marked by similar efforts, though this time the actors are domestic as well as foreign. What do we know about people with deceptive or malicious agendas trying to influence citizens ahead of elections, to degrade democracy and sow distrust in the election results, or to spread confusion about science and facts more generally through our information systems?

 

Policy brief by Marietje Schaake and Rob Reich

The internet is now the most common source of political news for almost half of Americans, and social media is now the primary source of news for those under 30. Yet today’s youth have little capacity to evaluate the credibility of digital sources, and colleges across the country often rely on severely outdated guidelines for digital literacy education. Join Stanford’s Sam Wineburg, Washington State University’s Mike Caulfield, and Rowan University’s Andrea Baer and Dan Kipnis, in conversation with the Cyber Policy Center’s Kelly Born, about the many challenges and opportunities facing media literacy.

Panelists include:

  • Andrea Baer
  • Mike Caulfield
  • Dan Kipnis
  • Sam Wineburg

VOTERS ARE BEING INUNDATED WITH POLITICAL ADVERTISING on social media and online platforms during the 2020 election season. Campaigns, PACs and third parties have added new tools and tactics for gathering data on voters and targeting them with advertising, and now they can pinpoint niches of potential voters on social media in ways unknown in prior election cycles.

Where once advertising conveyed reasons to vote for a candidate, now it frequently aims to convey misinformation, undermine trust, and depress turnout. The risk is that the spread of misinformation through such means could influence the U.S. vote, cast doubt on the democratic process and raise suspicions about the accuracy of the election outcome. 

 

Policy brief by Marietje Schaake and Rob Reich

Q&A by Jody Berger

Graham Webster leads the DigiChina Project, which translates and explains Chinese technology policy for an English-language audience so that debates and decisions regarding cyber policy are factual and based on primary sources of information.  

Housed within Stanford’s Program on Geopolitics, Technology, and Governance (GTG) and in partnership with New America, DigiChina and its community of experts have already published more than 80 translations and analyses of public policy documents, laws, regulations and political speeches and are creating an open-access knowledge base for policy-makers, academics, and members of the tech industry who need insight into the choices China makes regarding technology.

Q. Why is this work important?

A lot of tech is produced in China, so it’s important to understand its policies. And in Washington, D.C., you hear a lot of people say, “Well, you can’t know what China’s doing on tech policy. It’s all a secret.” But while China’s political system is often opaque, if you happen to read Chinese, there’s a lot that’s publicly available and can explain what the Chinese government is thinking and planning.

With our network of experts, DigiChina works at the intersection of two policy challenges. One is how we deal with high technology and the questions around economic competitiveness, personal autonomy, and the security risks that our dependence on tech creates.

The other is what, from a U.S. government, business, or values perspective, needs to be done about the increased prominence and power of the Chinese government and its economic, technological, and military capabilities.

These questions cut across tech sectors, from IT infrastructure to data-driven automation to cutting-edge developments in quantum technology, biotech, and other fields of research.

Q: How was DigiChina started?

A number of us were working at different organizations, think tanks, consultancies and universities and we all had an interest in explaining the laws and the bureaucratic language to others who aren’t Chinese policy specialists or don’t have the language skills.  

We started working informally at first and then reached out to New America, which is an innovative type of think tank combining research, innovation, and policy thinking on challenges arising during this time of rapid technological and social change. Under the New America umbrella, and through partnerships with the Leiden Asia Centre, a leading European research center based at the University of Leiden, and the Ethics and Governance of Artificial Intelligence Initiative at Harvard and MIT, we were able to build out the program and increase the number of experts in our network.

Q: Who is involved in DigiChina and what types of expertise do you and others bring to the project?

More than 40 people have contributed to DigiChina publications so far, and it’s a pretty diverse group. There are professors and think tank scholars, students and early-career professionals, and experienced government and industry analysts. Everyone has a different part of the picture they can contribute, and we reach out to other experts both in China and around the world when we need more context.

As for me, I was working at Yale Law School’s China Center when I was roped into what would become DigiChina and had spent several years in Beijing and New Haven working more generally on US-China relations and Track 2 dialogues, where experts and former officials from the two countries meet to take on tough problems. As a journalist and graduate student, I had long studied technology and politics in China, and I took on a coordinating role with DigiChina as I turned back to that pursuit full time.

Stanford is an ideal home because the university is a powerhouse in Chinese studies and an epicenter of global digital development.
Graham Webster
Editor in Chief, DigiChina Project

Q. Are there other organizations involved as well?

We have a strong tie to the Leiden Asia Centre at the University of Leiden in the Netherlands, where one of DigiChina’s cofounders, Rogier Creemers, is a professor, and where staff and student researchers have contributed to existing and forthcoming work. We coordinate with a number of other groups on translations, and the project benefits greatly from the time and knowledge contributed by employees of various institutions. I hope that network will increasingly be a resource for contributors and their colleagues.

The project is currently supported by the Ford Foundation, which works to strengthen democratic values, promote international cooperation and advance human achievement around the world. A generous grant from Ford will keep the lights on for two years, giving us the ability to build our open-access resource and, with further fundraising, the potential to bring on more in-house editorial and research staff.

We hope researchers and policy thinkers, regardless of their approaches or ideologies, can use our translations to engage with the real and messy evolution of Chinese tech policy.
Graham Webster
Editor in Chief, DigiChina Project

Q. Do you have plans to grow the project?

We are working to build an accessible online database so researchers and scholars can review primary source documents in both the original Chinese and in English. And we are working toward a knowledge base with background entries on key institutions, legal concepts, and phrases so that a broader audience can situate things like Chinese legal language in their actual context. Providing access to this information is especially important now and in the near future, whether we have a second Trump Administration or a Biden Administration in the United States.

On any number of policy challenges, effective measures will depend on going beyond caricatures like an “AI arms race,” “cyber authoritarianism,” or “decoupling,” which provide useful frameworks for debate but tend to prejudge the outcomes of a huge number of developments. We hope researchers and policy thinkers, regardless of their approaches or ideologies, can use this work to engage with the real and messy evolution of Chinese tech policy.


Webster explains how DigiChina makes Chinese tech policy accessible for English speakers
