The Stanford Cyber Policy Center, a joint initiative of the Freeman Spogli Institute for International Studies and Stanford Law School, is Stanford University's research center for the interdisciplinary study of issues at the nexus of technology, governance, and public policy, with a focus on how digital technologies affect democracy, security, and geopolitics around the world.
Our Programs
The Cyber Policy Center is home to six programs, all focused on issues at the nexus of technology, governance, law and public policy.
The mission of the Global Digital Policy Incubator is to inspire policy and governance innovations that reinforce democratic values, universal human rights, and the rule of law in the digital realm. We serve as a collaboration hub for the development of norms, guidelines, and laws that enhance freedom, security, and trust in the global digital ecosystem. The bottom-line question guiding this initiative: How do we help governments and private-sector technology companies establish governance norms, policies, and processes that allow citizens and society to reap the benefits of technology while protecting against its risks?
The Stanford Internet Observatory is a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies, with a focus on social media. Under the direction of computer security expert Alex Stamos, the Observatory was created to study abuse of the internet in real time, to develop a novel trust and safety curriculum that is the first of its kind in computer science, and to translate our research discoveries into training and policy innovations for the public good.
The Program on Democracy and the Internet envisions digital technologies that support rather than subvert democracy, maximizing their benefits and minimizing their threats through changes in policy, technology, and social and ethical norms. Through knowledge creation and education, and by leveraging the convening power of Stanford University, PDI produces and shares original empirical research on how digital technologies are affecting democracy, informing decision-makers in the field, including the next generation of technologists, business leaders, and policymakers.
Launched in 2023, the Program on Governance of Emerging Technologies was formed to shape the technical, ethical, and governance infrastructure of emerging technologies and the next-generation internet.
The Program on Platform Regulation, headed by Daphne Keller, focuses on current or emerging law governing Internet platforms, with an emphasis on laws’ consequences for the rights and interests of Internet users and the public.
The Stanford Social Media Lab studies psychological and interpersonal processes in social media. The team specializes in using computational linguistics and behavioral experiments to understand how the words we use reveal psychological and social dynamics such as deception and trust, emotion, and relationships.
We analyzed a now-suspended network of Facebook Pages, Groups, and profiles linked to individuals in Yemen. We found accounts that impersonated government ministries in Saudi Arabia, posts that linked to anti-Houthi websites, and pro-Turkish Pages and Groups.
A new partnership among the Stanford Internet Observatory, the Program on Democracy and the Internet, DFRLab, Graphika, and the University of Washington's Center for an Informed Public will tackle electoral disinformation in real time as we strive for a healthy and successful election.
The white paper, produced in collaboration with the Hoover Institution, examines China’s capabilities and raises an important question: how do states with full-spectrum propaganda capabilities put them to use in modern-day information operations? We examine three case studies: Hong Kong's 2019-2020 protests, Taiwan’s 2020 election, and COVID-19.
Google, Facebook, YouTube, and Instagram feature suicide prevention resources in English-language searches in the US, but are those resources available to a broader audience? Performance varies across countries and languages: supportive content appears more frequently in wealthy European and East Asian countries and less frequently in other regions.
On Monday, June 29, 2020, Reddit updated its policy on hate speech. As part of research for a forthcoming book based on the Stanford Internet Observatory’s Trust and Safety Engineering course, we present a comparative assessment of platform policies and enforcement practices on hate speech, and discuss how Reddit fits into this framework.
On June 12, 2020, Twitter removed 1,152 accounts attributed to Current Policy, “a media website engaging in state-backed political propaganda within Russia.” These accounts were taken down because they violated Twitter’s policy on platform manipulation. Our latest white paper analyzes their activities and motivations.
China has been shipping medical supplies to countries battling the coronavirus pandemic, an effort dubbed “mask diplomacy.” It remains to be seen whether China will be able to tailor its messages effectively to win the hearts and minds of people around the world.
On June 3, 2020, Twitter shared with the Stanford Internet Observatory three distinct takedown datasets from China, Turkey, and Russia. In this post and in the attached white papers on the China and Turkey operations, we examine the topics and tactics of these operations.
As the coronavirus pandemic spread around the world, RT’s English-language branches worked to undermine lockdown measures in Western countries while extolling the Russian and Chinese governments’ success in containing the virus’s spread.
In the June 2020 Sabin-Aspen Vaccine Science Policy Report, "Meeting the Challenge of Vaccination Hesitancy," Stanford Internet Observatory research manager Renée DiResta and First Draft lead strategist Claire Wardle write about how anti-vaccination movements' effective storytelling helps spread misinformation online.
We identify hundreds of scam social media accounts across Facebook, Instagram, Twitter, LinkedIn, and TikTok targeting individuals in Nigeria. These accounts post from compromised profiles, claiming to have earned money through a fake investment scheme, and encourage others to “invest.” The potential for harm is high: by one estimate, thousands have been scammed.