September 25-26 | Trust & Safety Research Conference
The CPC's Trust and Safety Research Conference focuses on research in trust and safety for those in academia, industry, civil society, and government. Applications to present at the conference are open now!
Social Media Lab Appointed as Lead Academic Partner for Australian Legislation
The Stanford Social Media Lab (SML) at Stanford's Cyber Policy Center has announced its partnership with the Australian Government's eSafety Commission as Lead Academic Partner on the recently passed Social Media Minimum Age legislation.
Research by CPC's Ronald E. Robertson and co-authors points to the need for greater transparency in search engines' content moderation practices, especially around important events like elections.
Japan’s unique strategy – combining regulatory oversight, resource efficiency, and international partnership – offers a potential blueprint for the world. By GDPi's Charles Mok.
Join us for a weekly webinar series organized by Stanford’s Cyber Policy Center (CPC). Our speakers range from those who focus on policy to those who concentrate on empirical work on cyber issues. There will be both in-person and virtual (Zoom) options, and attendees can register for the full series or for single events. Events begin April 1 and run through the end of May.
March 11 | The Power of Purpose-Driven AI: Implications for Design, Adoption, and Policy
How a purpose-driven approach to AI differs from the current industry approach and why it is critical for realizing the widespread adoption and beneficial impact we hope to see from AI. With Nathanael Fast, PhD.
February 25 | Adolescents, Literacy, and Health: Implications for Cyber Policy
Opportunities and challenges from the perspective of health services and policy research and implications for efforts to promote positive youth development. With Jonathan D. Klein.
The Program on Platform Regulation focuses on current or emerging law governing Internet platforms, with an emphasis on laws’ consequences for the rights and interests of Internet users and the public.
The Stanford Social Media Lab works on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and behavioral experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, and relationships.
The Program on Democracy and the Internet seeks to promote research, convenings, and courses that engage with the challenges new technologies pose to democracy in the digital age.
The mission of the Global Digital Policy Incubator at the Stanford Cyber Policy Center is to inspire policy and governance innovations that reinforce democratic values, universal human rights, and the rule of law in the digital realm.
The Program on Governance of Emerging Technologies aims to build a path for future research and policymaking that explores the impacts of emerging technologies on democratic governance, rule of law, and socioeconomic inequality.
In The Politics of Order in Informal Markets: How the State Shapes Private Governance, Grossman presents findings that challenge the conventional wisdom that good private governance in developing countries thrives when the government keeps its hands off private group affairs.
Scholars at the Freeman Spogli Institute for International Studies hope that President Joe Biden’s meeting with Russian President Vladimir Putin will lay the groundwork for negotiations in the near future, particularly around nuclear weapons.
When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they never actually said or did.
In a new blog post, Daphne Keller, Director of the Program on Platform Regulation at the Cyber Policy Center, looks at the need for transparency when it comes to content moderation and asks, what kind of transparency do we really want?
India's information technology ministry recently finalized a set of rules that the government argues will make online service providers more accountable for their users’ bad behavior. Noncompliance may expose a provider to legal liability from which it is otherwise immune.
Researchers from Stanford University, the University of Washington, Graphika and Atlantic Council’s DFRLab released their findings in ‘The Long Fuse: Misinformation and the 2020 Election.’
The audio chat app “Clubhouse” went viral among Chinese-speaking audiences. Stanford Internet Observatory examines whether user data was protected, and why that matters.
As Parler gained millions of users, and plenty of notoriety, in recent months, understanding the dynamics of the platform has become an increasing priority. A report by the Stanford Internet Observatory analyzes three Parler datasets to understand a platform designed for non-moderation, and to map its domestic and increasingly international growth.
Riana Pfefferkorn joined the Stanford Internet Observatory as a research scholar in December. She comes from Stanford’s Center for Internet and Society, where she was the Associate Director of Surveillance and Cybersecurity.
Wikipedia celebrates its 20th anniversary this month. This blog post, the second of two, looks at how open source investigators can conduct research on Wikipedia.
Wikipedia celebrates its 20th anniversary this month. This is the first of two blog posts exploring the use, misuse, and ultimate resilience of this open, community-edited platform.