New research from the CPC's Ronald E. Robertson examines content moderation by web search engines. Across three data collection waves (Oct 2023, Mar 2024, Sept 2024), researchers found that Google returned a warning banner for about 1% of search queries, with substantial churn across waves in the set of queries that received a banner.
Regulating Under Uncertainty: Governance Options for Generative AI
The two years since the release of ChatGPT have been marked by a rapid rise in both development of and attention to the technology. Unsurprisingly, governmental policy and regulation have lagged behind the fast pace of technological change.
Modeled on the Federalist Papers, the Digitalist Papers series seeks to usher in a new era of governance, informed by the transformative power of technology, to address the significant challenges and opportunities posed by AI and other digital technologies.
In The Tech Coup, Marietje Schaake, Fellow at the CPC and at the Institute for Human-Centered Artificial Intelligence (HAI), offers a behind-the-scenes account of how technology companies crept into nearly every corner of our lives and our governments.
Moderated Content host Evelyn Douek discusses Twitter's data security problems, and what they say about privacy regulation more generally, with Whitney Merrill, Data Protection Officer and Privacy Counsel at Asana and a longtime privacy lawyer whose experience includes serving as an attorney at the FTC, and Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory.
Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory and a member of the Global Encryption Coalition. This first appeared in Brookings TECH STREAM.
When we're faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called "deepfake" videos that use machine learning technology to show a real person saying and doing things they never said or did.
India's information technology ministry recently finalized a set of rules that the government argues will make online service providers more accountable for their users' bad behavior. Noncompliance may expose a provider to legal liability from which it is otherwise immune.
The audio chat app Clubhouse went viral among Chinese-speaking audiences. The Stanford Internet Observatory examines whether user data was protected, and why that matters.
Riana Pfefferkorn joined the Stanford Internet Observatory as a research scholar in December. She comes from Stanford’s Center for Internet and Society, where she was the Associate Director of Surveillance and Cybersecurity.