Regulating Under Uncertainty: Governance Options for Generative AI
The two years since the release of ChatGPT have been marked by explosive growth in the development of, and attention to, the technology. Unsurprisingly, governmental policy and regulation have lagged behind this fast pace of technological change.
Inspired by the Federalist Papers, the Digitalist Papers seeks to usher in a new era of governance, one informed by the transformative power of technology, to address the significant challenges and opportunities posed by AI and other digital technologies.
Japan’s unique strategy – combining regulatory oversight, resource efficiency, and international partnership – offers a potential blueprint for the world. By Charles Mok and Athena Tong for The Diplomat.
How to Fix the Online Child Exploitation Reporting System
A new Stanford Internet Observatory report examines how to improve the CyberTipline pipeline, drawing on dozens of interviews with tech companies, law enforcement, and the nonprofit that runs the U.S. online child abuse reporting system.
This article examines the problem of statutory obsolescence in the regulation of rapidly evolving technologies, with a focus on GDPR and generative AI. It shows how core GDPR provisions on lawful processing, accuracy, and erasure prove difficult—if not impossible—to apply to AI systems, generating legal uncertainty and divergent national enforcement. The analysis highlights how comprehensive, principle-based instruments can quickly become inadequate in fast-moving technological domains. Drawing lessons from the GDPR, the article reflects on the need for more adaptive, flexible, and responsive regulatory approaches in the technological age.
In an era where digital technology serves as both a tool for liberation and a threat to democracy, the term “digital authoritarianism” has emerged to describe the strategies employed by authoritarian regimes to exert control in the digital sphere. This chapter explores the defining characteristics of digital authoritarianism as exemplified by countries such as China and Russia, identifying three primary pillars: information control, mass surveillance, and the creation of a fragmented, isolated Internet. Furthermore, this chapter emphasizes that digital authoritarian practices are not confined to authoritarian regimes. Democratic governments and technologically advanced private corporations, especially the dominant tech companies shaping the modern Internet, are also capable of adopting authoritarian tactics. Finally, the chapter argues that the technology itself—through the omnipotence of code in cyberspace—may inherently foster a form of digital authoritarianism.
Existing Law and Extended Reality: An Edited Volume of the 2023 Symposium Proceedings compiles and expands upon the ideas presented during the symposium. Edited by Brittan Heller, the collection includes contributions from symposium speakers and scholars who delve deeper into the regulatory gaps, ethical concerns, and societal impacts of XR and AI.
HAI and Stanford Cyber Policy Center, August 6, 2024
A new report by Florence G'sell, visiting professor in the Program on Governance of Emerging Technologies at the Cyber Policy Center, addresses the urgent need for AI regulation.
The online child safety ecosystem has already witnessed several key improvements in the months following the April publication of a landmark Stanford Internet Observatory (SIO) report, writes Riana Pfefferkorn, formerly a research scholar at the SIO and now a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
The new law seeks to regulate critical infrastructure operators responsible for the “continuous delivery of essential services” and for “maintaining important societal and economic activities.”
Texas and Florida are telling the Supreme Court that their social media laws are like civil rights laws prohibiting discrimination against minority groups. They’re wrong.