Regulating Under Uncertainty: Governance Options for Generative AI
The two years since the release of ChatGPT have seen explosive growth in generative AI development and public attention. Unsurprisingly, governmental policy and regulation have lagged behind the technology's fast pace.
Inspired by the Federalist Papers, the Digitalist Papers seeks to usher in a new era of governance, one informed by the transformative power of technology, to address the significant challenges and opportunities posed by AI and other digital technologies.
In The Tech Coup, Marietje Schaake, Fellow at the CPC and at the Institute for Human-Centered Artificial Intelligence (HAI), offers a behind-the-scenes account of how technology companies have crept into nearly every corner of our lives and our governments.
French President Emmanuel Macron has announced his intention to regulate minors' access to screens, whether on phones, computers, tablets, or game consoles. He has brought together a group of experts, including Florence G'sell of the Program on Governance of Emerging Technologies.
Daphne Keller of the Program on Platform Regulation and Francis Fukuyama, Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies and Director of the Ford Dorsey Master's in International Policy at Stanford, have filed an amicus ("friend of the court") brief in the NetChoice Supreme Court cases.
New work in Nature Human Behaviour from SIO researchers and other co-authors examines how generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is difficult for humans to detect and may undermine public trust.
The Kids Online Safety Act (KOSA) has bipartisan support from nearly half the Senate and the enthusiastic backing of President Joe Biden, but opponents fear the bill would cause more harm than good for children and the internet.
Schaake will serve alongside experts from government, the private sector, and civil society, and will engage and consult widely with existing and emerging initiatives and international organizations to bridge perspectives across stakeholder groups and networks.
Decentralized social networks may be the new model for social media, but their lack of a central moderation function makes it more difficult to combat online abuse.
The Journal of Online Trust and Safety published peer-reviewed research on privacy, deepfakes, crowd-sourced fact checking, and what influences online searches.
Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments. Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true. (From the New York Times)
On July 28, 2023, Stanford University and the Stanford Internet Observatory filed an amicus brief in the Fifth Circuit Court of Appeals in support of the Missouri v. Biden appellants.
Led by former Prime Minister of New Zealand Rt. Hon. Dame Jacinda Ardern, a delegation from the Christchurch Call joined Stanford scholars to discuss how to address the challenges posed by emerging technologies.
A new report finds that an increasingly decentralized social media landscape offers users more choice but poses technical challenges for addressing child exploitation and other online abuse.
This annual competition provides an opportunity for emerging scholars to share new ideas on urgent global policy challenges, producing outstanding essays that make their original research more accessible to policymakers, practitioners, and the general public.
Recent developments suggest possible links between some ransomware groups and the Russian government. We investigate this relationship by creating a dataset of ransomware victims and analyzing leaked communications from a major ransomware group.