Regulating Under Uncertainty: Governance Options for Generative AI
The two years since the release of ChatGPT have seen explosive growth in both the development of and attention to generative AI. Unsurprisingly, governmental policy and regulation have lagged behind the technology's rapid pace.
Inspired by the Federalist Papers, the Digitalist Papers project seeks to usher in a new era of governance, one informed by the transformative power of technology and equipped to address the significant challenges and opportunities posed by AI and other digital technologies.
In The Tech Coup, Marietje Schaake, a Fellow at the CPC and at the Institute for Human-Centered Artificial Intelligence (HAI), offers a behind-the-scenes account of how technology companies have crept into nearly every corner of our lives and our governments.
What’s in Hong Kong’s Proposed Critical Infrastructure Bill?
Charles Mok writes about the new law seeking to regulate critical infrastructure operators responsible for the “continuous delivery of essential services” and for “maintaining important societal and economic activities.” From The Diplomat.
How to Fix the Online Child Exploitation Reporting System
A new Stanford Internet Observatory report examines how to improve the CyberTipline pipeline, drawing on dozens of interviews with tech companies, law enforcement, and the nonprofit that runs the U.S. online child abuse reporting system.
Existing Law and Extended Reality: An Edited Volume of the 2023 Symposium Proceedings compiles and expands upon the ideas presented during the symposium. Edited by Brittan Heller, the collection includes contributions from symposium speakers and scholars who delve deeper into the regulatory gaps, ethical concerns, and societal impacts of XR and AI.
This brief presents the findings of an experiment that measures how persuasive AI-generated propaganda is compared to foreign propaganda articles written by humans.
The online child safety ecosystem has already witnessed several key improvements in the months following the April publication of a landmark Stanford Internet Observatory (SIO) report, writes Riana Pfefferkorn, formerly a research scholar at the SIO and now a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Partisanship and social media usage correlate with belief in COVID-19 misinformation, and that misinformation shapes citizens’ willingness to get vaccinated. However, this evidence comes overwhelmingly from frequent internet users in rich, Western countries. We ran a panel survey early in the pandemic, leveraging a pre-pandemic sample of urban middle-class Nigerians, many of whom do not use the internet.
Texas and Florida are telling the Supreme Court that their social media laws are like civil rights laws prohibiting discrimination against minority groups. They’re wrong.
Social media has been fully integrated into the lives of most adolescents in the U.S., raising concerns among parents, physicians, public health officials, and others about its effect on mental and physical health. Over the past year, an ad hoc committee of the National Academies of Sciences, Engineering, and Medicine examined the research and produced this detailed report exploring that effect and laying out recommendations for policymakers, regulators, industry, and others in an effort to maximize the good and minimize the bad. Focus areas include platform design, transparency and accountability, digital media literacy among young people and adults, online harassment, and supporting researchers.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and that may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help tackle the threats of AI-generated disinformation.