Regulating Under Uncertainty: Governance Options for Generative AI
About the Report
The title of this report – “Regulating Under Uncertainty: Governance Options for Generative AI” – seeks to convey the unprecedented position of governments as they confront the regulatory challenges AI poses. Regulation is both urgently needed and unpredictable in its effects; done poorly, it may even be counterproductive. Yet governments cannot wait for perfect and complete information before they act, because by then it may be too late to steer the trajectory of technological development away from existential or otherwise unacceptable risks. The goal of this report is to present all of the options now “on the table,” in the hope that stakeholders can begin to establish best practices through vigorous information sharing. The risks and benefits of AI will be felt across the entire world. It is therefore critical that the proposals emerging in different jurisdictions be assembled in one place, so that policy proponents can learn from one another and move ahead in a cooperative fashion.
Please note: This document is the revised second edition of the report, updated as of September 2024.
The revolution underway in the development of artificial intelligence promises to transform the economy and all social systems. It is difficult to think of an area of life that will not be affected in some way by AI, if the claims of its most ardent cheerleaders prove true. Although innovation in AI has occurred for many decades, the two years since the release of ChatGPT have been marked by an exponential rise in development and attention to the technology. Unsurprisingly, governmental policy and regulation have lagged behind the fast pace of technological development. Nevertheless, a wealth of laws, both proposed and enacted, has emerged around the world. The purpose of this report is to canvass and analyze the existing array of proposals for the governance of generative AI.
Effective regulation of emerging technologies inevitably presents legislators with a set of difficult choices. If they act aggressively to mitigate every hypothetical risk, they might inhibit the development of the technology. If they act too conservatively at the outset, they might miss the chance to steer the industry toward safe development and away from foreseeable harms. These choices must also be made at a time when knowledge and expertise about the technology reside mostly in the private sector. As a result, governments often lack the expertise needed to design and enforce a new regulatory regime.
Proposed and existing government regulation occurs along a continuum, from the laissez-faire model that largely characterizes the United States to the command-and-control model typical of traditional forms of regulation, with China at the pole opposite the U.S. In the middle lie different degrees of co-regulation, such as that prevailing in the European Union, in which governments maintain a dialogic relationship with companies in order to respond incrementally to new developments and newly discovered harms from the technology.
Although the term “generative AI” has gained wide currency, it encompasses sophisticated technology and a complex, often opaque supply chain. It is therefore essential to clarify the nature of generative AI and its technical characteristics. This chapter begins by defining generative AI, followed by a brief overview of the main stages in developing a generative AI model. Finally, it highlights the key characteristics of the current supply chain.
Although no consensus has yet emerged on the transformational potential of the current crop of generative AI, few dispute that its economic and social impact will be widespread. Of course, some of its anticipated benefits are also viewed as risks or costs. AI’s transformation of the economy, like all previous technological transformations, will come with massive job displacement. And while AI may help find ways to mitigate climate change and energy shortages, the construction of massive data centers and the training of new models promise to increase energy demands significantly in the short term.
Disinformation threats from synthetic imagery, an explosion in virtual child pornography, scams that use voice-mimicking technology to defraud unsuspecting victims, discrimination and bias in AI algorithms, and a host of problems stemming from jailbreaking vulnerabilities and inaccurate chatbot responses are just a few of the harms already presented by existing generative AI tools. As these tools are rolled out in law enforcement, criminal justice, judicial process, employment, education, healthcare, and any number of other domains, both the malfunctioning of these systems and their abuse by bad actors caution against overreliance on AI and the wholesale replacement of human oversight.
Those who warn of AI’s societal impacts are alarmed by the potential for catastrophic harms – from novel viruses and new weapons to uncontrollable AI agents and cyberattacks. In the immediate aftermath of ChatGPT’s release, leading AI scientists and business leaders called on governments to begin collectively addressing the new existential risks posed by AI. Experts remain divided, however, on the plausibility of “loss of control” scenarios, in which a highly intelligent “rogue AI” could escape human oversight and spiral out of control. Critics of this preoccupation with future existential risks argue that attention should instead be directed to the immediate and tangible risks posed by generative AI, which require government action.
The increasing public attention and evolving risks associated with generative AI have spurred AI companies to develop practices that mitigate risks while harnessing economic potential. It would be an overstatement to claim that individual measures by AI developers constitute industry-wide self-regulation, yet these initiatives may contribute to the creation of self-regulatory instruments. These emerging standards and practices are widely discussed and collaboratively refined within the AI community, often becoming recognized as best practices.
This chapter begins by offering a general overview of the practices commonly adopted by companies developing generative AI models and systems to address current risks and challenges. It then explores the collective initiatives within the industry that resemble self-regulation.
Government regulation of some kind is inescapable, if only to clarify how existing laws will apply to the newest technologies. The most significant and comprehensive piece of AI legislation passed thus far is the EU’s AI Act, which regulates AI systems based on risk levels and use cases, particularly in sensitive sectors. The Act classifies AI systems according to their “intended” use into four risk categories: unacceptable risk, high risk, limited or “transparency” risk, and minimal or no risk. Different legal requirements apply depending on the risk level of the application. During the negotiations over the AI Act, provisions were added to regulate general-purpose (i.e., foundation) models, shifting the focus from specific use cases to the technology itself. General-purpose AI models posing systemic risks must comply with additional obligations related to cybersecurity, red teaming, risk mitigation, incident reporting, and model evaluation.
Although the United States may be characterized as having a “hands-off” regulatory approach, some regulation, whether through the courts or through legislation, will be necessary. For now, most of the action in the US has occurred either in the executive branch or in the states. In addition to securing voluntary commitments from the major AI companies, the Biden administration issued Executive Order 14110, which outlines eight guiding principles and policy priorities for federal agencies and authorities. The absence of a comprehensive legal framework has prompted some individual states to enact their own AI-related legislation, addressing issues such as deepfakes and algorithmic discrimination. Perhaps the most significant proposal, California’s pending “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” would impose significant compliance obligations on AI developers.
This chapter examines the AI regulatory approaches of other countries. China has implemented a series of laws that place significant restrictions on AI development. Other countries, such as Brazil and Canada, are actively engaged in formulating regulatory frameworks for AI. In contrast, countries like Japan and India initially refrained from enacting specific AI legislation but are now gradually progressing towards its adoption. The United Kingdom, while recognizing the risks and challenges associated with AI, particularly its most advanced models, has thus far chosen not to introduce specific AI legislation, though this stance may be reconsidered in the future.
Given the geopolitical implications of the race to control this new and powerful technology, and the ease with which the technology transcends the boundaries of any single regulator, it should come as no surprise that most major international organizations have proposed or are drafting new initiatives related to AI. The G7’s Hiroshima AI Process has resulted in non-binding yet influential frameworks, such as its Guiding Principles and Code of Conduct for AI. The G20 has also published its own G20 AI Principles, and the EU-US Trade and Technology Council has released collaborative AI projects. The BRICS group has formed an AI Study Group to foster innovation and establish AI governance standards, in alignment with China’s Global AI Governance Initiative.
In May 2024, the Council of Europe adopted the first international AI treaty, the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, which requires ratification by at least five states to enter into force. The United Nations established a High-Level Advisory Body on AI and adopted its first AI resolution. UNESCO provided ethical guidelines and global guidance on AI use in education and research. Additionally, the African Union published various policy documents to guide AI development in Africa. Most of these international initiatives have been strongly influenced by the OECD and its “Recommendation on Artificial Intelligence.”
Several high-level principles and observations emerge from this exploration of the different national and international initiatives related to AI:
* Regulation of the technology or its applications. Sector-specific laws allow AI regulation to proceed incrementally, adapting existing law in narrow ways to the changes brought by AI. However, given the uncertain reach and implications of AI technology and the rise of general-purpose AI models, regulating the technology itself has become particularly important.
* The importance of transparency and auditing. Because the impacts of generative AI are unknown, transparency in the development of this technology is critical. To fully understand the implications of foundation models and generative AI applications, both developers and external third parties must rigorously test them prior to deployment.
* The importance of enforcement. Enforcement will prove as important, if not more so, than legislation. This will require governments to hire AI talent and coordinate with companies and civil society to provide continuous guidance on how the rules on the books apply to new and emerging contexts.
* The relative power of the public and private sectors. The need to collect vast amounts of data, the scarcity of chips, and the high costs of computation have concentrated the resources required to develop and train the most powerful models in the hands of a few private companies. To “democratize” the production of AI may require massive public investment to ensure that other actors will be able to produce the most cutting-edge AI models.
* The promise and risks of open models. Open-source models might create a competitive environment quite different from that of social media and search engines, which have been controlled by a few oligopolistic actors. However, once these models are released, they can be used and fine-tuned by bad actors for all kinds of intended and unintended purposes. Moreover, once they are “out the door,” there is little that companies or governments can do to control their impact. Government regulation in this space must address the relative risks and benefits posed by open-source models.
Acknowledgements and Contributors
This work would not exist without the dedicated efforts of the members of the Governance of Emerging Technologies program at the Cyber Policy Center, directed by Florence G’sell and managed by Ben Rosenthal. We are greatly indebted to the Project Liberty Institute for their support of the Program on Governance of Emerging Technologies, which made this report possible.
This report has benefited from the significant inputs of several key contributors. Elliot Stewart focused on technology, meticulously examining the practices of major AI companies, a task made even more challenging by their constantly evolving practices and policies. Chris Suhler and Ashok Ayyar, assisted by Nikta Shahbaz, scrutinized the ongoing strategy of the U.S. federal administration as well as the regulations adopted and proposed at both the federal and state levels. Professor Jiaying Jiang, Jasmine Shao, and Sabina Nong joined their efforts to provide a comprehensive and precise overview of the Chinese framework. Zeke Gillman analyzed the frameworks of Canada, South Korea, Singapore, the UK, Israel, Saudi Arabia, and the Emirates, and also studied ongoing international initiatives. Arpit Gupta examined the legal framework of India and the collective practices of AI companies. Tally Smitas analyzed the legislation currently being adopted in Brazil. Ryoko Matsumoto provided an overview of Japan's framework. Professor Keeheon Lee offered significant insights into South Korea's regulatory strategy, while Nathan Levit and Maya Rodriguez finalized the presentation of the South Korean and Singaporean frameworks.
It is also appropriate to express gratitude to Sanna Ali, Tamian Derivry, and Luca Lefevre for their research efforts and assistance on various topics throughout the drafting of this report.
This report has also been enriched by the valuable feedback and comments of numerous colleagues who dedicated their time to review it and offer substantial and relevant suggestions. In particular, Professor Jingwen Wang and Professor Xinyu Fu provided invaluable insights on generative AI technology, while Dave Willner shared his expertise on the generative AI industry.
This work has also benefited from the precious comments of Nate Persily, Carlos Escapa, Sarah Cen, David Shao, Jiankun Ni, Sijia Liu, Mick Li, Chun Yu Hong, Hiroki Habuka, Shayne Longpre, Rishi Bommasani, Kevin Klyman, Dan Ho, Mark Lemley, Daphne Keller, Presley Warner, Suzanne Marton, Jerrold Soh, Conor Chapman and Samidh Chakrabarti.
Invaluable technical assistance was provided by copy editors Lisa Keen and Eden Beck, and designer Michi Turner.
Nick Amador, Zeke Gillman, Nathan Levit, Nate Low, Jasmine Shao, Nikta Shahbaz, and Harith Khawaja completed the Bluebooking.
Finally, all the students of the Spring 2023 Governance and Regulation of Emerging Technologies Policy Practicum should be thanked for their inspiring work, which served as the starting point for this policy report: Taylor Applegate, Sindy Braun, Alexis Dye, Drew Edwards, Mary Rose Fetter, Dakota Foster, Christopher Giles, August Gweon, Caroline Hunsicker, Poramin Insom, Crys Jain, Harith Khawaja, Atsushi Kono, Ashley Denise Leon, Zahavah Levine, Miranda Lin Li, Katherine McCreery, Caroline Meinhardt, David Mollenkamp, Ilari Papa, Kate Reinmuth, Gabriela A. Romero, Greg D. Schwartz, Sade Snowden-Akintunde, Elliot Stahr, Silva Stewart, Christine Strauss, Chris Suhler, Kiran Wattamwar, and Ashton E. Woods.