Commentary

Can California fill the federal void on frontier AI regulation?

June 4, 2024


  • Despite a flurry of bills, frameworks, and hearings, Congress has still failed to pass any legislation to either narrowly target specific AI risks or broadly ensure the responsible development and deployment of AI systems.
  • In this emerging patchwork of AI regulatory policy, California is uniquely positioned to have a crucial impact on AI governance.
  • California legislation does not need to be a perfectly comprehensive substitute for federal legislation—it just needs to be an improvement over the current lack of federal legislation.
U.S. President Joe Biden, Governor of California Gavin Newsom and other officials attend a panel on Artificial Intelligence, in San Francisco, California, U.S., June 20, 2023. REUTERS/Kevin Lamarque

For 60 years, the discussion of artificial intelligence (AI) remained largely within university classrooms, industry research labs, and academic journals. That began to change dramatically as these technologies became more general purpose, and with the release of OpenAI’s ChatGPT in November 2022, AI became a household name. AI’s unprecedented capabilities have sparked widespread discussion of the norms needed to harness the benefits and mitigate the risks of the technology. Several governments have emerged as leaders in shaping these norms, including China, the EU, and the U.K. The United States has also taken steps to set an AI governance agenda, including releasing the Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology AI Risk Management Framework 1.0, and the sweeping but largely non-binding Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

However, these actions have come almost solely from the executive branch, which has limited regulatory powers. Despite a flurry of bills, frameworks, and hearings, Congress has still failed to pass any legislation to either narrowly target specific AI risks or broadly ensure the responsible development and deployment of AI systems. The House’s bipartisan Task Force on Artificial Intelligence plans to draft policy “over the next several years,” but the rapid pace of AI development and the emerging risks that accompany it will not wait for the federal government to determine policy solutions. The more recent bipartisan Senate AI roadmap led by Senator Schumer (D-NY) also offers much promise—provided the chamber subcommittees can quickly draft and enact legislation.

In this federal legislative vacuum, states are emerging as today’s AI regulators. Some of the laws passed include measures to protect consumer data privacy (OR SB-619), build institutional understanding of AI (LA SCR49), prevent election interference (MI HB-5144), and establish state AI task forces (IL HB-3563), offices (CT SB-1103), and advisory councils (TX HB-2060). Recently, Utah passed a broader law (UT SB0149) establishing liability, notice of interaction requirements, and an Office of AI Policy. However, in this emerging patchwork, one state is uniquely positioned to have a crucial impact on AI governance: California.

The promise of California as an AI regulator

California’s promise as an AI regulator, particularly for frontier systems, likely stems from its status as an AI powerhouse, its large economy, and its Democratic penchant for regulation. The Golden State is home to 32 of Forbes’ top 50 global AI companies and leading frontier players such as OpenAI, Anthropic, Meta, xAI, Google, and Microsoft, among others. OpenAI kickstarted the generative AI race with its release of ChatGPT in 2022, and its GPT-4 large language model remains the top preference for AI users more than a year after its release. Anthropic is one of OpenAI’s major competitors; its recently released Claude 3 model outperforms GPT-4 on many standard performance benchmarks. Meta, the parent company of Facebook and Instagram, is a market behemoth that produces open models—those whose inner workings are released to the public for widespread inspection, modification, and execution—and it just introduced its long-awaited Llama 3 model. The presence of such major AI developers makes California an attractive jurisdiction for advancing responsible AI policy. Furthermore, California produces 14.5% of the United States’ GDP, and if it were a sovereign country, it would have the world’s fifth- or sixth-largest economy behind the U.S. (without California), China, Japan, Germany, and possibly India.

California’s dual role as an AI hotspot and a powerful economy might enable its state AI regulation to accomplish many of the benefits of standard-setting legislation, such as requiring responsible development and deployment of frontier systems, mitigating theft of powerful dual-use models by malicious actors, and ensuring legal liability for AI harms. A newly introduced bill provides a model for such regulation and may garner many of these benefits for Californians and, incidentally, everyone else. In February 2024, State Senator Scott Wiener (D-San Francisco) introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill’s primary aim is to mitigate large potential risks posed by future frontier models, including the automation of large-scale cyberattacks and the production of novel biological weapons. It would address these risks by requiring the developers of any model trained using large amounts of computing power to demonstrate the safety of their system. If they were unable to make such a demonstration, they would be subject to additional requirements, including submitting yearly certifications of compliance with safety standards, implementing emergency shutdown protocols, and strengthening cybersecurity protections to prevent unauthorized access.

The bill also includes several other key provisions to increase safety. It would require providers of computing clusters to verify the identity and intentions of customers seeking large amounts of computing power for AI training. It would also mandate that developers report safety incidents, including hazardous use and model theft. The state attorney general would be empowered to bring actions against developers for AI-related damages and threats to public safety. Additionally, the bill would establish a subsidized public computing cluster to democratize access to expensive AI training—cutting-edge AI-specialized computer chips cost tens of thousands of dollars each. To oversee these requirements, the bill would create the Frontier Model Division within the Department of Technology. Through its ambitious and broad provisions, this bill, if passed, could capture the promise of AI concentration in California.

Not only might California policies like Senator Wiener’s achieve many of the benefits of national legislation, but passing such policies may also be easier in California than at the federal level. First, California’s government is overwhelmingly Democratic and therefore less subject to gridlock and polarization than is the case nationally. Governor Gavin Newsom is a Democrat, and the state Assembly and Senate have wide Democratic margins. The absence of partisan gridlock makes it easier to pass laws and regulations in California than in places with greater party competition. Second, California has a strong track record of passing groundbreaking technology regulation, such as an Internet of Things security law and the California Consumer Privacy Act (CCPA). Together, these factors give California a clearer path to enacting AI policy than Congress currently has.

The limits of California as an AI regulator

Despite its promise, there may be some limits to California’s power. First, as with all regulation, there is the risk that the regulated companies will attempt to weaken or circumvent policies that affect their business. For example, lobbying by OpenAI and French startup Mistral successfully shaped the EU AI Act in their favor. Circumventing location-based regulation becomes more difficult if AI policies apply to all companies doing business in the state, rather than only those incorporated there, as is the case with the CCPA. Although such a jurisdictional requirement makes evasion difficult, some companies may be willing to bite the bullet and exit California markets to avoid compliance burdens. For instance, Anthropic limits its business in the EU, which is famous for its rigorous digital regulations.

Second, laws that impose sweeping requirements on interstate AI operations might run afoul of the Dormant Commerce Clause (DCC), a doctrine inferred from the Commerce Clause that limits the ability of states to overburden interstate or international commerce. Some commentators have accused the CCPA of violating the DCC because of the requirements it imposes on out-of-state business activity without achieving commensurate social benefit for Californians. While the CCPA has not been directly challenged in court on these grounds, the social, economic, and geopolitical importance of AI might incentivize challenges to the constitutionality of far-reaching California AI legislation.

Third, there are AI policy goals that California cannot achieve due to its inherent limitations as a state government. The state cannot wield key policy tools for maintaining the U.S.’s international AI advantage, such as imposing export controls on AI-specialized chips or altering visa processes for highly skilled workers to attract foreign AI talent. It is also limited in other ways, such as in its ability to pursue international cooperation or influence military use.

Fourth, even if a California AI law succeeds in partially filling the regulatory void left by congressional inaction, it could interfere with or prevent the passage of comprehensive federal legislation. The American Data Privacy and Protection Act (ADPPA) of 2022 promised comprehensive data privacy protections for all Americans. However, Californians in Congress worried that the Act would provide weaker protections for their constituents than the already-enacted CCPA. Despite some analyses showing that the ADPPA actually provided stronger privacy protections overall, resistance from Californians (including then-Speaker Nancy Pelosi), combined with partisan disagreement over a clause exempting the CCPA from preemption, held up the bill long enough for the congressional session to expire. Although the outcome may have been contingent on unique situational factors, such as the most powerful member of Congress being from California, it still serves as a cautionary tale of how state legislation can block more widespread protections.

Finally, California is not the only state seeking to set the national agenda. Texas and Florida have been legislating on technology, developing regulations, and filing lawsuits. Unlike the Golden State, those locales are dominated by Republicans and are enacting rules from a conservative point of view. How the states work out their differences and what tech companies do in the face of competing mandates from liberal and conservative states could limit the power of any one state to shape the national landscape.

The future of California as an AI regulator

Despite these potential limits, California still holds promise for directly requiring responsible AI development and deployment inside and outside its borders. California legislation does not need to be a perfectly comprehensive substitute for federal legislation—it just needs to be an improvement over the current lack of federal legislation. The state’s centrality to the AI industry and its economic weight give it broad reach in regulating the technology. Even if the limits above prevent California from directly accomplishing wider AI regulation, the state will still exert substantial indirect influence as a trendsetter in public policy; in many cases, California regulations have spread to other jurisdictions. Regardless, California will certainly continue to play a key role in shaping the United States’ AI policy response in the coming years. While Congress lags and AI surges forward, the Golden State can help the country keep pace in requiring responsible AI.


Acknowledgements and disclosures

Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.