Regulating general-purpose AI: Areas of convergence and divergence across the EU and the US

The current fast-paced advancement of AI has been described as an “unprecedented moment in history” by one of the pioneers of this field, Yoshua Bengio, at a U.S. Senate hearing in 2023. In many cases, recent progress can be linked to the development of so-called “general-purpose AI” or “foundation models.” These models can be understood as the “building blocks” for many AI systems, used for a variety of tasks. OpenAI’s GPT-4, with its user-facing system ChatGPT and the third-party applications built on it, is one example of a general-purpose AI model. Only a small number of actors with significant resources have released such models. Yet these models reach hundreds of millions of users directly and power thousands of applications built on top of them across a range of sectors, including education, healthcare, media, and finance. The developments surrounding the release and adoption of increasingly advanced general-purpose AI models have brought renewed urgency to the question of how to govern them, on both sides of the Atlantic and elsewhere.

The European Parliament has acknowledged that the speed of technological progress around general-purpose AI models is faster and more unpredictable than anticipated by policymakers. At the end of 2023, EU lawmakers reached political agreement on the EU AI Act, a pioneering legislative framework on AI, which introduces binding rules for general-purpose AI models and a centralised governance structure at the EU level through a new European AI Office.

Until recently, the U.S. government had pursued a more laissez-faire approach to AI regulation. The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in fall 2023, outlines a comprehensive approach to U.S. AI governance. Members of Congress have also presented a variety of legislative proposals on AI, but a federal legislative process dedicated to regulating general-purpose AI models remains absent.

This article outlines recent developments in the European Union and the United States in the area of general-purpose AI regulation, identifying important areas of convergence and divergence between the EU AI Act and the U.S. executive order. Finally, the voluntary G7 Code of Conduct on AI is discussed as one mechanism for fostering greater international alignment, delineating a shared route to the governance of more advanced AI.

The European Union: Setting the regulatory tone for addressing the risks of general-purpose AI

The European Union is often seen as a frontrunner in AI regulation, setting the regulatory tone for fostering trustworthy AI globally. With general-purpose AI models being integrated into numerous AI systems across various sectors, the European Commission emphasizes that these models are “becoming too important for the economy and society not to be regulated,” while trustworthy innovation is explicitly supported. The EU reached a political deal on the EU AI Act in December 2023. It was unanimously endorsed by representatives of the 27 member states in February 2024 and approved by the European Parliament in March 2024. The final text of the AI Act still needs to be formally adopted, and rules related to general-purpose AI models will apply 12 months after the Act’s publication, which is expected between May and July 2024. This marks the final phase of a years-long effort across the EU to create a legal framework for AI.

The willingness to explicitly regulate general-purpose AI models marks a significant development from the initial draft of the EU AI Act. The European Commission’s original 2021 proposal focused primarily on regulating AI as a tangible product with an intended purpose. Subsequent positions by the Council of the European Union in 2022 and the European Parliament in 2023 expanded the Act’s scope to directly address concepts related to general-purpose AI and foundation models.

Each iteration of the regulatory approach reflects a deeper appreciation of the potential opportunities and risks of AI, as well as the need for more precise rules concerning general-purpose AI models. These models could have a transformative impact through downstream AI applications, for example, from improvements in the drug discovery process in medicine to more personalized education. The many perceived areas of application are expected to have significant economic effects. At the same time, the focus on directly regulating general-purpose AI models has been driven by a growing public awareness of the associated risks. The European Commission notes, for example, that powerful AI models could cause serious accidents, propagate harmful biases at scale or, reflecting their dual-use nature, be misused for cybercrime.

The EU’s approach to regulating general-purpose AI models centers around fair sharing of responsibilities along the AI value chain, including general-purpose AI model developers and downstream providers that are building applications on top of such models.

Currently, a limited number of well-resourced model developers, such as OpenAI in partnership with Microsoft, Google DeepMind, Anthropic, and Meta, exert significant influence over the general-purpose AI ecosystem. A few European start-ups such as Mistral AI or Aleph Alpha are developing models to compete with the established actors. Small and medium-sized enterprises in the EU have voiced strong concerns over their dependencies on a subset of general-purpose AI models. This viewpoint is supported by various European civil society organisations, which have advocated for “a clear set of obligations under the AI Act, avoiding that smaller providers and users bear the brunt of obligations better suited to original developers.”

Consequently, besides regulating the many use-case-specific systems that are built on top of a general-purpose AI model in a risk-based approach, the EU AI Act is meant to ensure greater transparency along the AI value chain. The Act obligates providers of all general-purpose AI models to disclose certain information to downstream system providers, enabling a better understanding of the underlying models. Concretely, providers need to draw up technical documentation, provide summaries of the content used for model training, and explain how they are respecting existing EU copyright law. Providers of free and open-source models without systemic risks are exempted from most of these transparency obligations, as they are considered to already ensure high levels of openness. Models used for the purpose of research and development before market release are also not covered by the regulation.

In addition, the EU acknowledges that some general-purpose AI models have the potential to pose systemic risks, especially if they are very capable or widely used. For now, models that were trained using a total computing power of more than 10^25 FLOPs (floating-point operations) are presumed to carry systemic risks, given that models trained with more compute tend to be more capable. Under the AI Act, this threshold can be updated in light of technological advances, and further criteria can be applied to designate models with systemic risk, such as the number of users or the degree of autonomy of the model. For general-purpose AI models that could represent systemic risks, the EU AI Act demands adherence to more stringent rules. Providers are specifically required to assess and mitigate systemic risks, conduct model evaluations and adversarial testing, report serious incidents, and ensure cybersecurity.
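
To illustrate how such a compute threshold can be reasoned about in practice, the sketch below applies a rule of thumb from the scaling-law literature that training a dense transformer takes roughly 6 FLOPs per model parameter per training token. The approximation, the helper function, and the example model figures are illustrative assumptions for this article, not a calculation method prescribed by the AI Act.

```python
# Illustrative sketch only: the "~6 * parameters * tokens" training-compute
# estimate is a common rule of thumb, not a method defined in the EU AI Act.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the EU AI Act


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: about 6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens


# Hypothetical models (figures chosen for illustration, not real disclosures).
for name, params, tokens in [
    ("mid-size model, 70B parameters on 2T tokens", 70e9, 2e12),
    ("frontier-scale model, 1T parameters on 10T tokens", 1e12, 10e12),
]:
    flops = estimated_training_flops(params, tokens)
    presumed_systemic = flops >= EU_SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> presumed systemic risk: {presumed_systemic}")
```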

In terms of governance, the EU is establishing a European AI Office within the European Commission to enforce, in particular, rules on general-purpose AI models, strengthen development and use of trustworthy AI, and foster international cooperation. The European AI Office will be supported by a scientific panel of independent experts, and it should be a central point of expertise related to advancements in capabilities and other AI trends, potential benefits, and emerging risks. EU policymakers anticipate this centralized structure to be “the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point.”

The AI Office will play an important role in classifying models with systemic risk, and it will facilitate and supervise the work on detailing rules for providers of general-purpose AI models in codes of practice, developed together with industry, the scientific community, civil society, and other experts. Recognising the potential benefits and unique challenges of open-source models, a dedicated forum of cooperation with the open-source community will be established to identify and develop best practices for safe development and use. Monitoring the implementation of the rules is facilitated by powers of the AI Office to request additional information from model providers, compel model evaluations either directly or through appointed experts, request providers to take corrective action, and impose fines.

The EU approach to governing general-purpose AI hinges on successfully setting up the AI Office with relevant expertise and resources. The flexibility to detail rules needs to be paired with an in-depth understanding of the evolving AI landscape and state-of-the-art measures in order to deliver on the promise to foster trustworthy AI innovation by setting a targeted legal framework.

The United States: A comprehensive AI governance approach emerges

AI policy took a decisive turn in the United States on October 30, 2023, when President Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The Executive Order (EO) is by far the most comprehensive approach to AI governance in the U.S. to date, covering areas from new standards for AI safety and security to privacy, civil rights, workers, innovation, government use of AI, and international leadership.

Prior to the Executive Order, the United States had adopted a laissez-faire approach to the governance of AI, without a centralized federal regulatory framework dedicated to general-purpose AI. Instead, AI regulation was fragmented, with various federal agencies independently developing and implementing new policies on AI, tailored to specific needs and contexts, but lacking a unified national strategy.

Recent developments in general-purpose AI, however, have led some policymakers in Washington to take a more active position on AI regulation, reflected in proposals such as the Bipartisan Framework for the U.S. AI Act introduced in Congress on September 8, 2023. Among other proposals, the Bipartisan Framework suggests establishing a licensing structure administered by an independent oversight body that would require companies developing general-purpose AI models or models used in high-risk situations to register directly with a designated authority.

The U.S. has also started investing in non-regulatory infrastructure, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), which provides detailed guidance on when and how to handle risks throughout the AI lifecycle. In July 2023, NIST established a Generative AI Public Working Group to spearhead the development of a cross-sectoral AI RMF profile for managing the risks of generative AI, which is generally understood to include general-purpose AI models.

While the EO was the first centralized effort to address AI regulation, the White House began exploring a proactive approach in the year leading up to its release. For example, the White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” along with several related agency actions in October 2022. In May 2023, the White House met with technology company CEOs on the topic of Advancing Responsible Artificial Intelligence Innovation. On July 21, 2023, leading U.S. AI companies including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI signed up for a set of Voluntary Commitments devised by the White House with the aim of ensuring safe, secure, and trustworthy AI. On September 12, 2023, eight additional companies in the AI ecosystem—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI—also committed to the voluntary rules.

The executive order brings prior efforts together in a comprehensive and more streamlined approach to AI governance, including a focus on “dual-use foundation models,” defined as large models that could pose risks to the economy, public health or safety, and national security. For categorizing these models, the executive order leverages a threshold based on the quantity of computing power used for model training, the same approach taken in the EU AI Act. In terms of safety, the EO instructs the secretary of commerce, acting through the director of NIST, to establish guidelines and best practices to promote consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems. That includes creating benchmarks for evaluating and auditing AI capabilities. One key goal is to enable developers of dual-use foundation models to conduct AI red-teaming tests, for example through new “testbeds”—facilities or mechanisms equipped for conducting rigorous, transparent, and replicable testing of tools and technologies powered by foundation models.

By invoking the Defense Production Act (DPA) as part of the executive order, the U.S. President is effectively authorized to compel or incentivize industry in the interest of national security. The executive order specifies that companies that develop or intend to develop potential dual-use foundation models are expected to provide the federal government with ongoing information regarding activities related to training, developing, or producing such models, including physical and cybersecurity protections. The ownership and possession of model weights and the results of red-team testing, based on additional guidance by NIST, also need to be reported. Companies that acquire, develop, or possess large-scale computing clusters need to report their existence, scale, and location, while model access by foreign entities needs to be strictly monitored. The secretary of commerce will further define a set of technical conditions for models and computing clusters that will be subject to reporting, with initial requirements linked to any model that was trained using computing power greater than 10^26 FLOPs. Finally, the intersection of AI and chemical, biological, radiological, or nuclear (CBRN) threats, as well as risks posed by synthetic content created by AI systems, are also areas that receive greater and more coordinated attention under the executive order. In summary, the executive order represents a comprehensive approach to public sector oversight of foundation model development.

In terms of governance, the EO requires establishing a White House AI Council along with the appointment of chief AI officers at government agencies, as well as the implementation of internal AI governance boards within a smaller set of agencies. To secure comprehensive coordination across agencies, an interagency council composed of all chief AI officers will be established. Lastly, NIST’s newly established U.S. Artificial Intelligence Safety Institute will focus on collaborating with industry on areas such as dual-use foundation models.

In some ways, the EO can be viewed as a roadmap for future legislation in the area of AI safety. A recent U.S. Chamber of Commerce “Open Letter to State Leaders on Artificial Intelligence” articulated concern that a patchwork of state-level proposals to regulate artificial intelligence could slow realization of its benefits and stifle innovation by making compliance complex and onerous. To avoid the prior trajectory towards fragmentation, U.S. cities and states can use the EO as a direction-setting document and further align their policies to reinforce the aspirations of the executive order. As part of these efforts, the U.S. AI Safety Institute Consortium (AISIC) brings more than 200 organizations together to develop science-based and empirically backed guidelines and standards for AI measurement and policy.

One of the key vulnerabilities of the executive order, however, is that it could easily be revoked or altered by a future president. Unlike laws passed by Congress, which require a legislative process to enact and modify, executive orders can be undone with the stroke of a pen.

A comparison of EU and U.S. approaches to regulating general-purpose AI

When comparing the EU AI Act with the U.S. executive order, there are a few immediate differences. The EO primarily outlines guidelines for federal agencies to follow with a view to shaping industry practices, but it does not impose regulation on private entities beyond the reporting requirements invoked under the Defense Production Act. The EU AI Act, on the other hand, directly applies to any provider of general-purpose AI models operating within the EU, which makes it more wide-ranging in terms of expected impact. While the U.S. EO can be modified or revoked, especially in light of electoral changes, the EU AI Act puts forward legally binding rules within a lasting governance structure.

Both the executive order and the EU AI Act base their governance frameworks on the scale of model training, but they differ in scope. By setting the threshold for general-purpose AI models with systemic risk at 10^25 FLOPs (with flexibility for other criteria), the EU potentially governs a broader range of AI models, while the U.S. threshold is an order of magnitude higher. Currently, no existing AI model is known to exceed the U.S. threshold, while a few providers, including OpenAI and Google (DeepMind), could be subject to rules under the EU AI Act, based on estimates.
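
For a concrete sense of this order-of-magnitude gap, the short sketch below checks a few hypothetical training runs against both thresholds. The example compute figures are assumptions made for illustration, not values disclosed by any provider or taken from either legal text.

```python
# Illustrative comparison of the two compute thresholds (assumed example runs).
EU_THRESHOLD_FLOPS = 1e25   # EU AI Act: presumption of systemic risk
US_THRESHOLD_FLOPS = 1e26   # U.S. executive order: initial reporting threshold

# Hypothetical total training compute for three fictional training runs, in FLOPs.
example_training_runs = {
    "run A": 8.4e23,
    "run B": 6.0e25,
    "run C": 2.0e26,
}

for name, flops in example_training_runs.items():
    covered_by_eu = flops >= EU_THRESHOLD_FLOPS
    covered_by_us = flops >= US_THRESHOLD_FLOPS
    print(f"{name} ({flops:.1e} FLOPs): "
          f"EU systemic-risk presumption: {covered_by_eu}, "
          f"U.S. reporting requirement: {covered_by_us}")
```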

While the U.S. approach to governing general-purpose AI in the EO focuses specifically on the technology’s dual-use risks and potentials, the EU AI Act takes a wider view of systemic risks, which also includes, for example, discrimination at scale, major accidents, and negative effects on human rights. The U.S. EO and the AI Act also refer to other risks of AI but without specific reference to general-purpose models. Both frameworks are aligned on other general-purpose AI measures, such as the need for documentation, model evaluation, and cybersecurity requirements.

While both approaches are comprehensive in nature, the U.S. federal government and the European Union may face challenges in executing plans within set timelines due to a potential gap in necessary expertise, despite recent recruiting efforts. The EO establishes deadlines for federal agencies to take immediate action, with many tasks already completed within the first 90 days and the majority required to be finalized within a few hundred days. Rules in the EU AI Act for general-purpose AI models are expected to be detailed by early 2025.

In terms of global influence, the executive order primarily sets a domestic policy tone, while its influence on global practices is expected to remain indirect in the short term. The EU AI Act, on the other hand, has the potential to set a global precedent, as the EU General Data Protection Regulation (GDPR) did. Given the importance of the European market, international companies can be expected to align some of their AI governance practices with the AI Act to maintain access to the European Union’s internal market, although it is still too early to tell the degree to which the EU AI Act will exercise a so-called Brussels effect.

Towards greater international alignment on AI governance

While the EU and the U.S. pursue different strategies with some converging elements on how to govern general-purpose AI models, their commitment together with the other G7 countries to create an AI code of conduct, agreed on September 7, 2023, signals a shift towards increased collaboration in the area of AI policy. This non-binding rulebook, published in October 2023, is part of the newly established Hiroshima AI Process.

The voluntary “International Code of Conduct for Advanced AI Systems” could serve as an important stepping stone towards international governance of general-purpose AI models, while its effectiveness depends on the concrete rules agreed upon as well as industry buy-in and adoption. The code is described as a “non-exhaustive list of actions” which can be endorsed by organizations and which will be reviewed and updated as necessary. For the G7 code of conduct to drive substantial changes in industry practices, it would need to include an enforcement mechanism, either within the rulebook itself or through a credible link to incoming regulations.

For the U.S., the voluntary nature of the rules aligns well with the country’s prior preference for industry self-governance, while the EO now introduces a mix of voluntary guidelines and mandatory reporting requirements, reflecting a more nuanced approach to AI governance and regulation. For the EU, the code of conduct is in line with the future European AI Office mandate to contribute to a global approach to shaping the impact of AI. For both, the G7 code of conduct allows for tangible impact in the short term while creating a foundation for greater alignment amongst like-minded nations in the long term. The voluntary code could, in principle, provide a first international reference point of best practices in AI governance.

As international efforts such as the G7 code of conduct are slowly coming into focus, pressing questions remain about how to achieve greater global consensus in AI governance. For example, when focusing on the nascent efforts across the EU, the U.S. and the G7, one key question is how to achieve interoperable governance frameworks where domestic policies support—rather than undermine—each other. In line with the G7 code of conduct, the G20 or the United Nations could prove to be highly relevant forums to broaden these conversations. A focus on amplifying diverse voices, especially from underrepresented countries and regions, is another crucial step in fostering inclusive and global AI governance dialogues.

For now, transatlantic cooperation between the EU and the U.S. together with other G7 countries could serve as a model for aligning international efforts, providing an essential blueprint for how countries with different domestic strategies and regulatory approaches can come together to address the opportunities and challenges of increasingly advanced general-purpose AI models.

  • Acknowledgements and disclosures

    The views and opinions expressed are solely those of the authors and do not reflect or represent any official position of the World Economic Forum or the European Commission.

    Küspert is a policy officer with the European AI Office, and the European Commission has the right to review publications by its staff for accuracy and sensitive information. The authors did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Other than the aforementioned, the authors are not currently an officer, director, or board member of any organization with a financial or political interest in this article.

  • Footnotes
    1. Introduced by U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
    2. In coordination with the secretary of energy, the secretary of homeland security, and the heads of other relevant agencies.
    3. This work takes place in coordination with the secretary of energy and the director of the National Science Foundation (NSF), and the heads of other sector risk management agencies (SRMAs).
    4. In consultation with the secretaries of state, defense, and energy, and the director of national intelligence.
    5. A lower threshold at 10^23 FLOPs is applied to AI models that use primarily biological sequence data. However, these would not necessarily be understood as general-purpose AI.  
    6. Recently, the European AI Office and U.S. AI Safety Institute agreed to deepen their collaboration, particularly to exchange scientific information on benchmarks, potential risks and technological trends.