The following is a summary of the 32nd session of the Congressional Study Group on Foreign Relations and National Security, a program for congressional staff focused on critically engaging the legal and policy factors that define the role that Congress plays in various aspects of U.S. foreign relations and national security policy.
On December 12, 2023, the Congressional Study Group on Foreign Relations and National Security convened in person in the U.S. Capitol to discuss regulating the use of artificial intelligence (AI) in armed conflicts. The role of AI over the last few years has expanded dramatically across many societal sectors, including in the national security context. Some of the most salient developments—perhaps with the most significant ethical and legal implications—have come on the battlefield. This session examined the prospects for U.S. and international regulation of AI in armed conflicts, including Congress’s constitutional authority to impose restrictions on the president’s military use of AI and the statutory approaches that may guide future regulation.
The study group was joined by two outside experts for this session:
- Ashley Deeks, professor at the University of Virginia School of Law, who recently served as White House associate counsel and deputy legal adviser to the National Security Council; and
- Rebecca Crootof, professor at the University of Richmond School of Law, who writes widely about the legal constraints on the use of autonomous weapons.
Prior to the discussion, the study group circulated the following background readings:
- Ashley Deeks, “Too Much Too Soon: China, the U.S., and Autonomy in Nuclear Command and Control,” Lawfare (Dec. 4, 2023);
- Ashley Deeks, “Regulating National Security AI Like Covert Action?,” Lawfare (July 25, 2023);
- Ashley Deeks and Matt Waxman, “Can Congress Bar Fully Autonomous Nuclear Command and Control?,” Lawfare (June 5, 2023);
- Ashley Deeks, “National Security AI and the Hurdles to International Regulation,” Lawfare (Mar. 27, 2023);
- Rebecca Crootof, “AI and the Actual IHL Accountability Gap,” Center for International Governance Innovation (Nov. 28, 2022); and
- Rebecca Crootof and Charlie Dunlap, “Changing the Conversation: The ICRC’s New Stance on Autonomous Weapon Systems,” Lawfire (May 24, 2021).
Crootof began the discussion by defining autonomous systems and describing how they are different from automated ones. Automation is a system’s capacity to conduct tasks with limited human involvement. Automated systems, like anti-tank landmines, do not operate differently based on the environment in which they are located. Autonomous systems are similar to automated ones, and they both rely on AI, but autonomous systems have more independence. They can adjust based on their environments, react unpredictably, and select and engage targets absent significant human input. A number of autonomous systems are currently operational on the battlefield—including South Korea’s SGR-A1, which can identify human forms, give orders to stop or surrender, and shoot targets.
Crootof noted two considerations that might limit the utility of AI in armed conflicts. First, the effectiveness of AI depends on the quality of the data fed into it, and data from armed conflicts are relatively scarce compared to other domains in which AI operates. Second, AI works well when it is identifying patterns but less well in circumstances with more variance, and armed conflict fits into the latter category.
Concluding her remarks, Crootof provided an update on the current status of international regulation of AI. Establishing a regulatory framework requires answering difficult questions: Who should be accountable for harms caused by weapons that rely on AI? How should states respond when an AI decision-assistant recommends killing civilians? Crootof said a significant portion of the debate over regulating autonomous weapons comes down to one binary question: Should the world ban autonomous weapons systems? She contended that this “ban or no ban” framing is unproductive, arguing that concentrating on the substance of any such bans will promote better solutions. According to Crootof, the U.S. attempted to avoid the complex international regulatory debate concerning autonomous weapons by proposing a set of relatively non-controversial, non-binding principles on the military use of AI, but even these principles drew endorsements from fewer than 50 states. Overall, Crootof said she believes there is little room for international coordination on regulating autonomous military systems.
Deeks began by noting that there is limited conversation in the U.S. and Europe about how the international community should regulate national security AI. Military AI presents a “double black box,” according to Deeks. This means that AI adds another layer of concealment to national security operations that are already relatively hidden from public view.
To ensure the national security AI systems in use by U.S. agencies align with principles of good governance, Deeks offered a few questions lawmakers should consider when regulating these systems: Do the systems work effectively and efficiently? Do they operate legally? And do they foster accountability? Deeks cautioned that officials working in classified settings can engage in groupthink, which promotes bias—a pitfall which will need to be avoided if AI is to be used to serve the public interest. Most of the regulation concerning national security uses of AI in the U.S., Deeks said, will restrict executive branch behavior. This might make crafting and passing regulations easier because the ambit of regulation affecting executive branch actions is relatively discrete. But the widespread belief in the United States that the U.S. and China are in a “Cold-War” type competition with one another might slow or prevent regulatory progress.
Will Congress meaningfully regulate national security AI? And if so, how? Deeks began to answer these questions by discussing the two areas in which Congress typically imposes national security regulations. The first is where the use of a tool or system might jeopardize the safety of U.S. persons or the country’s international standing, as with the Detainee Treatment Act and the War Crimes Act. The second concerns giving Congress visibility into U.S. military activities.
Any regulation of military AI will present serious constitutional questions about the balance of power between the president and Congress in directing U.S. military policy and activities. The deepest source of congressional authority to regulate military AI, Deeks asserted, is the power of the purse. But she said the Foreign Commerce Clause could be another constitutional basis for regulation. Additionally, Deeks noted that there are less direct paths to influence U.S. companies’ development of AI that could be used for national security purposes. For example, Congress could seek to limit development of these technologies by tightening export rules concerning AI-enabled systems.
Deeks also discussed the specific statutory approaches Congress may take if it regulates military AI. These include:
- requiring that the president be notified when a high-risk AI-enabled tool is in use, and potentially that the president sign off on that use;
- prohibiting the president or secretary of defense from delegating the authority to use AI in military contexts;
- restricting the use of AI-enabled weapons in circumstances where there is a foreseeable risk to U.S. persons;
- protecting Americans’ constitutional rights, such as privacy, from violation where AI systems are in use;
- prohibiting the use of AI where it is likely to influence U.S. politics;
- establishing a new oversight body, perhaps modeled on the Privacy and Civil Liberties Oversight Board (PCLOB), to oversee the use of military AI;
- prohibiting a particular use of military AI or its use in a particular context; or
- requiring a human to participate in decisions to use an AI-enabled weapon, though Deeks cautioned that requiring a “human in the loop” would run into executive branch arguments about the president’s commander-in-chief authorities.
Following the experts’ initial remarks, the study group participants engaged with the experts in an open discussion. Topics included whether to advocate for more rapid advancement of military AI systems; the implications of dual-use AI; what questions to ask of government agencies that have reporting requirements under the National Defense Authorization Act related to military and intelligence systems relying on AI; fora for establishing principles-based consensus around military AI; and regulating the use of autonomous weapons by non-state actors.
Visit the Congressional Study Group on Foreign Relations and National Security landing page to access notes and information on other sessions.