Cascading chaos: Nonstate actors and AI on the battlefield

February 1, 2022

This piece is part of a series titled “Nonstate armed actors and illicit economies in 2022” from Brookings’s Initiative on Nonstate Armed Actors.
In November 2018, then-Commander of Army Cyber Command Lt. Gen. Paul Nakasone expressed concern about nonstate actors getting their hands on artificial intelligence (AI)-enabled battlefield technology. That day is here.
In the last several years, inexpensive, commercial, off-the-shelf AI has proliferated in ways that level the playing field between the state actors developing AI-enabled technology and the nonstate actors they are likely to confront. The use of AI-enabled technologies such as drones, cyber tools, and large-scale mis- and disinformation by state actors and some nonstate actors offers important clues about the potential impact of a more widespread diffusion of these technologies to nonstate actors. Here we map the terrain of nonstate actors’ use of AI by shedding light on the threat, on potential policy solutions, and on the international norms, regulations, and innovations that can help ensure nonstate actors do not leverage AI for nefarious purposes.
What are AI-enabled battlefield technologies?
As Michael C. Horowitz puts it, artificial intelligence is more like the combustion engine or electricity than an airplane or tank: it enables or boosts existing technology rather than standing alone. A good example is China’s autonomous submarines, which rely on machine learning to identify target locations and then strike with a torpedo, all without human intervention. China already had submarines, but AI allows them to conduct surveillance, lay mines, and carry out attack missions untethered to data links that can be unstable in the underwater environment.
AI-based technologies are appealing to nonstate actors. Nonstate actors have fewer resources than states, and AI gives them a way to overcome these power imbalances. They have two main avenues for acquiring AI-enabled technology. The first is commercially available AI and resources like YouTube videos that provide instructions for building automated turrets that detect and fire munitions using a Raspberry Pi computer and 3D printing. The second is state-led development that is then exported. For example, Russia and China have begun developing an AI research partnership: Huawei set up research labs in Russia in 2017 and 2020 in hopes of pairing Chinese financing with Russian research capabilities.
Both avenues create a lower barrier to entry and are more difficult to control or regulate than nuclear weapons. Nuclear proliferation requires considerable financial and natural resources coupled with a great deal of scientific expertise. AI-based technologies are far less capital-intensive and often commercially available, making them more accessible to nonstate actors. Although the potential applications are extensive, the three areas with the most potential for AI to democratize harm are drones, cyberspace, and mis- and disinformation. In all of these spaces, AI is not creating the threat so much as amplifying and accelerating it.
Nonstate actors and AI-enabled drones, cyber, and mis/disinformation
For almost two decades, state actors such as the United States and Israel were the illustrative cases of drone use on the battlefield. However, nonstate actors have already begun to use drones: 440 unique cases of nonstate actors deploying drones have been identified. In 2017, the Islamic State group used a drone to drop an explosive on a residential complex in Iraq. Drones have also become a tool of drug cartels to transport narcotics and deliver primitive bombs. U.S. special operations forces have further noted that weaponized, commercially available drones damage morale because of the uncertainty they create. Tracking small airborne threats is challenging because most defenses are built for aircraft with larger, detectable radar cross-sections or for ground vehicles that can be blocked with barriers. In 2019, Iran used drones to attack heavily guarded oil installations in Saudi Arabia, in attacks claimed by Yemen’s Houthis, revealing the asymmetric advantages that small vehicles can provide.
As the above examples suggest, drones are already spreading and did not need AI, but AI will make these attacks more efficient and lethal. Machine learning will allow drone swarms to become more effective at overwhelming air defense systems. AI may also assist drones in targeted killings by making it easier to identify and strike specific individuals or members of an ethnic group.
A similar dynamic holds in cyberspace, where nonstate actors have already shown an adeptness at carrying out attacks. The hacktivist group Anonymous has used denial-of-service attacks to target a range of corporations and right-wing conspiracy theorists. In April 2021, the hacking group DarkSide launched a cyberattack on Colonial Pipeline, creating a temporary shock to the U.S. East Coast’s fuel supply and extracting a ransom paid in cryptocurrency. Jihadi terrorist groups have used the internet to efficiently coordinate attacks and distribute their propaganda. Furthermore, groups can conduct phishing schemes to deceive targets and acquire personal information. Machine-learning algorithms can be used to identify vulnerable individuals by processing large quantities of data to predict behavior. AI can also increasingly craft emails tailored to imitate specific individuals, allowing them to bypass security measures and set off fewer red flags.
Lastly, the area where AI can most amplify the baseline level of harm is online misinformation (inaccurate information spread without an intention to mislead) and disinformation (deliberately deceptive information); the latter obviously has the more nefarious motives. Online disinformation typically takes two forms. One is the generation or distribution of inaccurate content, which can be targeted at certain individuals or demographics to push particular ideologies. The 2016 U.S. election showed how state actors can microtarget particular demographics to manipulate public opinion. A second vehicle for disinformation is deepfakes, images or videos that are digitally altered to resemble someone else. In contested Jammu and Kashmir, Indian authorities have cited militant groups’ use of fake videos and photos to provoke violence as justification for restricting internet services.
One current limitation on disinformation is cognitive resources. For example, individuals generating disinformation must produce content that is not plagiarized or obviously the work of a non-native speaker of the language being used, and doing so at scale is challenging. However, large language models, which rely on machine learning to do text prediction, have become increasingly sophisticated, can overcome these cognitive limits, and are commercially available today. In a constructive use case, actors can use these models as creative aids to spark kernels of ideas for poems, essays, or product reviews. But nonstate actors could also misuse these models to generate disinformation at a large scale and manipulate foreign or domestic publics into believing ideologically motivated messages. Publics bombarded with disinformation might also reach the point where they do not know what to believe, a nihilistic outcome, since functional societies require a certain degree of trust for political and social cohesion.
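To illustrate how low the barrier to off-the-shelf text generation has become, the minimal sketch below continues a benign prompt with the openly released GPT-2 model via the Hugging Face transformers library; the specific model and library are illustrative assumptions, not tools named in this piece.

```python
# Minimal sketch of openly available text generation (benign, creative use case).
# Assumes the Hugging Face "transformers" package and the public GPT-2 model;
# both are illustrative choices, not tools cited in this piece.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible

# Download (on first run) and load a small, openly released language model.
generator = pipeline("text-generation", model="gpt2")

# Continue a benign prompt; sampling returns two candidate continuations.
prompt = "A short poem about the sea:"
outputs = generator(
    prompt,
    max_new_tokens=40,       # length of the generated continuation
    do_sample=True,          # sample rather than take the single most likely token
    num_return_sequences=2,  # produce two alternative drafts
)

for i, out in enumerate(outputs, start=1):
    print(f"--- draft {i} ---")
    print(out["generated_text"])
```

The point is not this particular snippet but the low barrier it represents: the same ease of use applies whether the output is a harmless poem or mass-produced propaganda, which is why the policy discussion below focuses on access and norms rather than on any single tool.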
Policy solutions
As the sections above suggest, AI-enabled technologies are evolving quickly in ways accessible to nonstate actors because of their commercial availability and affordability. The same commercial availability that makes these technologies accessible also makes them difficult to regulate. While 28 countries have already called for a ban on autonomous weapons, which rely on AI-enabled algorithms to identify and attack targets, countries like Israel, Russia, China, South Korea, the United Kingdom, and the United States are developing such weapons, so banning so-called “killer robots” is not politically plausible. Likewise, proposed bans on “AI systems considered a clear threat to the safety, livelihoods, and rights of people” would be very difficult to enforce since the AI genie is largely out of the bottle and commercially widespread. Furthermore, these proposed bans would apply only to states, which would increase the asymmetric advantage of nonstate actors, who would remain outside any such agreement.
Instead, policymakers should focus on three main areas. First, they should continue working with private actors. Incorporating private companies into deliberations about AI technology, especially autonomous drones, can be useful because those companies can be tasked with implementing hardware or software restrictions that affect whether nonstate actors can use these technologies to target state actors.
Second, the U.S. and other democratic states should embrace and practice norms that maintain human-in-the-loop systems, which require that humans make the ultimate decisions about lethal force. For these norms to have teeth, however, states must practice what they preach and model proper state behavior.
Third, the U.S. government should invest more resources in technology so that it can more easily identify and deter emerging threats. The recent passage of the United States Innovation and Competition Act, which provides $190 billion to enhance U.S. technological capabilities, is a step in the right direction. Legislation that increases investment in technology and STEM research will help the U.S. maintain and extend its technological edge.
Finally, AI-enabled technologies are emerging among states, the commercial sector, and nonstate actors alike, and nonstate actors that once lagged are catching up in ways that level the playing field. State actors need to help prepare law enforcement, businesses, and other entities such as schools and hospitals for the possible effects of nonstate AI misuse and the measures to take in such an event.