Democratizing harm: Artificial intelligence in the hands of nonstate actors

A destroyed vehicle is parked in front of Iraqi Prime Minister Mustafa al-Kadhimi's residence following an assassination attempt by an armed drone in Baghdad, Iraq, on November 7, 2021. (Prime Minister Media Office/Handout via Reuters)

EXECUTIVE SUMMARY

Advances in artificial intelligence (AI) have lowered the barrier to entry for both its constructive and destructive uses. Just a few years ago, only highly resourced states and state-sponsored groups could develop and deploy AI-empowered drones, cyberattacks, or online information operations. Low-cost, commercial off-the-shelf AI means that a range of nonstate actors can increasingly adopt these technologies.

As the technology evolves and proliferates, democratic societies must first understand the threat before they can formulate effective policy responses. This report helps them do both. It outlines the contours of recent AI advances, highlighting both their accessibility and their appeal to nonstate actors such as terrorist, hacking, and drug trafficking groups. Based on this analysis, outright bans on AI, or on autonomous vehicles that rely on AI, are unlikely to be effective or feasible: the technology is already so diffuse that such bans could not be enforced. Instead, public-private partnerships will be key. Incorporating software restrictions into commercial robotics, for example, would address the risk of nonstate actors using AI to program the flight and targeting of a drone.

Cultivating a broader and deeper talent pool in the science, technology, engineering, and math (STEM) fields will also strengthen democratic states' ability to guard against the misuse of AI-enabled technology. Lastly, democratic societies should work together to develop norms of ethical use. Such norms may not preclude misuse by nonstate actors, but they would at least create guardrails that impede the export of harmful AI technologies from states to nonstate actors and can shape how nonstate actors consider using these technologies.

  • Acknowledgements and disclosures

    Lori Merritt edited this paper, Rachel Slattery provided layout, and Richard Li offered helpful research assistance. Chris Meserole offered the generous opportunity to write about the topic of AI and nonstate actors for Brookings.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).