Why the AI revolution hasn’t swept the military

A customer uses the new face-recognition software on the new iPhone X inside the Apple Store on Regent Street in London, Britain, November 3, 2017. REUTERS/Peter Nicholls

In games such as chess and Go, artificial intelligence has repeatedly demonstrated its ability to outwit the experts. Ad networks and recommendation engines are getting eerily good at predicting what consumers want to buy next. Artificial intelligence, it seems, is changing many aspects of our lives, especially on the internet.

But what has been described as a revolution in artificial intelligence hasn’t yet swept the U.S. military. While there are frequent forecasts that AI will revolutionize military work, there is a huge difference in the breadth and depth of AI adoption between the military and online commerce, and that difference has a lot to do with the data available to military AI systems.

The data that the U.S. military needs in order to train AI algorithms to recognize, for example, the signals coming from adversary sensors or platforms are difficult to collect. AI algorithms, deep learning variants in particular, generally require huge amounts of accurately labeled data relevant to each specific problem domain. Military adversaries develop sophisticated tactics and technologies to prevent the collection of this data, or to ensure that we collect the wrong data. If our information about the military situation is corrupted, the decisions we make may be far from optimal, handing an advantage to the adversary.
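
To make that dependence concrete, here is a minimal sketch in Python, using synthetic data and logistic regression as a stand-in for a deep network (nothing here reflects any actual military system), of how corrupted labels degrade what a supervised algorithm learns:

```python
# A minimal sketch: synthetic data, logistic regression as a stand-in for a
# deep network. Flipping a fraction of training labels, a crude proxy for
# adversary-corrupted data, typically degrades the learned classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled signal-recognition dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for corrupt_frac in (0.0, 0.1, 0.3):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < corrupt_frac  # adversary flips some labels
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"{corrupt_frac:.0%} of labels corrupted -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Even in this toy setting, accuracy typically falls as the fraction of flipped labels grows, and that is precisely the lever an adversary wants to pull.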

This is especially true in dynamic conflict situations in which technologically advanced adversaries struggle to gain information about their opponents using electromagnetic sensors and communications links. In these scenarios, each side simultaneously engages in electronic warfare (EW) to deny the use of those same sensors and communications links. When algorithms are used to adapt to these dynamic EW interactions, the kinds of learning applied to strategy games like Go are much more difficult to implement.

There are defense applications in which modern AI algorithms do well, such as recognizing complex patterns, or changes, in images or signals that can be collected in bulk by intelligence sensors. Satellite sensors, for example, can collect data on large areas over time and look for patterns of change in agricultural use, construction, transportation, and shipbuilding. Algorithms designed to detect such changes can greatly reduce the time-consuming effort of sorting through massive piles of videos and images. The U.S. Defense Department’s Project Maven, for example, collects large volumes of video that analysts label according to the kinds of activities recorded; deep learning algorithms are then trained to recognize similar patterns in new video.
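
The core idea behind change detection is simple to illustrate. The toy Python sketch below, with simulated imagery and an invented threshold, flags pixels that differ between two co-registered scenes; a production pipeline would replace the threshold with learned models and far more careful preprocessing:

```python
# Toy change detection between two co-registered scenes, simulated as NumPy
# arrays. Real pipelines add registration, calibration, and learned models;
# this shows only the core idea of flagging pixels that changed.
import numpy as np

rng = np.random.default_rng(1)
before = rng.random((256, 256))              # scene at time t0 (simulated)
after = before.copy()
after[100:120, 80:140] += 0.5                # simulated new construction
after += rng.normal(0.0, 0.02, after.shape)  # sensor noise on the revisit

diff = np.abs(after - before)
changed = diff > 0.25                        # threshold chosen for illustration
print(f"flagged {changed.sum()} changed pixels "
      f"({changed.mean():.2%} of the scene)")
```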

The U.S. military has seen great success in rolling out autonomous vehicles (another class of systems that has been subsumed under the ever-widening scope of “artificial intelligence”), but these vehicles are largely guided by algorithms carefully crafted by engineers to ensure that the vehicles behave in predictable ways. While these control algorithms are adaptable, they do not learn their basic behavior from millions of examples through trial and error the way deep learning methods do. In these cases, the development and behavior of the key algorithms are based on data that is relatively safe from denial or deception by an adversary.
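
The distinction is easy to see in code. A hand-engineered controller, like the hypothetical heading-keeping rule sketched below, encodes its behavior in a few designer-chosen lines rather than in millions of learned parameters, which is what makes it predictable and auditable:

```python
# A hand-crafted proportional controller of the kind used in engineered
# vehicle autonomy: its behavior comes from a designer-chosen gain, not from
# millions of learned parameters, so the same input always yields the same
# command. (The function and gain are hypothetical, for illustration only.)
def heading_correction(desired_heading: float, actual_heading: float,
                       gain: float = 0.5) -> float:
    """Return a rudder command proportional to heading error, in degrees."""
    # Wrap the error into [-180, 180) so the vehicle turns the short way round.
    error = (desired_heading - actual_heading + 180.0) % 360.0 - 180.0
    return gain * error

print(heading_correction(90.0, 75.0))  # 15 degrees of error -> 7.5 correction
```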

Is more advanced AI actually flourishing in secret throughout the military, unknown to civilians because of classification restrictions? The answer is no. It is true that specific system capabilities, tactics, and vulnerabilities must remain classified. But Department of Defense programs regularly share unclassified success stories in order to inspire new workers to enter the field, entice businesses to invest their internal R&D dollars to compete, and justify continued funding. Oversight of the DoD budget requires most programs to provide some description of their purpose and progress.

By contrast, some of the internet’s most popular platforms are designed to produce data that is fed back into AI systems to improve their performance. Every day, billions of internet users post data about their activity, often labeled with captions, hashtags, locations, or links to friends. These bits of related data carry information about the subjects of the images we post, our individual interests, our social networks, our preferences, and so on. As social media and internet commerce platforms absorb and store this data, they accumulate enormous databases that are ideal for training, testing, and rapidly improving AI algorithms. Those algorithms are used in turn to infer our individual interests, and information about us is sold to advertisers, who then decide whether to pay for an ad that pops up on the screen.
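
To see why this labeling comes free, consider the hypothetical sketch below, with invented posts and hashtags: users attach the labels themselves, so every post arrives as a ready-made training pair for an interest classifier:

```python
# Hypothetical sketch: posts arrive effectively pre-labeled by their hashtags,
# giving the platform free (text, label) training pairs for an interest model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["loved this trail run", "new gpu benchmarks out today",
         "marathon training day", "building a pc this weekend"]
hashtags = ["fitness", "tech", "fitness", "tech"]  # labels supplied by users

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(posts), hashtags)

# Infer an interest for a brand-new post; nobody was paid to label anything.
print(model.predict(vectorizer.transform(["long run before the race"])))
```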

This game is for the most part cooperative: we users provide access to our behavioral and preference data and in return get free services, information, and recommendations, while internet platforms make money from selling ads. In the vast majority of daily data transactions, no player in this game is trying to corrupt the decisions being made.

The military simply has no comparable sources of data to feed into its systems. Social media companies get data for free, while the military has to build specialized systems and pay people to collect and label information. And the data it most seeks—about adversary systems and behavior—is the most difficult to collect and requires careful sorting to avoid deception.

While the military will continue to face a decision environment enveloped in the fog of war, popular games will continue to be learned through hundreds of millions of online plays that train the next AI. The military will have to keep struggling to collect data for use in automating decisions, particularly against increasingly sophisticated adversaries. AI may prove useful in solving some of these problems, but it will be a long, slow evolution. In the meantime, we should continue to invest in the widest range of theory, computation, and algorithms to improve our information-driven systems in the face of competition much more sophisticated than what we face now.

Tom Stefanick is a visiting fellow at the Brookings Institution.