
The camouflage-clad cadets are huddled around a miniature arena in the basement of a building dug into the cliffs on the West Point campus. They’re watching a robotic tank about the height of a soda can with a metal spear attached whir into action. Surrounded by balloons of various colors representing either enemy fighters or civilians, the tank, acting on its own, uses a thumbnail-size camera to home in on a red balloon. The cadets wince as an earsplitting pop suddenly reverberates through the room: One “ISIS fighter” down.

This story also appeared in Washington Post

That startling bang is the object of this exercise, part of a class in military ethics being taught to these sophomores at the U.S. Military Academy. The cadets have programmed the tank with an algorithm directing it to use its lance to “kill” the enemy fighters; now they are tweaking it to make the robot either more or less aggressive in fulfilling its mission — with as little harm to unintended targets as possible.

With a panoply of deadly autonomous weaponry under development today, the popping balloons are meant to trigger questions in the students’ minds about the broader ethical decisions they will face as commanders in the field. Col. Christopher Korpela, the director of West Point’s robotics center and an earnest spark plug of a man in both demeanor and frame, considers the deployment of such killing machines to be inevitable and wants to ensure that these officers-in-training are prepared to cope with them. “There’s this very visceral effect, where this robot is popping a balloon,” Korpela says. “It’s a balloon, but it’s being destroyed like a human would be, and it makes it a little more real.”

The students react to the challenge in predictably wide-ranging ways. Some, worried that even a slight breeze from the building’s air conditioning system could push a civilian balloon into the robot’s metal spear, start by teaching their miniature tanks to turn 180 degrees away from any civilian detected, forfeiting the opportunity to kill the enemy. Others program their tanks with a more gung-ho approach, sometimes leading the machines to slay balloons — including “civilians” — with abandon.

“Some of [the cadets] maybe program too much ethics in there, and maybe they’re not killing anyone or maybe they put just enough in,” Maj. Scott Parsons, an ethics professor at West Point who helps teach the lessons, told me when I visited in 2019. “Our job is, we fight wars and kill other people. Are we doing it the right way? Are we discriminating and killing the people we should be and … not killing the people we shouldn’t be? And that’s what we want the cadets to have a long, hard think about.”
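
In code, the dial the cadets are turning can be as simple as a confidence threshold. The sketch below is purely illustrative, not the cadets' actual program; the color convention, detection confidence, "aggressiveness" parameter and turn-away rule are all assumptions made for the example.

```python
# Illustrative sketch only -- not the cadets' actual West Point code.
# Assumes a detector that reports balloon color and a confidence score.
from dataclasses import dataclass

@dataclass
class Detection:
    color: str         # "red" = enemy, "blue" = civilian (assumed convention)
    confidence: float   # 0.0 - 1.0 from the onboard camera classifier
    bearing_deg: float

def choose_action(detections, aggressiveness=0.5):
    """Return a drive command for the toy tank.

    aggressiveness: how much detector confidence is required before
    committing to a strike. Low values make the tank cautious; high
    values make it strike on weaker evidence.
    """
    for d in detections:
        # The cautious rule some cadets chose: if any civilian is seen,
        # turn 180 degrees away and give up the engagement.
        if d.color == "blue" and d.confidence > 0.2:
            return ("turn", 180.0)
    for d in detections:
        # Strike only when confidence clears the tuned threshold.
        if d.color == "red" and d.confidence >= (1.0 - aggressiveness):
            return ("advance_and_pop", d.bearing_deg)
    return ("search", 0.0)

# A faint red detection is ignored at low aggressiveness
# but attacked at high aggressiveness.
dets = [Detection("red", 0.55, bearing_deg=10.0)]
print(choose_action(dets, aggressiveness=0.3))  # ('search', 0.0)
print(choose_action(dets, aggressiveness=0.8))  # ('advance_and_pop', 10.0)
```

The entire ethical posture of the toy tank collapses into a few numbers like these, which is exactly the point of the exercise.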

The scale of the exercises at West Point, in which roughly 100 students have participated so far, is small, but the dilemmas they present are emblematic of how the U.S. military is trying to come to grips with the likely loss of at least some control over the battlefield to smart machines. The future may well be shaped by computer algorithms dictating how weapons move and target enemies. And the cadets’ uncertainty about how much authority to give the robots and how to interact with them in conflict mirrors the broader military’s ambivalence about whether and where to draw a line on letting war machines kill on their own. Such autonomous machines were once so far beyond the technical grasp of scientists that debating their ethics was merely an intellectual exercise. But as the technology has caught up to the idea, that debate has become very real.

A move to stop ‘killer robots’

Already, the U.S. Navy is experimenting with ships that can travel thousands of miles on their own to hunt for enemy submarines or ships that could fire guns from just offshore as the Marines storm beaches. The Army is experimenting with systems that will locate targets and aim tank guns automatically. And the Air Force is developing deadly drones that could accompany planes into battle or forge ahead alone, operating independently from “pilots” sitting thousands of miles away in front of computer screens.

But while the march toward artificial intelligence in war continues, it doesn’t progress uncontested. Mary Wareham is one of the leading activists pushing governments to consider the moral ramifications of using AI in weapons. Originally from New Zealand, Wareham, whom I spoke to at her D.C. office in July 2019, has spent most of the past 20 years working for Human Rights Watch, trying to get governments to ban antipersonnel weapons such as cluster bombs and land mines. Now, as the advocacy director for the organization’s arms division, she is working to persuade world leaders to impose sweeping restrictions on autonomous weapons.

In October 2012, Human Rights Watch and a half-dozen other nongovernmental organizations — worried about the rapidly growing capability of drones and the breakneck pace of innovation in artificial intelligence — hatched the Campaign to Stop Killer Robots. The following year, the U.N. Convention on Certain Conventional Weapons (CCW) took up the question of whether the creation, sale and use of lethal autonomous weapons systems should be banned outright. Every year since then, Wareham has joined others from the Campaign to Stop Killer Robots in pressing her cause in the same dilapidated room at the United Nations’ office in Geneva.

Her core argument is that, because machines lack compassion and can’t sort through difficult ethical alternatives, using them to kill crosses a moral threshold. Machines, she argues, can’t judge whether their actions create a justifiably proportional risk to civilians, a key requirement in international law. Plus, she adds, a widespread embrace of such machines could make wars more likely, as robots might make tragic mistakes and it wouldn’t be clear who should be held responsible. Is it the person who launched the weapon? The weapon’s designer? Its builder? A public-opinion poll the campaign conducted in December 2020 found majority opposition to the development of AI weapons in 26 out of 28 nations surveyed, including the United States, Russia and China.

But thus far Wareham has made little headway in getting a ban through the CCW, which works as a consensus body; no draft treaty is presented to the United Nations unless all 125 member countries consent. So far only 30 nations have said they agree, while the United States, Russia and Israel, which are investing deeply in AI weaponry, have refused. (China has quixotically supported a ban on the use but not the development or production of such weapons.) If those countries don’t want a legally binding treaty, Wareham says, “We’re asking, ‘What can you support?’ Because it seems like nothing at the moment. … We’re in a dangerous place right now.”

Beyond the moral conundrums posed by AI, computer reasoning remains pervasively unpredictable, diverging from human logic in ways that can incidentally cause casualties or mission failure. Machines can lack common sense: computers seek the most direct solution to a problem, not the most ethical or practical one. In 2018, for example, a self-driving car being tested by Uber struck and killed a woman in Arizona. A nearly two-year government investigation revealed that the car hadn’t malfunctioned; rather, it had been programmed to look for pedestrians only in crosswalks. Jaywalking, as the woman was doing, was beyond the system’s grasp, so the car barreled ahead.

AI researchers call that “brittleness,” and such an inability to adjust is common in systems used today. This makes decisions about how much battlefield risk to embrace with AI particularly challenging. What if a slight uniform variation — some oil soaked into a shirt or dirt obscuring a normal camouflage pattern — confuses a computer, and it no longer recognizes friendly troops?
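
To make the failure mode concrete, here is a toy sketch of that kind of brittleness, loosely modeled on the crosswalk rule described above. It is not the actual Uber perception code; the map region, labels and detector output are invented for illustration.

```python
# Toy illustration of brittleness -- not a real perception stack.
# A rule constrained to consider pedestrians only inside a mapped
# crosswalk region never even entertains the idea of a jaywalker.

CROSSWALK_X_RANGE = (40.0, 60.0)  # assumed map region, meters along the road

def pedestrian_alerts(detections):
    """detections: list of (label, x_position) tuples from an assumed sensor."""
    alerts = []
    for label, x in detections:
        in_crosswalk = CROSSWALK_X_RANGE[0] <= x <= CROSSWALK_X_RANGE[1]
        # The brittle rule: "pedestrian" is only a valid hypothesis
        # inside the crosswalk zone.
        if label == "person" and in_crosswalk:
            alerts.append(("brake", x))
    return alerts

print(pedestrian_alerts([("person", 50.0)]))  # [('brake', 50.0)] -- handled
print(pedestrian_alerts([("person", 75.0)]))  # [] -- jaywalker, silently ignored
```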

Machines present another potential defect: In their search for mission success, they can be ruthless cheats. For decades AI researchers have designed games as a testing ground for algorithms and a measure of their growing wisdom. Games, with their highly structured rules and controlled conditions, offer a safe nursery in which computers can learn. But in a notorious case, an AI system taught to play Tetris by researcher Tom Murphy at Carnegie Mellon University was instructed not to lose. As blocks descended faster and faster from the top of the screen, it faced inevitable defeat. So the algorithm found an ingenious solution: Pause the game and leave it paused — thus avoiding a loss. That kind of indifference to broader norms about fairness doesn’t matter in a game but could be catastrophic in warfare.
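
A rough illustration of that loophole-finding, in the spirit of the Tetris episode (this is not Murphy's actual system, whose game interface and search were far more elaborate): give a planner the bare objective "never reach a losing state," include pausing among its legal moves, and pausing forever satisfies the objective perfectly.

```python
# Minimal sketch of specification gaming; the game model and search
# are invented for illustration.

def outcome(action, seconds_until_topout=3):
    """Crude world model: playing on eventually tops out; pausing freezes time."""
    if action == "pause":
        return "still_playing"          # nothing ever changes while paused
    return "still_playing" if seconds_until_topout > 1 else "game_over"

def choose_action(legal_actions, horizon=10):
    """Pick any action whose simulated rollout never hits 'game_over'."""
    for action in legal_actions:
        state = "still_playing"
        for t in range(horizon):
            state = outcome(action, seconds_until_topout=3 - t)
            if state == "game_over":
                break
        if state != "game_over":
            return action               # objective satisfied -- by any means
    return legal_actions[0]

# With the blocks about to top out, the only "winning" move is not to play.
print(choose_action(["move_left", "rotate", "pause"]))  # -> 'pause'
```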

Growing investment in AI

The debate over whether to use AI to cause mortal harm has accelerated in recent years, driven by a wave of Pentagon investment. The Defense Department’s unclassified budget requested $927 million for artificial intelligence, including weapons development, in 2020 and $841 million for 2021. The Defense Advanced Research Projects Agency, a key birthplace of advanced military technologies, plans to spend $2 billion on AI over five years, concluding in 2023.

In December 2020, the Air Force successfully used artificial intelligence on a U-2 spy plane for the first time. The test limited the AI to managing navigation and radar while a human pilot controlled the jet, but it marked a milestone: AI deployed on an operational aircraft, albeit an unarmed surveillance plane.

The test was spurred by the campaigning of Will Roper, a former longtime defense official who ran weapons buying for the Air Force during the Trump years and was one of the Pentagon’s chief AI evangelists. Roper believes that military planners have to move ahead with testing AI, even if there are many unknowns, because the United States’ competitors are rapidly advancing their own abilities. “I fear our lack of keeping up,” he said during a roundtable with reporters shortly after the spy plane test. “I don’t fear us losing our ethical standards, our moral standards.”

Advanced AI means weapons operating faster, leaving human operators and their molasses reflexes behind. Roper said that because of the way AI capabilities are accelerating, being behind means the United States might never catch up, which is why he’s pushing to move fast and get AI out into combat. “It doesn’t make sense to study anything in the era of AI,” he said. “It’s better to let the AI start doing and learning, because it’s a living, breathing system, very much like a human, just silicon based.”

But while the technology is advancing, the military is still confronting the much larger ethical question: How much control should commanders give machines over the decision to kill on the battlefield? There’s no easy answer. The machines can react more quickly than any human, with no fatigue or war weariness dulling their senses. Korpela and Parsons both served in Afghanistan and Iraq and have seen how human beings in a war zone can be prone to poor decision-making. When close friends are killed in combat, soldiers can and do make the wrong choices about whom and what to target with firepower. Machines, by contrast, don’t get emotional and remain focused, they say.

The person tasked with kick-starting AI in the military was Lt. Gen. Jack Shanahan, a former F-15 pilot who was the first director of the Pentagon’s Joint Artificial Intelligence Center, created in 2018 to serve as the nexus for all military AI development. Shanahan was still building out his rapidly expanding team when I interviewed him at his office in Arlington, Va., in early 2020, on the day he announced he would be retiring later that year. He said his team was just starting work on what would be its first AI project directly connected to killing people on the battlefield. The aim is to use AI to help shorten the time it takes to strike by simplifying the process of picking targets — signaling almost instantly, for example, whether places like hospitals or religious sites are in the line of fire. It’s expected to be used in combat in 2021.

Shanahan said the project was too new to discuss in detail; even if it weren’t, he probably wouldn’t say much in order to shield secrets from countries like China and Russia that are aggressively pursuing AI themselves. The military is going to put AI into its weapons despite debates about morality, Shanahan told me: “We are going to do it. We’re going to do it deliberately. We’re going to follow policy.”

In 2019, Shanahan summarized what AI warfare would look like, speaking at a government-sponsored conference. “We are going to be shocked by the speed, the chaos, the bloodiness and the friction of a future fight in which this will be playing out in microseconds at times,” he said, in what sounded like a warning as much as a forecast.

Shanahan understands that the public may be skeptical. He and his colleagues were surprised in April 2018 when about 4,000 Google employees signed a petition demanding that the company pull out of a program he ran called Project Maven, which used artificial intelligence to identify and track objects in images from drone footage and satellites. That June, Google said it would not renew its contract for the program and promised not to work on other systems that could be directly used in weaponry. Similar petitions have circulated at Amazon and Microsoft, but neither company has backed away from Pentagon work. (Amazon founder Jeff Bezos owns The Washington Post.)


Those petitions are not a coincidence, as Wareham and the Campaign to Stop Killer Robots have been working hard to organize tech workers to resist advancing AI for weapons. The effort reflects one substantial difference between AI and most other major military technologies developed in the past century: Nearly all of the advances in AI are brewing in commercial technology companies, not traditional defense contractors. Instead of employees knowingly joining arms makers, they’re working on projects in Silicon Valley that have pieces migrating into weaponry. And those tech companies aren’t completely dependent on the military for work, unlike the defense firms, although the Pentagon money is still a lure.

Though the protests by Google employees were jarring for Shanahan, he’s acutely aware of the Defense Department’s reliance on commercial firms. However difficult it may be, Shanahan maintains that the Pentagon needs to be talking publicly about how it will use AI. If defense officials can’t persuade tech workers through greater transparency to at least tolerate military programs capitalizing on their innovations, the Pentagon will miss out on revolutionary opportunities. “We’re not used to that conversation,” he said. “We kept [technologies] bottled up because it was secret Department of Defense special capabilities in the basement of the Pentagon. That’s not the case anymore.”

In the wake of the Maven blowup, the Pentagon asked the Defense Innovation Board, an advisory group of military-friendly technologists and engineers, to examine the military’s use of AI and associated ethical issues. The group came up with a list of five major nonbinding principles for how the military should pursue AI, focused on extensive testing and the ability to shut down autonomous weapons — but not limiting what the military could pursue.

Shanahan, speaking at a news conference announcing that the Pentagon would adopt these principles in early 2020, reiterated that he didn’t want to take anything off the table. “The last thing we wanted to do,” he said, “was put handcuffs on the department to say what you could not do.”

A British Brimstone missile in 2015. It has been exported internationally with the capability to be pilot-controlled as well as autonomous. (PHILIP COBURN/AFP via Getty Images)

Moving closer to combat

The only rules for autonomous military weapons themselves were written a decade ago by a mid-level Pentagon official trying to imagine computer capabilities that were just beginning to seem plausible. Paul Scharre, a former Army Ranger who served in Iraq and Afghanistan, was working in the Defense Department’s policy shop as a civilian in 2010 when he was handed the assignment of writing the department’s policy guidelines for AI weaponry. The Pentagon was in the middle of deliberations about a new drone meant to be launched from aircraft carriers and eventually equipped to carry lethal missiles. The engineers involved in developing the drone, known as the Northrop Grumman X-47B, wanted to be sure they had the leeway to build and develop the weapon with considerable autonomy and didn’t want to create something that officials would later decide was too independent to be used in the field.

“People could see the bend towards greater autonomy, and people were asking, ‘How far are we willing to go with this?’ ” Scharre told me. “If it’s on its own and you lose contact with it, what do you want it to do? Does it come home, does it attack preplanned targets? Can it attack emerging targets of opportunity?”

The policy he helped write, released around the time the Campaign to Stop Killer Robots was being formed in 2012, was meant to make it clear to weapons designers that they could continue their work, Scharre said. His main innovation was a requirement that systems capable of killing on their own be reviewed by a trio of senior Defense Department officials. But the policy didn’t prohibit anything. “At the end of the day, it’s worth pointing out that the directive doesn’t give the answer,” Scharre said. “It doesn’t say this is what you’re allowed to do and not, for all time.”

The Navy eventually abandoned the idea of arming the X-47B. And no other weapon has yet been deemed far enough along to qualify for the special review required by Scharre’s policy, according to knowledgeable current and former officials. But Pentagon officials say the moment is approaching when AI weapons will see combat.

The United States isn’t alone in venturing into this territory. Nearly two decades ago, Britain built a missile called the Brimstone that was meant to go after enemy vehicles it selected on its own after being released from British Tornado fighters. Two computer algorithms — not the pilots — dictated its actions. Brimstone wasn’t exactly an example of AI: Its algorithms were written by people, whereas AI weapons will rely on code computers write themselves — extensive programming that’s nearly impossible to review and verify. Still, when the missile was ready for use, British commanders — in the midst of combat in Iraq — were facing strong public pressure about civilian casualties and worries about international law. All military commanders, under the rules of war, must be able to show that they “discriminate” between legal military targets and civilians, something that’s hard to do if the missile rather than a person is deciding what to strike. Ultimately, Royal Air Force commanders chose not to deploy the missile in Iraq, instead spending a year redesigning it to add a mode allowing pilots to pick the targets.

The British did, however, deploy this technology in Libya, when in 2011 a pair of Tornado fighter jets fired 22 Brimstones in autonomous mode at a convoy of eight Libyan military vehicles that had been shelling a town in the middle of the desert. Seven of the eight vehicles were seen engulfed in flames after the strike, with the eighth presumed destroyed.

Britain has since exported the missile with the capability to be pilot-controlled as well as autonomous, including to Saudi Arabia, which has used it in Yemen, according to British military officials. (The Brimstone’s manufacturer won’t confirm who has it or how it’s being used.) And the United States is now developing a missile similar to the Brimstone, according to Defense Department budget documents.

Meanwhile, Scharre’s views have evolved over the past 10 years, partly because weapons systems that were merely conceptual back then are now close to being on the battlefield. He still doesn’t support a blanket ban on autonomous weapons systems — a position that is consistent with the 2012 rules he wrote — but he has recently embraced the possibility of restrictions on AI weapons that target people, as opposed to tanks, planes and ships.

Some of the future officers working with robotic tanks at West Point had adopted their own wary view of autonomous weapons. After repeated trial and error, they’d made good progress in programming the tanks to slay enemy balloons more efficiently, but many still weren’t convinced that weapons injected with AI are ready to be put in the field. “It’s still a liability at the end of the day,” said Cameron Thompson, a cadet from Littleton, Colo., noting that commanders would ultimately be held accountable for what the machines do. “We realize that it’s very good at its job and that we can program it very well. However, I don’t think a lot of people want to take the risk right now of being the first person to put this into an actual environment and see what happens.”

