After three days of playing chicken with enemy fighter jets piloted by computers, an algorithm designed to guide an F-16 in combat finally got to square off against a human Thursday, in the finale of a competition of combat simulations run by the Pentagon’s advanced research group. The final score: computer five, human zero.
The competition, called AlphaDogfight, was designed to test eight artificial intelligence systems that had been trained in air combat. It’s part of a program run by the Defense Advanced Research Projects Agency (DARPA) gauging whether computers are ready to take over more of the fighting U.S. Air Force pilots typically handle.
DARPA’s conclusion is an emphatic yes.
“The tools which we’ve developed are now ready for weapons systems designers to be in the toolbox and be used,” Dr. David Honey, the acting deputy director of the agency, said in a speech opening the competition.
The winning computer pilot, trained by a Maryland-based company called Heron Systems, spent the first three rounds of computer simulations dominating its artificial intelligence (AI) opponents with a game plan unimaginable for human pilots. The system would immediately turn toward its enemy in each combat scenario, flying directly at its opponent while firing its gun and veering away at the last possible moment, sometimes within 100 feet of a midair collision.
It’s the sort of tactic that wouldn’t be allowed in “Top Gun”-style contests the Air Force and Navy run as training for their pilots. Normally, pilots aren’t permitted to get within 500 feet of each other or directly face off. The distance is meant to prevent accidents, but also has a practical combat rationale: if a pilot blows up the enemy at too close a range, the pilot is liable to fly into the resulting debris, sucking metal into their own engine and destroying their aircraft.
The AI system isn’t constrained by self-preservation instincts. It’s a sign of why AI might be helpful — computer brains can develop novel solutions — but also why they have to be monitored for unintended consequences.
The AlphaDogfight contest is one leg of a five-year, $2 billion DARPA initiative to get AI ready for combat. While most of that money is going to research that’s more theoretical in nature, Thursday’s contest showed how close weapons developers are to putting AI in the driver’s seat.
The DARPA official running the competition, Col. Dan Javorsek, said he saw the shift in air combat through a historical lens, comparing it to the moment tanks were introduced to the battlefield during World War I.
“How are the fighter pilots of today not the horse-mounted cavalry of the 21st century?” he asked rhetorically. Javorsek, a former F-16 pilot, said DARPA is focusing on finding ways for computers to take over more and more of the menial tasks pilots tick off during most flights. But he said machines eventually will play a key role in combat, once pilots learn to trust the algorithms.
“The fighter pilot community just doesn’t inherently trust this stuff quite yet, but the trend has been moving, albeit slowly, in the right direction,” he said.
In the short term, Pentagon weapons developers are studying ways for pilots to work with computer “wingmen” — other aircraft controlled by algorithms that can help the pilots carry out their missions.
One particularly useful application of AI pilots is as potential sacrificial lambs, absorbing enemy ammunition while shielding human pilots, according to Peter Singer, author of Burn-In: A Novel of the Real Robotic Revolution.
“It’s more akin to how a king would use a peasant back in the day when they go on a lion hunt,” said Singer, a senior fellow at New America, a think tank.
The AI pilots competing in the DARPA contest spent a year practicing in thousands of simulations, developing expertise through trial and error. AlphaDogfight, launched in September 2019, included several rounds of tests before this week’s finale. At first, the computers repeatedly made simple mistakes, crashing their aircraft into the ground. On Thursday, one of the four finalists crashed three times. But all of the systems got better as the contest progressed, yielding competitive dogfights in the finals.
Heron Systems’ hyper-aggressive strategy led to quick victories in the majority of the heats in which it competed, tallying 213 kills against other computers while dying only 16 times itself.
After winning the earlier rounds, the Heron system went up against an F-16 pilot whose identity was withheld by DARPA due to security concerns but who goes by the call sign Banger. The pilot, wearing a virtual-reality headset and flying with a joystick, was given five chances to beat Heron. Having watched the prior heats, Banger tried to counter Heron’s attack strategy with evasive maneuvers. He avoided a quick death, but Heron won all five rounds without the human pilot firing a single shot.
The contest is emblematic of a John Henry-type struggle: the folk-legend railroad worker who raced a steam-powered drill. In the legend, Henry out-drills the machine but dies from the exertion.
“There’s a nobility to the human role,” Singer said, “but it symbolically points to machines in more and more roles.”