
An Army strategist, John Arquilla, who in the early ’90s worked at the California think tank RAND, was one of the first to imagine a “cyberwar” force built around small, widely dispersed groups of soldiers equipped with the latest sensors and communications technology.

This “highly networked” army would be capable of instantly reacting to an enemy and hitting him where he’s weakest. “What distinguishes the victors is their grasp of information,” Arquilla and a colleague wrote in 1993.

Arquilla’s vision proved irresistible to Army planners. Observing the rapid spread of personal computers and the growing popularity of the Internet, the Army in the late 1990s decided to create a digital revolution of its own.

By transforming every soldier into a communications node, capable of transmitting and receiving large volumes of data from many sources, Army leaders imagined they could chart the path to an era of high-tech wars in which information was as important as bullets and shells.

But in doing so, the planners took a wrong turn, according to independent analysts. Instead of fixing their communications problems with lighter, easier-to-use radios and a simpler network, they chose heavier, more complex devices.

That solution, says Col. Gian Gentile, an Iraq war veteran now teaching at the U.S. Military Academy at West Point, “took [the challenges of] close fighting out of the equation.” It ignored, in effect, the risk of trying to spread high-tech electronics everywhere amid the rough and tumble of brutal combat.
