Dystopia rising — Winner takes all in the race for AI warfare supremacy
Your friendly chatbot may make mistakes, exhibit crude stereotypes, deceive with elan, possibly even take your job. But the coming machines of AI warfare will dwarf all our other concerns.

On 26 September 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was alerted by Soviet early-warning computers that a nuclear attack from the US was under way. He was expected to approve a massive nuclear response, with only minutes to decide. He had full authority. Something felt off to Petrov, and he refused to act.

It was a false alarm. Petrov had prevented a nuclear war.

Those of us who are Generation X or older will probably remember the classic sci-fi film The Terminator, directed by James Cameron and released a mere year after the Petrov event. Arnold Schwarzenegger played the part of a cyborg travelling back in time from 2029 to kill a young woman in 1984. The film was a megahit. It was thrilling and horrifying in equal measure. Most deliciously chilling was the personality of the cyborg – he was the perfect killing machine, cold, focused, precise, brutal, unstoppable, autonomous. The Terminator and Stanislav Petrov have little in common. 

You know where this is going, right?

The New York Times ran a feature last week about the rise of citizen startups developing weapons in Ukraine, most of them jury-rigged with off-the-shelf bits and pieces, glued together with home-grown software and AI smarts. The news report was liberally sprinkled with photos; most of the weapons looked as though they had been built by your mad uncle Hector. They employed amateur hobby drones, elastic bands, PlayStation controllers, cheap VR headsets, remote-controlled go-kart guns, text-based computer interfaces, crude ordnance.

The story was accompanied by grainy videos of these machines hunting down and destroying individual tanks and humans. (There was no actual footage of human deaths in the report, but it described soldiers running away and then simply giving up, waiting to be killed by barely audible overhead drones.)

The attitude of the article was upbeat, even jaunty, as befits a citizen resistance movement after an illegal invasion. And yet, it was disquieting on a number of levels. 

State-supported research


Not reported on, of course, is the activity taking place at well-capitalised and state-supported arms companies. Only rumours seep out; no grainy videos. The explosion of AI capabilities and the plummeting cost of remote munitions (especially drones) are changing the shape of warfare in real time, with proofs of concept forged in the crucible of small, vicious and chaotic conflicts in places like Ukraine and Gaza.

There has been much written about the marriage of AI and warfare and what horror it may portend. A recent report in The Economist makes several salient points, beyond the obvious:

We have long read about ‘the fog of war’ – the real-time confusion of the battlefield: the noise, the smoke, the challenge of unknown terrain or urban topology, the difficulty of identifying the enemy or distinguishing them from civilians – as well as the dark arts of weapons choice, deployment and trigger rules.

Much of this boils down to a ‘command and control’ problem. Who makes the critical decisions in the moment? Is it the terrified soldier in an unfamiliar downtown ruin? A field commander? An officer at some remote location with suboptimal communications and dodgy intelligence as his only guide? 

Or an autonomous AI, as will very soon be the case.

AI is able to ingest, analyse and act on vastly more information than any human – continuously informed by changing video, GPS locations, radio signals, enemy movements and human reports. More importantly, AI can choose appropriate weapons and target-locks, learning as it goes along, refining its tactics in real time and (most frighteningly) making autonomous decisions about when to kill, how many to kill, and what infrastructure to destroy.

It is in this last area that the ethics of AI warfare becomes complicated. Of all the heated debates about the governance and regulation of AI – to prevent racial discrimination, political misinformation or theft of copyright – the matter of AI warfare is the most vexed.

Existential competition


The first global nuclear weapons agreements were negotiated with common sense. On the understanding that nobody wins a nuclear war, and that everyone is equally threatened by it, treaties were drawn up and nearly every nation signed. This will not be the case when armies are commanded by AI. The side with smarter and faster “command and control” of the myriad variables of war will win not only the war, but probably the geopolitical throne too.

Here is the reason. If you are just a little bit ahead in the race for autonomous AI, you are essentially uncatchable. This is due to both the nature of exponential improvement and the dynamics of autonomous AI: a system that learns and refines itself compounds its advantage, so each gain accelerates the next, and a small head start widens rather than narrows. You will not only be ahead of the competition, but accelerating away into the distance. It is a zero-sum game and an existential competition – possibly the last to be contested. So don’t expect a global agreement on the ethics of AI warfare; the risk/reward maths is not the same as nuclear weapons game theory.

The only governance we currently have (at least from those militaries that choose to talk about it) is the optimistic assurance that there is “always a human in the loop”. This is faint comfort. A human in the loop is a guarantee of inefficiency, risk and delay, at least in the cold calculus of AI warfare. And that human will surely be quietly removed at some point, because the other side will make its decisions without the slightest moral qualm.

So, unlike Petrov in 1983, there will be no such heroes in the dystopian future of autonomous warfare. And there is now a double jeopardy in play.

Ungovernable, uncontrollable and unstoppable


First, the sandboxes of places like Ukraine and the Middle East will catalyse an epidemic of citizens (or, more likely, terrorists) launching swarms of cheap drones and other home-grown weapons (a 10,000-drone swarm is rumoured to be planned by a country in the Middle East). These will operate outside the disciplines and protocols of any national military chain of command, all acting under the questionable guidance of amateurishly hacked, and most probably bug-ridden, software, fuelled by open-source AI models downloaded off the internet.

Second, the military-industrial complexes of larger and richer countries are already up and running, spending money, hiring scientists, engineers and strategists to design and build scary things, all of them presumably terrified of the consequences of being left behind in the winner-takes-all race for AI warfare supremacy.

Your friendly chatbot may make mistakes, exhibit crude stereotypes, deceive with elan, possibly even take your job. All regrettable. But the coming machines of AI warfare, both those commissioned by governments and the ones currently being manufactured in hidden special-interest workshops, will dwarf all our other concerns.

When it comes to AI warfare, no one seems to have the vaguest idea how to govern it, control it or stop it. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in UK/EU, available now.