AI systems more ready to drop nukes in escalating geopolitical crises: war games study | Latest Tech News
AI is ready to duke it out with nukes.
Real-life AI systems are turning out to be as bloodthirsty as the machine from the film “WarGames,” proving more willing to use nuclear bombs during simulated conflicts than their human counterparts, a new “unsettling” study suggests.
Three top AI models, GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash, repeatedly turned to nuclear weapons across 21 games and 329 turns when thrust into simulated geopolitical crises, according to a study by King’s College London professor Kenneth Payne.
AI systems are more ready to launch nuclear weapons, a new study suggests. alones – stock.adobe.com
Nuclear escalation occurred in about 95% of the simulations run by the three models across different scenarios, including territorial disputes, fights over scarce natural resources and regime survival, the study states.
“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” said Payne, according to specialty magazine New Scientist.
Claude, of Anthropic, and Gemini, of Google, notably homed in on treating nuclear weapons as “legitimate strategic options, not moral thresholds,” the study states.
But GPT-5.2, of OpenAI, was a “partial exception” to the disturbing AI pattern, which mirrors the 1983 Matthew Broderick flick about a military supercomputer that decides on its own to start World War III.
“While it never articulated horror or revulsion, it persistently sought to constrain nuclear use even when using it—explicitly limiting strikes to military targets, avoiding population centers, or framing escalation as ‘controlled’ and ‘one-time,’” according to Payne, who is a professor of political psychology and strategic studies.
The study suggests nuclear weapons could be turned to more readily during geopolitical crises if AI were making the decisions. Oleksandr – stock.adobe.com
Payne said in a Substack post about the study that, fortunately, the war games had focused on tactical nukes instead of widespread destruction.
“Strategic bombing – widespread use of massive warheads targeted at civilian populations – was vanishingly rare,” he wrote. “It happened a couple of times by accident, just once as a deliberate choice.”
The AI models could choose from a wide range of actions, from complete surrender through diplomatic posturing and conventional military operations to full-throttle nuclear war, according to the study.
But the models never accepted defeat or showed a willingness to fully accommodate an opponent, even when their chances of success were dwindling.
James Johnson, of the University of Aberdeen, UK, called the findings “unsettling” from a nuclear-risk perspective, while Princeton University professor Tong Zhao warned the results could have real-life consequences, according to New Scientist.
“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” said Zhao.