Claude AI helped bomb Iran. But how, exactly?
The lack of visibility on how artificial intelligence is already being used in war is deeply troubling
THE same artificial intelligence (AI) model that can help you draft a marketing e-mail or a quick dinner recipe has also been used to attack Iran. US Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in The Wall Street Journal.
Hours earlier, US President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favour of a more compliant rival. It was used, too, in the January operation that led to the capture of Venezuela’s Nicolas Maduro.
But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one is obliged to.