Claude AI helped bomb Iran. But how, exactly?
The lack of visibility into how artificial intelligence is already being used in war is deeply troubling
THE same artificial intelligence (AI) model that can help you draft a marketing e-mail or a quick dinner recipe has also been used to attack Iran. US Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in The Wall Street Journal.
Hours earlier, US President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favour of a more compliant rival. It was used, too, in the January operation that led to the capture of Venezuela’s Nicolas Maduro.
But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one is obliged to.