When principle meets power: the Anthropic-Pentagon stand-off
What is the future of AI governance – especially in military and national security contexts?
The confrontation between Anthropic and the US Department of Defense (DOD) that erupted in February is not merely a contract dispute.
It is a defining moment in the long-brewing tension between artificial intelligence (AI) companies and the national security establishment, a reckoning over who gets to set the rules for how the most powerful technology of our age is deployed in war and surveillance.
The facts are now widely reported. Anthropic, maker of Claude, the only AI model deployed on the Pentagon’s classified networks, insisted on two contractual guardrails: its technology would not be used for domestic mass surveillance of Americans, and it would not be used to develop or operate fully autonomous weapons systems.
Copyright SPH Media. All rights reserved.