AI governance

THINKING ALOUD

When principle meets power: the Anthropic-Pentagon stand-off

What is the future of AI governance – especially in military and national security contexts?

Smart rules do not slow agentic AI; they accelerate responsible deployment by making accountability and controls explicit.

Governing agentic AI: when guard rails drive growth

Human judgment is an indispensable feature, not a brake, in agentic operations

In November 2025, Anthropic partnered with Palantir Technologies, a data analytics company that works extensively with the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the US military.

Claude AI helped bomb Iran. But how, exactly?

The lack of visibility on how artificial intelligence is already being used in war is deeply troubling

The technology industry has “perfected” a costly playbook over the past 30 years.

The AI agent trap: How Singapore firms can avoid tech’s costliest pattern

The deployment of AI agents should be a risk management decision at the executive level

Deep public-private collaboration can help create an integrated AI ecosystem in Asean where member states at every level of AI maturity advance together.

Leaving no one behind: A collaborative blueprint for Asean’s AI future

The 11-member bloc must move towards a harmonised governance framework and forge deep public-private partnerships

From left: Demis Hassabis, co-founder and CEO, Google DeepMind; Zanny Minton Beddoes, editor-in-chief, The Economist; and Dario Amodei, CEO and co-founder, Anthropic.

The lab leaders want to slow down on AI. Someone needs to help them

Here is why ‘middle powers’ like Singapore may hold the key to coordinating the advance of AI

Installation art 'Dynamics of a Dog on a Leash'. Watching a machine strain against a chain, and seeing people empathise with its yearning to be free, is a preview of one of the most quietly consequential shifts in AI.

Why do we feel empathy for robots?

The hardest governance problems may not be what these systems can do, but how they make us feel

Answering the fundamental question of how AI should serve humanity demands that we develop, share and nurture a common understanding of our core civic values.
THE BROAD VIEW

Holding the line between people and algorithms

Efficiency alone is not progress; technology must augment our humanity – not erode it

Companies need to be agile in their governance frameworks even as they ensure IT security.

Shadow AI is a growing threat, but one companies can harness

Tightening security in a way that erodes internal trust can drive the behaviour further underground

The current voluntary commitments for AI safety are much like allowing car manufacturers to self-regulate, says AI governance expert Robert Trager.
THE BOTTOM LINE

National approaches to AI safety diverge in focus

Countries face competing incentives in the artificial intelligence race