AI governance

Will the step forward in frontier AI mean a step backward in cybersecurity?

Organisational leaders must rethink their approach to managing IT risks

Many users are so eager to capture AI productivity gains that they are cobbling together a messy mix of tools on risky platforms such as OpenClaw.

AI’s future at work looks more like OpenClaw than OpenAI

An eagerness to move faster will leave companies devoting excessive manpower to maintaining systems

As more advanced AI models enter real-world testing, organisations are being forced to contend not just with more sophisticated threats, but with the speed at which they unfold.

Cybersecurity’s Tower of Babel: Why we are still lost in translation

When leadership teams and security functions operate in different languages of risk, critical signals might be dismissed until the problem has escalated

In a world where information is plentiful, judgment – the ability to discern, evaluate and act responsibly – may be more valuable than knowledge alone.

The coming AI-driven ‘abundance’ shock

Singapore has long excelled at navigating scarcity, but the emerging challenge is structurally different

The questions that stall AI roll-outs are practical, not philosophical: Where did this data come from? What rights apply? Can we trace an output back to its inputs when something goes wrong?

Singapore faces AI’s challenge early, and it’s about the data you can prove

Governance that lives only in policy documents fails in production; it must be embedded in how data is collected, shared and used

While voluntary commitments from frontier labs signal intent and set reference points for accountability, soft norms alone cannot bend behaviour towards safety, says the writer.

NEW GLOBAL ORDER

AI governance: The summit stage is necessary but it isn’t sufficient

As the geostrategic environment deteriorates, we must accelerate efforts to boost international coordination and build governance infrastructure

Anthropic has insisted that its technology is not to be used for domestic mass surveillance of Americans, nor to develop or operate fully autonomous weapons systems.

THINKING ALOUD

When principle meets power: the Anthropic-Pentagon stand-off

What is the future of AI governance – especially in military and national security contexts?

Smart rules do not slow agentic AI; they accelerate responsible deployment by making accountability and controls explicit.

Governing agentic AI: when guard rails drive growth

Human judgment is an indispensable feature, not a brake, in agentic operations

In November 2025, Anthropic partnered with Palantir Technologies, a data analytics company that does extensive work for the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the US military.

Claude AI helped bomb Iran. But how, exactly?

The lack of visibility on how artificial intelligence is already being used in war is deeply troubling

The technology industry has “perfected” a costly playbook over 30 years.

The AI agent trap: How Singapore firms can avoid tech’s costliest pattern

The deployment of AI agents should be a risk management decision at the executive level