Who’s accountable when AI agents go rogue?

As agentic AI reveals new frontiers of risk, cybersecurity and its governance must also evolve

    Published Tue, Oct 21, 2025 · 07:00 AM

    EARLIER this year, security researchers proved that an artificial intelligence (AI) assistant could be hijacked through something as ordinary as a calendar invite. Hidden within the invitation was a set of malicious instructions that, once triggered, caused connected lights to flicker, shutters to open, and files to be accessed without the user’s consent.

    What began as a controlled experiment quickly revealed a new frontier in cybersecurity risk, where AI systems are not just tools for attackers but potential targets in their own right. As AI becomes more autonomous, able to plan and act across digital and physical environments, the implications for security will be far-reaching.

    The line between human and machine agency is blurring, and the time needed to exploit vulnerabilities is shrinking. For businesses and governments, this signals a fundamental change in how digital risk must be managed.

    This shift from passive tools to autonomous agents is ongoing. Agentic systems are already deployed in banking, e-commerce and logistics to streamline operations, detect fraud and make real-time decisions.

    As these agents interact with enterprise systems, other agents and humans, the cybersecurity attack surface expands. Malicious agents can exploit the same interfaces as legitimate ones, mounting threats such as impersonation attacks, prompt injection and data exfiltration (theft). Safeguarding agentic AI in enterprise systems is therefore emerging as a defining cybersecurity challenge.

    Cybersecurity as strategic enabler

    Governments and enterprises are now seeking ways to capture the benefits of AI innovation while managing the growing spectrum of risk it creates. The discussion increasingly centres on how to deploy AI securely and responsibly.

    Traditional cybersecurity frameworks were designed for systems with predictable behaviours. Agentic AI breaks that predictability. It learns, adapts and operates with varying degrees of autonomy, creating new layers of uncertainty that static defences cannot contain.

    For governments and large enterprises operating critical infrastructure, this shift requires a fundamental change in mindset. As agentic AI becomes embedded in decision-making, operations and citizen services, cybersecurity must evolve from a defensive function to a strategic enabler of trusted autonomy.

    This demands a shift to adaptive, context-aware security with clear human oversight and escalation management, moving beyond static defences to maintain the trustworthiness of systems that influence decisions at a national scale.

    Foundational concepts in cybersecurity, such as identity, data and attack surfaces, are taking on new and evolving dimensions. Even established frameworks like “zero trust” are being re-examined as the rise of AI exposes contradictions that demand rethinking and adaptation.

    Reframing digital risk governance

    Indeed, governance frameworks must evolve alongside technology. Two issues are becoming urgent.

    First, the spectrum of autonomy must be understood. Agentic behaviour is not a binary state. Treating a basic automation script as equivalent to a self-directing system results in misplaced controls and uneven risk management. Oversight and safeguards should correspond to degrees of autonomy, not broad labels.

    Second, accountability must be redefined. If an agentic AI system executes an action that is harmful, who should bear responsibility? Without clear boundaries, legal and ethical gaps will persist, and adversaries may exploit them. Boards, chief information security officers and regulators need shared accountability models that reflect how agentic AI systems work.

    These questions are already visible in data governance disputes, algorithmic bias cases and incidents where AI systems have behaved in unexpected ways. Unless accountability frameworks are more clearly defined, these gaps will widen.

    Securing agentic AI in critical infrastructure

    Deploying agentic AI in critical infrastructure raises unique risks. These systems promise gains in efficiency and resilience, but their vulnerabilities could cause cascading disruptions if compromised. Protecting them requires new approaches to securing AI apps and agents. It is essential, therefore, that critical infrastructure entities retain control as they adopt more autonomous AI-driven systems.

    The focus must then be on detecting and stopping attacks on AI models, apps and agentic-AI workflows. Policy controls for AI use, including blocking risky requests, preventing data leaks in apps and detecting unsanctioned AI agents, are also essential.

    Equally important is ensuring resilience by governing non-human identities (NHIs), the digital identities that form the backbone of agentic AI. Enterprises will need to exercise proper oversight of NHIs through access controls, guardrails and traceability.

    Convening for resilience in agentic AI

    Trust will not be built by algorithms alone; technology is only as trustworthy as the intent and integrity of the people who create and govern it. The rise of agentic AI exposes the limitations of current frameworks and demands new approaches grounded in foresight, accountability and collaboration. Businesses that recognise this shift will be better protected and positioned to lead in the next chapter of digital transformation.

    Asha Hemrajani is senior fellow at the S Rajaratnam School of International Studies, Nanyang Technological University. Ian Monteiro is the chief executive officer and founder of Image Engine, organiser of the GovWare Conference and Exhibition 2025.

