The parrot argument about AI is dead. The question now is permission

Adopting AI does not guarantee safety. But not adopting it guarantees exposure

    For many decision-makers, beliefs about AI capabilities will not matter. IMAGE: PIXABAY
    Published Thu, Apr 30, 2026 · 01:08 PM

    SIX months ago, the conversation was whether artificial intelligence mattered at all or had any real use. Now, it is whether anyone can afford not to have it at almost any price.

    That is a strong claim. It needs unpacking because the implications reach beyond the technology debate itself. They reach into how companies invest, how governments regulate, and how nations protect their critical infrastructure.

    Remember when serious people said AI was just autocomplete? A stochastic parrot, rearranging tokens by probability, dressed up in enough confidence to fool the gullible. That argument had a moment. It survived for a while on the strength of being unfalsifiable.

    Point to a capability, the parrot-defenders said it was pattern matching. Point to a benchmark, they said it was data contamination. Point to a novel output, they said the model had seen something close enough.

    The position was load-bearing for a particular kind of commentator. Without it, they would have had to update.

    Claude Code, released a few months ago, was the quiet end of that debate. A parrot does not write production code that runs, debug it when it fails, refactor it when asked, and ship a working system. When programmers themselves started shipping AI-generated code as a normal part of their workflow, the parrot position retreated to seminars.


    What followed has been faster than any reasonable forecast. The latest frontier models, released into closed partnerships, autonomously identified novel vulnerabilities in widely audited codebases that humans had examined for decades. They found bugs nobody was looking for.

    The cybersecurity capability was not designed. It emerged. The models were scaled up for general code reasoning, and a specialist fell out the other side.

    What the parrot crowd had right

    That AI is nothing but a statistical arrangement is, in fact, a valid point. It is worth saying out loud. These are equations. Matrices of numbers, trained by gradient descent, running on silicon. There is no ghost.

    That concession is supposed to be the parrot-defender’s trump card. It is actually the opposite.

    Equations have a property that humans do not. They can be copied. A Feynman or an Einstein was non-replicable. If one country had Oppenheimer, another country could not simply hire a similar mind next quarter.

    Genius clustered slowly, and strategic advantages built on genius decayed slowly. That was the quiet architecture of technological competition for most of the twentieth century.

    Equations do not respect that scaffolding. A training pipeline that produces a frontier model in one lab will produce functionally equivalent capability elsewhere within months. Not as a possibility. As a diffusion curve that ignores lab boundaries.

    Einstein did not scale. Models do.

    Built stronger, without a purpose

    Unlike most products, every AI model is built to be bigger or more efficient than its predecessor, yet even its makers have little idea what it will do. You cannot specify these systems when you plan them. Often, you cannot specify them even after they are built.

    For most engineering, the design is the explanation. For AI models, the only explanation is the behaviour observed once training is done. Scale the parameters by ten per cent, and capabilities appear that nobody was training for.

    The capability curve does not just go up. It produces new capabilities at each step, and the steps are not on anyone’s roadmap.

    So any defensive posture pegged to today’s capability becomes obsolete by the time it is deployed. You are not planning against a known threat. You are planning against a frontier that keeps producing threats nobody trained it to produce.

    When model makers turn into arbiters

    Realising what they have unleashed, frontier labs have begun granting restricted access to a select group of partners so they can prepare. Major banks, cloud providers, critical infrastructure operators. They are using these tools defensively, to patch critical software before equivalent capability appears in less careful hands.

    This is responsible. It is also the source of the next problem.

    There is no law of nature that says the next lab will be as careful. What is nearly certain is that what Anthropic’s models do today will soon be done by many others. The next lab might be in a different jurisdiction, owned by a government that wants the capability for offence. The containment strategy works only if it is universal. It will not be universal.

    Every containment strategy exposes itself in what it excludes.

    Every bank not on the partner list, every hospital network, every utility, every mid-sized enterprise running legacy systems, is in a position where its vulnerability is known to the defenders inside the fence but not to those outside it. That is an uneven playing field created by a safety decision.

    The geopolitical version is larger.

    Access is effectively decided by a small number of American companies operating under American law. A government that finds itself outside the fence, whether a European ministry, an Asian central bank, or a Latin American regulator, has uncomfortable options. It can wait. It can lobby. Or it can commission its own lab to build an equivalent capability at whatever speed the national balance sheet allows.

    A Chinese lab was going to scale regardless. A European lab may now scale because the closed partnership did not include Brussels. An Indian lab may now scale because it did not include Delhi. The containment that looks responsible from inside the fence looks, from outside, like a reason to build the capability at double speed.

    National governments will not leave this decision with the labs for much longer. The first discussions on export controls, or, for that matter, nationalisation, may not be too far away.

    Iron cuts iron

    There is an old Hindi phrase: lohe ko kat-ta loha. Iron cuts iron. It captures the structural point.

    Once a frontier model can generate attacks at a scale no human team can match, the only thing that defends against AI-native offence is AI-native defence. Architecture still matters. Zero-trust networks still matter. Neither survives on its own against a model that can autonomously find zero-days in the authentication library itself.

    When matching an AI-powered offence requires deploying an AI-powered defence at the same frontier, AI has crossed from competitive advantage into existential need. Parity itself has become expensive.

    The chief financial officer of a mid-sized bank does not need AI to beat competitors. They need AI so that their bank is still operating in eighteen months and its client information has not been leaked. The head of IT at a hospital network does not need AI to win procurement awards. They need AI so that patient records are not encrypted and held for ransom.

    The want-to-need transition is forced, not chosen. Adopting AI does not guarantee safety. But not adopting it guarantees exposure.

    The standard of care is being rewritten

    The standard of corporate care is being silently rewritten. Gross negligence in 2026 looks exactly like prudent management in 2024.

    When a financial institution is gutted by an automated exploit that a frontier defensive model would have caught, the subsequent lawsuits will not debate the IT budget. They will debate whether the board discharged its duty of care in a world where defensive AI had crossed into the category of mandatory.

    For many decision-makers, beliefs about AI capabilities will no longer matter. They do not need a plan built around AI’s potential achievements or benefits. And they will pay less attention to surveys claiming AI is hype when making their decisions in the months ahead.

    Those who relied on the parrot argument, or the scaling laws, or the South Sea bubble analogy to understand what AI will not do, were never really arguing about AI. Their argument was about permission. Permission not to update, not to reallocate, not to think carefully about what was coming.

    That permission has been revoked. The question is no longer whether to update. It is whether the update happens by choice or by consequence.

    The writer is the chief executive officer of GenInnov, a Singapore-based global innovation investment company

    This is an adaptation of an article published on https://www.geninnov.ai/blog

