Why it’s hard for humans to have the final say over AI

We need to learn from the mistakes we have made in the past

    • The illusion of human control can be more dangerous than its clear absence, says the writer. PHOTO: REUTERS

    Published Wed, Mar 18, 2026 · 06:30 AM

    THE riskier the setting in which powerful artificial intelligence (AI) systems are deployed, the more we seem to reach for an intuitive solution: that humans should always be the ones to make the final decisions.

    In the context of war, the public and regulatory debate (and one source of the recent row between Anthropic and the US government) has focused on the seemingly binary distinction between fully autonomous weapons and those that are subject to “human control”.

    In the corporate world, too, the deployment of semi-autonomous agents has led companies to turn to experienced humans as the ultimate decision-makers. Amazon, for instance, reportedly requires more senior engineers to sign off on AI-assisted changes made by junior and mid-level software engineers.
