Why it’s hard for humans to have the final say over AI
We need to learn from the mistakes we have made in the past
The riskier the setting in which powerful artificial intelligence (AI) systems are deployed, the more we seem to reach for an intuitive solution: that humans should always be the ones to make the final decisions.
In the context of war, the public and regulatory debate (and one source of the recent row between Anthropic and the US government) has focused on the seemingly binary distinction between fully autonomous weapons and those that are subject to “human control”.
In the corporate world, too, the deployment of semi-autonomous agents has led companies to turn to experienced humans as the ultimate decision-makers. Amazon, for instance, has reportedly said that junior and mid-level software engineers require more-senior engineers to sign off on AI-assisted changes.