Instead of pausing AI development, aim to mitigate risks
Singapore’s Model AI Governance Framework provides helpful principles that are applicable across borders
ALMOST a decade ago, theoretical physicist Stephen Hawking warned that the development of artificial intelligence (AI) could jeopardise humanity’s existence, saying: “It would take off on its own and re-design itself at an ever-increasing rate… it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.”
More recently, Tesla chief executive officer Elon Musk said that AI poses greater risks than nuclear weapons. ChaosGPT, an experimental chatbot built on OpenAI’s language models, identified nuclear armageddon as the most efficient way to bring an end to humanity.
Musk and other influential figures, including Apple co-founder Steve Wozniak, have signed an open letter calling for a pause on the development of the most powerful AI systems.