Instead of pausing AI development, aim to mitigate risks

Singapore’s Model AI Governance Framework provides helpful principles that are applicable across borders

Shaun Leong
Published Sat, Jul 15, 2023 · 05:00 AM

ALMOST a decade ago, theoretical physicist Stephen Hawking warned that the development of artificial intelligence (AI) could jeopardise humanity’s existence, saying: “It would take off on its own and re-design itself at an ever-increasing rate… it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.”

More recently, Tesla chief executive officer Elon Musk said that AI poses greater risks than nuclear weapons. ChaosGPT, a modified version of the open-source Auto-GPT agent built on OpenAI’s language models, identified nuclear armageddon as the most efficient way to bring an end to humanity.

Influential figures, including Apple co-founder Steve Wozniak, have signed an open letter calling for a pause in the development of advanced AI systems.
