The messy reality of AI safety
Should AI accelerate? Decelerate? The answer is both
THE near-implosion of OpenAI, a world leader in the burgeoning field of artificial intelligence, surfaced a conflict within the organisation, and the broader community, over how fast the technology should advance and whether slowing it down would make it safer.
As a professor of both AI and AI ethics, I think this framing of the problem omits the critical question of the kind of AI that we accelerate or decelerate.
In my 40 years of AI research in natural language processing and computational creativity, I pioneered a series of machine learning advances that let me build the world’s first large-scale online language translator, which quickly spawned the likes of Google Translate and Microsoft’s Bing Translator. You’d be hard-pressed to find anyone arguing against developing translation AIs. Reducing misunderstanding between cultures is probably one of the most important things humanity can do to survive escalating geopolitical polarisation.