THE BROAD VIEW

Why Elon Musk fears artificial intelligence

As AI gets much smarter, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger, he says

Speaking at MIT in 2014, the man behind Tesla - which is known for its self-driving technology - called AI humanity's "biggest existential threat" and compared it to "summoning the demon".

ELON Musk is usually far from a technological pessimist. From electric cars to Mars colonies, he's made his name by insisting that the future can get here faster.

But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014, he called AI humanity's "biggest existential threat" and compared it to "summoning the demon".

He reiterated those fears in an interview with Kara Swisher of Recode, though with a little less apocalyptic rhetoric. "As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger," he told Swisher. "I do think we need to be very careful about the advancement of AI."

To many people - even many machine learning researchers - an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We're still struggling to solve even simple-seeming problems with machine learning.


Self-driving cars have an extremely hard time under unusual conditions because many things that come instinctively to humans - anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road - are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.

Mr Musk is hardly alone in sounding the alarm, though. AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Mr Musk that AI could be very dangerous. They are concerned that we're eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.

If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details of their approaches, but agree on one thing: We should be doing more research.

Mr Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it.

From his perspective, here's what is going on: researchers - especially at Alphabet's Google DeepMind, the AI research organisation that developed AlphaGo and AlphaZero - are eagerly working toward complex and powerful AI systems. And because many people aren't convinced that AI is dangerous, the organisations working on it aren't being held to high enough standards of accountability and caution.

Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: "When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and AI, we don't want to learn from our mistakes. We want to plan ahead."

In fact, if AI is powerful enough, we might have no choice but to plan ahead. Nick Bostrom at Oxford made the case in his 2014 book Superintelligence that a badly designed AI system could be impossible to correct once deployed: "Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."

In that respect, AI deployment is like a rocket launch: everything has to be done exactly right before we hit "go", as we can't rely on our ability to make even tiny corrections later.

Professor Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities - for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.

That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Mr Musk and Dowd for Vanity Fair, Y Combinator's Sam Altman said: "In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonising the universe."

In context, then, Mr Musk's AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism - a belief in the exceptional transformative potential of AI. It's precisely the people who expect AI to make the biggest splash who've concluded that working to get ahead of it should be one of our urgent priorities. THE CONVERSATION