The AI maestro

Should artificial intelligence "live" alongside humans and learn our values? World-leading expert Ben Goertzel believes so.

"The fact that humanity is doing things it has no comprehension of and with an unpredictable outcome? That's not new." - Ben Goertzel.

IN THE 1942 short story Runaround, Isaac Asimov outlined a set of rules that governed the actions of robots. These rules, known as the Three Laws of Robotics, would go on to form the basis of many science fiction stories that followed.

In a nutshell, the laws state that a robot may not injure a human being or, through inaction, allow one to come to harm. It must also obey a human being's orders unless those orders conflict with the first law, and must protect its own existence unless doing so conflicts with the first two laws. After all, robots don't come cheap.

Almost 80 years on, the concept behind these principles of behaviour lives on as the world sees rapid advances in artificial intelligence (AI) and robotics. As we inch ever closer to the technological singularity - the moment when AI becomes smarter than humans - can we similarly put rules in place to protect humanity?

Renowned AI scientist Ben Goertzel shoots the thought down without a moment's hesitation.

"The Three Laws of Robotics were designed to fail. They were designed to make entertaining fiction. The whole premise of Isaac Asimov's stories that involved the laws was that in places where the three laws broke down, that's what made the stories interesting. If they worked, then the stories would be very boring," he points out.


"I don't think you can come up with a list of fixed ethical rules like that. Similarly, with human beings. If you give a bunch of kids exact rules to obey, they'll just find loopholes in the rules and find cheeky things to do. I mean, just like the tax code - people always find a way to avoid paying taxes and work around the rules."

Therein lies the importance of the spirit of the law, as opposed to the letter of the law, he says.

"If you can't give the AI the spirit of human ethics and human culture, whatever rules you write down are not going to matter. Because that's like having a bunch of dumb adults make a rule and the kids are a thousand times smarter than them," he says matter-of-factly, no sharpness in his tone despite the razor-edged words. "They're going to find a way to work around those rules that the adults didn't even imagine."

A champion for AI among humans

Dr Goertzel, who has been researching AI for more than three decades, believes that the way to a future where humans can co-exist safely with AI smarter than them is to have AI live, work and play alongside people. By sharing situations and experiences with human beings, AI can absorb human values more deeply than any set of rules its creators could impose on it.

In an age where fear of job-replacing automation and unpredictable technology is mounting nearly as fast as the optimism surrounding them, this idea appears almost radical.

He explains: "I mean, that's how you get the spirit of your own ethics across to your own human children. It's by going through real world situations along with them, and when they experience the situations together with you, they understand how you're reacting to them, what you're doing, and why."

It's obvious Dr Goertzel has spent a great deal of time ruminating on the future. The bespectacled 52-year-old AI architect, who was in Singapore to speak at tech summit ConnecTechAsia in June, has published nearly 20 scientific books and over 140 research papers on AI.

He is also the founder and chief executive of SingularityNET, a blockchain-based marketplace for AI algorithms.

For someone whose mind moves at a million miles per hour, Dr Goertzel speaks at a leisurely pace. The world-leading expert skips right past the jargon and goes straight to real-life examples that resonate with most people.

As he lounges in his armchair in a meeting room at the Marina Bay Sands, Dr Goertzel breezes through topics such as benevolent AI, science fiction, blockchain and fear.

So, how far away is humanity from having AI live alongside us?

It isn't so inconceivable, considering there is already a humanoid robot called Sophia who has spoken at hundreds of conferences around the world, appeared on talk shows such as The Tonight Show in the US, and was even declared a citizen of Saudi Arabia and an innovation champion of the United Nations Development Programme.

Sophia was created by Hong Kong-based robotics firm Hanson Robotics, where Dr Goertzel held the position of chief scientist until this year.

She displays more than 50 facial expressions and uses visual data processing and artificial intelligence to interact with people.

Sophia isn't close to human-level intelligence, but she was built to serve as a warm, compassionate interface between humans and AI, says Dr Goertzel.

David Hanson, the founder and CEO of Hanson Robotics, wanted to put a friendly smiling face on the singularity.

Dr Goertzel, too, had ideas of his own about what face he would put on the singularity. "I remember, when we were discussing doing a robot for Walt Disney, I kept trying to get them to do the little old man from the movie Up," he lets on, chuckling at the memory.

"I thought that would work fine because robots don't walk that well. They actually walk a lot like an old guy with a cane. But hey, that's not what the world seems to want."

The business of AI

Dr Goertzel was born in 1966 in Rio de Janeiro to American parents. His AI journey began after he graduated with a bachelor's degree at age 18 from Bard College at Simon's Rock in the US. While living in New York City, where he attended graduate school in applied math, he started doing serious research in cognitive science and AI during his spare time.

Today's state of AI is complicated, says Dr Goertzel, who now resides in Hong Kong. What we often encounter is called narrow AI, which is very good at doing specific tasks, such as playing chess.

Then there's the holy grail of artificial general intelligence (AGI), which refers to AI that can adapt to new situations and new skills the way that a person can.

There isn't currently an AI robot that can walk across the streets of New York or Mumbai without getting hit by a car, Dr Goertzel points out.

But he expects to see a transition to AGI in five to 20 years. As this happens, it becomes increasingly important to evaluate how society uses AI.

"If you look now, what is AI used for? I summarise it as: selling, spying, killing and gambling, basically. It's advertising, it's surveillance, military, then the stock market. So if selling, spying, killing and gambling are the primary things in the mind of the first AGI that emerges, what kind of AGI are we creating?

"So if you have education robots and elder care robots, and if you have AIs that are helping scientists discover things and discover cures for disease and so on, these are probably better things to put into the mind of the first general intelligence, which is probably going to emerge gradually from the narrow AIs that we have now."

Dr Goertzel is under no illusion about what it would take to make this happen.

"If you want to roll out millions of loving, kind, helpful robots, there has to be some business model attached to it, unless some government or some multi-billionaire decided to fund it just for fun," he says with a wry smile.

"So then it becomes, how do you combine these sort of higher aspirations with a workable business model? And that complicates things, but I don't think it's impossible."

Decentralised network

The non-profit SingularityNET project, which Dr Goertzel is leading, uses a decentralised network that allows anyone to source, share and monetise AI services.

In September, networking giant Cisco Systems announced it would host its decentralised AGI project on the blockchain-based platform developed by SingularityNET.

"You don't want a single company in charge of it, because that always leads, eventually, to some problems. But then having said that, in order to grow that in practice, you still need a business model," Dr Goertzel says.

"So we started a separate company called Singularity Studio, which is building business software applications. But the AI for these applications goes into this decentralised network.

"For example, we're working with Domino's Pizza in Kuala Lumpur to use AI to optimise the pizza delivery, because KL has very bad traffic, and the traffic often changes while the pizza delivery is in progress."

One of the greatest bottlenecks of applying AI in practical domains is finding people who can translate a real world situation into an AI problem.

"The hardest type of person to hire is someone who spans the practical world and the world of AI algorithms. I mean, there are more people who can code AI and do the math of AI algorithms than there are people who can look into real-world problems and figure out how to match that with the AI algorithms," says Dr Goertzel.

Many would be familiar with the frustration of dealing with an AI chatbot run by a telecoms company.

"Those are very annoying, actually. They're bad. They don't understand what you want."

Improving the system

Hence, the question is, how can one use existing AI tools to improve the current system?

Dr Goertzel says: "So, if you ask a new graduate with a degree in AI, they probably can't solve that problem. But if you ask it to an expert on customer support, they probably can't solve that problem, either. You need to be able to think about the problem, which is how to better respond to customers. And you need to understand AI algorithms, and you need to connect the two together, right?"

The answer could lie in the telco's databases on the different services it offers, together with a history of what customers have complained about and how those complaints were resolved.

Perhaps one needs to connect an AI that can delve into the history of previous customers' problems, and then use that to guide what the chatbot says to the customer.

"But that's like a problem of connecting the real-world problem with the AI learning and reasoning tools, and that's sort of the bottleneck right now, because there's a lot of AI that can learn and reason and think, if you give it the right sort of problem organised in the right sort of way. But then, the world isn't organised that way."

Of course, once AGI comes along sometime in the future, that problem will go away. But as agonising as dimwitted chatbots are, do we really want a technology that could potentially be smarter than all of humanity?

"If people enjoy being afraid, they are welcome to," says Dr Goertzel. The singularity is inevitable, because there's so much economic value in AI as it develops, he stresses.

At this point, he pauses longer than usual. "I mean, it's, I think…" He pauses, then starts again.

"I mean, fear, as an emotion, is not very necessary. I think we should recognise that there's great uncertainty in what these technologies are going to bring. And certainly, it's worth paying attention to the negative as well as positive aspects.

"The thing is, if you eliminated AI from the scene, there are a lot of other technologies that are rapidly developing. And these technologies (could be controlled by malevolent people in power)... even if you don't have AI, humanity will probably exterminate itself with stupid use of other technologies."

He points out that human beings have been leaping into the partial unknown since the beginning.

"When we created agriculture, and fire, machinery and tools, we also could not predict what's going to happen next. The only difference is that now it's faster.

"Then, the unpredictable dynamics unfolded over many generations. Now, it happens within a single person's lifetime. So the speed is different. But the fact that humanity is doing things it has no comprehension of and with an unpredictable outcome? That's not new."

So things might not end up like in The Matrix, after all?

He says: "The Matrix is funny - the idea that they kept around human beings to use as batteries doesn't make a lot of sense. Surely if you can build the Matrix, you can build a better battery than the person!"


Ben Goertzel

Founder and Chief Executive Officer, SingularityNET

1966: Born in Rio de Janeiro, Brazil

1985: BA, Mathematics, Bard College at Simon's Rock

1988: PhD, Mathematics, Temple University

1998-2001: Chief Technology Officer, Webmind/Intelligenesis

2001-2011: Chief Executive Officer, Novamente

2002-2011: Chief Executive Officer, Biomind LLC

2010-2011: Chief Technology Officer, Genescient Corp.

2011-2016: Chief Science Officer, Aidyia Limited

2015-2019: Chief Scientist, Hanson Robotics

2008-Present: Chairman, OpenCog Foundation

2010-Present: Chairman, Artificial General Intelligence Society

2016-Present: Chief Scientist, Mozi Health

2017-Present: Chief Executive Officer, SingularityNET
