Society will need to ‘co-evolve’ with artificial intelligence, says OpenAI’s Sam Altman

Yong Jun Yuan

Published Tue, Jun 13, 2023 · 06:39 PM
    • CEO of ChatGPT creator OpenAI Sam Altman and OpenAI’s member of technical staff Rachel Lim speaking at a panel discussion at Singapore Management University on Jun 13. PHOTO: KEVIN LIM, ST

    IN ORDER to mitigate the harm that artificial intelligence (AI) could pose to society, ChatGPT had to be released to the broader public so that it could be further improved, said Sam Altman, chief executive of OpenAI, the company behind the application.

    “We want to minimise (the harm) as much as possible, but we realised that no matter how much testing and red teaming and auditing… people will use things in ways that we didn’t think about – and that is the case with any new technology,” he said at a fireside chat held at the Singapore Management University on Tuesday (Jun 13) as part of the AI research company’s global tour.

    “We really believe that iterative deployment is the only way to do this because if you don’t deploy along the way and you just go build an artificial general intelligence in secret in the lab, and then you drop it on the world all at once, society doesn’t get that time to co-evolve,” he added.

    ChatGPT has been adopted by many businesses in a bid to improve their productivity, but it has also courted controversy because of its tendency to “hallucinate”, or provide false answers.

    Altman said that the company expects to see “continual progress” on hallucinations, and that the issue will be much less talked about a year or two from now.

    It should also become cheaper and more efficient to build and run AI models.


    He noted that over the last 18 months, the company has reduced the cost of running inference on AI models by factors of three and 10 through two separate research breakthroughs.

    Inference refers to the process of using AI models to make predictions using live data and produce usable results.

    “We want to make this stuff so cheap, you don’t even think about using it,” he said.

    In response to a question about how banks can better leverage AI, Altman noted that current AI models are better at being creative than at being “robust in every instance”. This means they would require additional work to be used in contexts that demand greater rigour and reliability.

    He noted that others in the field have been able to get AI to work in a more robust manner, even if it requires “hacks on top of hacks”.

    From a regulatory and policy perspective, he urged regulators to understand the limits of AI first.

    “Understand the limits of the technology before trying to set the policy to make sure that what you want to do is something that is actually possible,” he said.

    Copyright SPH Media. All rights reserved.