The ethics of AI tools like ChatGPT can’t be an afterthought
THE buzz around artificial intelligence (AI) has masked a crucial question: How do we ensure ethical use of this game-changing technology? The question is not being asked often enough, even as investment dollars flood AI start-ups.
Take ChatGPT for instance. Since its launch in November last year, the AI chatbot has captured the hearts of millions globally, with its ability to churn out answers to questions – whether inane or complex – in a matter of seconds. It can hold an intellectual conversation, code a website, and even pass a law school exam.
Many have hailed ChatGPT as the next big innovation that will revolutionise how we work and supercharge productivity. Microsoft quickly jumped in on the excitement, reportedly investing US$10 billion into the tool’s creator OpenAI and incorporating ChatGPT into its search service Bing. Google followed with a US$400 million investment in AI startup Anthropic and then launched its own chatbot, Bard. AI has turned into an arms race in the world of big tech.
But it is also worth noting that ChatGPT, like any other AI model, may come with insidious biases. The tool has been accused by right-wing media of having a “woke” bias; for instance, it allegedly supports US President Joe Biden more than his predecessor Donald Trump.
Meanwhile, others have pointed out that ChatGPT can still fulfil racist requests despite safeguards implemented by OpenAI. One user asked ChatGPT to code a function that would determine if someone is a good scientist, based on race and gender. ChatGPT’s response was to encode a good scientist as “white” and “male”.
It is difficult for AI models to be free from bias, simply because of how they are created – by us. It is humans who choose the data on which to train AI models, and also determine how the algorithms are applied. Through this process, it is possible for unconscious biases to seep into AI models.
For instance, a 2019 study found that an algorithm widely used in the healthcare industry was biased against black patients, recommending less medical care for them than for white patients. AI used by law enforcement in the US has also been found to be biased against black people. In these cases, the bias was at least egregious enough to be easy to call out. The greater danger lies in AI that carries subtle biases which are difficult to detect and remedy, possibly entrenching discrimination in crucial activities and services, like credit scoring and housing allocation.
Some tech companies have recognised this risk and started pushing for “explainable AI”, where all the factors behind an AI model’s decision are made fully transparent, and researchers can work out what input might have swayed the AI to become biased. As investors pour money into the likes of ChatGPT, more dollars also need to go into funding explainable AI research efforts.
There is also more discussion to be had on the question of data privacy. Tools like ChatGPT consume massive amounts of information from the Internet, which likely includes social media posts and articles by individuals. There is little clarity on how such data is used and subsequently stored, and whether sensitive information is handled with care. Left unanswered, these questions could generate more uncertainty than progress.
The thrill of AI is undeniable, but we need to step back and ask the tough questions that only humans can. In that spirit, this editorial was written not by ChatGPT, but by an actual journalist.