COMMENTARY

Businesses must adopt AI to survive, and change the ways they hire and train too

Stefanie Yuen Thio
Published Wed, Jun 7, 2023 · 05:50 AM

IT had to happen – but how embarrassing that it was a lawyer who did it.

A New York lawyer is facing disciplinary proceedings for using generative artificial intelligence (AI) to do legal research that threw up bogus cases. Steven Schwartz of Levidow, Levidow & Oberman admitted to using ChatGPT for a brief that cited six non-existent court decisions in a personal injury case against Avianca Airlines.

The attorney’s nightmare was created by a phenomenon known as “hallucinations”, in which AI fabricates plausible-sounding information and presents it as fact.

It is easy to blame the lawyer for cutting corners – he clearly failed to live up to his professional responsibilities. Yet there is also a compelling case for businesses to rely increasingly on AI for its speed and – once the hallucinations are resolved – its presumably greater accuracy.

Businesses that do not learn to reap its benefits will become obsolete, much like a courier company with a sentimental attachment to horse-drawn carriages.

The day is not far off when AI will attain at least as high a rate of accuracy as the average second-year associate in a services company, but with fewer spelling errors and zero complaints about work-life balance. Even if it does not, other software will be developed to correct the work of the first chatbot. That is the inexorable march of technology.


As AI evolves, human workers will have to as well.

Backbreaking tasks such as trawling through voluminous data will sit squarely in technology’s domain. It will outperform humans in assignments informed by historical information, be it a research brief, an opinion, marketing material or even news copy.

Where technology is not yet adept is in interfacing with people – innately complex, unpredictable and not always rational – and intelligently forecasting future human behaviour.

For businesses, in particular service providers, productivity savings from AI can be applied to equipping human workers with better people skills and enhanced critical thinking.

Because technology looks backwards, humans will have to be forward thinkers, using validated AI-generated data while integrating an understanding of how human beings think and respond, to deliver a more sophisticated future-facing solution.

This is going to be an especially big challenge for the Singaporean worker – the product of an education system that tests the accuracy of answers rather than analysis or persuasion.

It is easier to score well in science-based subjects than in the humanities, so our top students tend to be science-trained. Many of our C-suite leaders will therefore have the same skills that technology is best at. And because human psychology predisposes decision-makers to promote and value their own abilities, a vicious circle results.

So, the way we train our future workforce must change. Without detracting from core subjects, vocational education will have to equip students to leverage AI while focusing on what it cannot do well.

Journalists’ value will be in their commentary, not straight news reporting; lawyers’ in their solution creation rather than expounding the law. Problem-solving, critical thinking and people skills should be prized by employers.

At the same time, AI poses some serious dangers. Disinformation is one. As AI becomes increasingly relied upon, bad actors will have the opportunity to spread false narratives and propaganda.

Historian and bestselling author Yuval Noah Harari warned in a recent essay in The Economist that “people may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion”, referring to fear and loathing created and nurtured by machines.

A decade ago, the Arab Spring uprisings were facilitated by viral social media posts. Imagine if those posts had contained inaccurate information or, worse, had been planted by bad actors intent on fomenting revolution.

AI also has the potential to do great harm to individuals. AI-generated fake videos are becoming more common and convincing.

Deepfakes have been used to put women’s faces, without their consent, into pornographic videos. The technology will soon allow anyone with a computer to create deepfakes with a few photos. The potential for harm is limitless, from discrediting politicians to influence an election, to extortion and simple revenge porn.

These harms now overwhelmingly target women, meaning gender equality could be set back by generations. If the physical world is an increasingly equal place for women, the Internet represents the Dark Ages for gender equity.

Hidden bias is another concern. AI trawls the Internet and produces commentary based on available information. Whereas search engines give the user a chance to evaluate the veracity of articles and posts, ChatGPT confidently presents its commentary as fact.

Bias, conscious or not, could enter the system in two ways: first, in the programming of the large language model; and second, through the data relied upon.

AI could steadily destroy what we think of as truth and fact. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.

Given these varied and serious risks, governments must take coordinated and decisive action and set boundaries on AI’s development. To the extent possible, that action must be global, because the Internet is accessible across jurisdictions.

Such boundaries could include limiting AI companies’ access to computing power, withholding sensitive information, and licensing developers.

Large businesses are already requiring vendors to confirm how AI is used in the provision of goods and services. Governments and regulators should act, too. Even Sam Altman, chief executive of OpenAI, the creator of ChatGPT, said in his testimony before the US Senate that the risks are serious enough to warrant government intervention.

The possible benefits of AI are legion, but its potential for harm in all areas of human existence is incalculable. The problem with technology is that human beings have come to accept its advancement as inevitable. While we still have the ability to control the development of AI, let us act to harness it and set some boundaries, before we no longer can.

The writer is joint managing partner at TSMP Law Corporation
