Rise of the machines

We still have trust issues with AI. How should we regulate it?

IT looks like boom time for artificial intelligence (AI) in Singapore. There are two main economic drivers for AI in Singapore, Mark Findlay, deputy director of the Singapore Management University's Centre for AI and Data Governance, tells The Business Times: the use of automation to supplement labour, and a worldwide push for AI to grow the economy.

Last year, the Republic topped the charts in the maiden Global Cities AI Disruption Index, and the latest Government Artificial Intelligence Readiness Index from Canada's International Development Research Centre. But doubts still linger. An EY poll last month found a regional "AI trust crisis", with three-quarters of Asia-Pacific respondents citing transparency, bias or explainability as barriers to their confidence in AI technology.

Pierre Robinet, an Ogilvy Consulting senior partner who co-founded Singapore think tank Live With AI, also tells BT that concern can arise when "there is a lack of transparency, there is a lack of explainability, and people more and more want trust in the AI reasoning and outcomes". As a wary public recalls the plot of Minority Report - where the police crack down on predicted "pre-crimes" that have not yet taken place - AI advocates note that balance must be struck between useful, innovative solutions, and regulatory safeguards.

Human touch

With the prospect of faster 5G networks around the corner, Singapore is already chugging ahead with AI. "We're seeing an uptake of AI in the transportation and logistics, banking and financial services and public sector in Singapore," says Asheesh Mehra, co-founder of AI solutions startup AntWorks, citing real-time cargo management and round-the-clock price comparisons as examples.

With clients like Changi Airport, homegrown video analytics and AI startup Xjera Labs already targets segments such as security, transport, and smart buildings. Its chief executive, Ethan Chu, expects more aggressive roll-outs in these areas, as well as medical and financial technology.

James Chappell, who heads AI strategy at multinational software firm Aveva, notes that industrial AI can do four key tasks: recognise patterns, detect problems, suggest optimisation, and predict future events.

While such "predictive maintenance" has become a common tool in smart factories worldwide, the concept can also apply to humans.
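To make the idea concrete, here is a minimal sketch of the kind of pattern-spotting behind predictive maintenance - flagging sensor readings that drift far from their recent norm. The readings, thresholds and function names are invented for illustration and are not drawn from any vendor's actual system.

    # Illustrative sketch only: a toy version of the anomaly detection that
    # underpins predictive maintenance. Values and thresholds are invented.
    from statistics import mean, stdev

    def flag_anomalies(readings, window=10, threshold=3.0):
        """Return indices where a reading sits more than `threshold` standard
        deviations from the mean of the preceding `window` readings."""
        flagged = []
        for i in range(window, len(readings)):
            recent = readings[i - window:i]
            spread = stdev(recent) or 1e-9  # avoid division by zero on flat data
            if abs(readings[i] - mean(recent)) / spread > threshold:
                flagged.append(i)
        return flagged

    # Example: steady vibration levels, then a sudden spike worth inspecting
    vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02, 5.8]
    print(flag_anomalies(vibration))  # -> [11]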

For one, an article in the Singapore Civil Service College's Ethos newsletter last year noted that, as "an increasingly sophisticated technology, AI could support preventive policing to bring about a safer community".

But Raymond Chan, senior data scientist at a Singapore tech company and chapter co-leader of non-profit group DataKind, argues that "human oversight should always be present". While humans may not make every decision, they "should be responsible for the process and be able to monitor and control decisions made by the system", he tells BT.

Reed Smith counsel Charmian Aw, who specialises in data and tech issues, adds that - even with AI-driven predictions - there should still be a human policy-maker in the picture. "Just because AI can help detect contagious disease or assess security risk in a person, ultimately the applicable criteria and thresholds to deny entry - and any appeals process that follows - needs to be determined by a human policy-making agent."

The second edition of Singapore's Model AI Governance Framework, launched at the World Economic Forum in Davos in January, includes a tool that ranks use cases by the probability and severity of harm, to assess whether - and how much - human oversight is needed over the AI's actions.
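As an illustration of how such a matrix might be applied, the short sketch below maps a use case's estimated probability and severity of harm to a suggested level of human oversight. The exact mapping and labels here are assumptions for the example, not the framework's own wording.

    # Illustrative sketch only: maps an AI use case's estimated probability and
    # severity of harm to a suggested level of human oversight. The mapping and
    # labels are assumptions for illustration, not taken from the Model AI
    # Governance Framework itself.

    def suggested_oversight(probability_of_harm: str, severity_of_harm: str) -> str:
        """probability_of_harm and severity_of_harm are each 'low' or 'high'."""
        matrix = {
            ("low", "low"): "human-out-of-the-loop (AI may act autonomously)",
            ("high", "low"): "human-over-the-loop (human monitors and can intervene)",
            ("low", "high"): "human-over-the-loop (human monitors and can intervene)",
            ("high", "high"): "human-in-the-loop (human approves each decision)",
        }
        return matrix[(probability_of_harm, severity_of_harm)]

    # Example: a system recommending police deployment to a neighbourhood
    print(suggested_oversight("high", "high"))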

"We will want the AI to be explainable," says Xjera's Dr Chu. "Let's say we need to allocate a lot of police resources to Tiong Bahru. We cannot just trust the AI blindly, saying: 'Oh, just deploy more force'. We need to ask the AI to explain why - is it based on historical data, or what."

Mr Mehra, from AntWorks, also explains that the rules cannot be one-size-fits-all: "The application requirements for AI in healthcare are different from banking requirements... Governments and policy-makers will need to work closely with professional bodies from each industry to better advise the decision-makers with regard to what the technology is needed for, how it will work, and even how it may impact the workforce."

Data safety

But accurate AI is not possible without huge swathes of data - and that reliance throws up fresh issues. Ms Aw names the privacy of personally identifiable data as one of the two key legal concerns around AI, alongside bias, which AI can amplify. As Benjamin Low, regional vice-president at video software vendor Milestone Systems, puts it: "If you don't have enough data to do correct analytics, you will not be able to get the prediction right, no matter how strong your AI tool or AI engine is."

One reason China has become a leader in developing facial recognition is that its cities are allowed to collect a wide pool of images, he says.

While facial images and other biometrics are protected by the 2012 Personal Data Protection Act (PDPA), regulators do not yet have guidelines on how this data can be analysed. Official guides will be released this year.

Xjera's Dr Chu also suggests letting private-sector companies access more data if they can tap the latest in tech tools that confer anonymity.

Since sensitive data may be a must for some types of machine learning - "if you blur the face out, it avoids the PDPA issues and is compliant, but how do you have facial recognition?" - he says that Xjera has turned to "federated databases", where the information comes from disparate sources rather than being stored in-house.
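For readers curious what such anonymisation looks like in practice, the snippet below is a minimal sketch of blurring detected faces in an image before it is stored or analysed. It uses OpenCV's bundled face detector; the file names and parameters are assumptions for illustration and do not describe Xjera's actual pipeline.

    # Illustrative sketch only: blur detected faces in a frame before storage or
    # analysis, the kind of anonymisation step described above. File names and
    # parameters are invented for the example.
    import cv2

    def blur_faces(frame):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            face = frame[y:y + h, x:x + w]
            # A heavy Gaussian blur makes the face unrecognisable while keeping the scene usable
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
        return frame

    # Example: anonymise a single image before it leaves the camera
    frame = cv2.imread("street_scene.jpg")  # hypothetical input file
    cv2.imwrite("street_scene_blurred.jpg", blur_faces(frame))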

Minister for Communications and Information S Iswaran noted in Parliament just last week that, "as AI technology is still nascent, (the ministry) does not have immediate plans to introduce new laws to regulate AI".

But the PDPA has been touted as a key tool in the government's arsenal, alongside guidelines like the updated Model AI Governance Framework and the Trusted Data Sharing Framework.

Yeong Zee Kin, assistant chief executive at the Infocomm Media Development Authority of Singapore, tells BT: "Notwithstanding historical or cultural differences, there are a few core principles that are unique to AI: fairness, explainability, transparency and human-centricity. These have since become the core principles embedded in our Model Framework."

Key to Singapore's policies is the principle that organisations must be accountable for the AI systems they build and use, says lawyer Ken Chia, associate principal at Wong & Leow.

But Nanyang Technological University philosophy professor Andrés Carlos Luco and student Kathryn Muyskens said in a Live With AI white paper, put out last year, that "ethical guidelines for AI technologies need to be more specific and concrete". They wrote: "Rather than a vague promise to 'incorporate privacy design principles', companies and institutions should specify what reasons constitute legitimate or illegitimate invasions of an individual's privacy."

Their argument bears out Dr Findlay's view that "if we believe these principles are important, we must know what they mean" when rolling out ethics-based approaches to AI. He calls for a "bottom-up approach to AI regulation", and not just guidelines tailored to "top-down, end-user, big-corporation" audiences.

In other words, "there's always a human actor, so we need to hold the human actors accountable", Ms Aw tells BT. "We need to hold the AI developers, the AI users accountable."

For instance, AntWorks - which "has a zero-tolerance policy for the unethical use of AI" - decides on deals only after asking whether the algorithm could be put to malicious ends; whether the software is ethically transparent or can be audited; and what the environmental impact of the technology is, says Mr Mehra.

Here for good

Ms Aw notes that AI-enabled predictions sit on "a sliding scale", from useful, like spam filters, to "pesky", like unsolicited targeted Web ads. The far end of the spectrum is "where you have predictions that you want to avoid, so the law has to come in before you reach that stage", she adds, citing "socially unjust assessments" based on prejudices or biased data.

Meanwhile, "people working in AI governance should reach out to the people working in the social sector, not just technologists, as they are the ones much closer to areas of societal concern", says DataKind's Dr Chan.

Returning to the trust deficit that EY found, Christina Larkin, the firm's regional assurance digital trust leader in Sydney, tells BT that a mix of regulatory and non-regulatory tools is needed to roll out trusted AI. Besides updating regulations "to accommodate the specific traits of AI", voluntary standards and independent certification will also give companies an edge, she suggests.

Meanwhile, Mr Chia notes that "a thoughtful and measured regulator that is able to demonstrate an understanding of new technologies, meaningfully engage with industry and communicate its views" will keep the economy digitally competitive. "Singapore may be ahead of many of its regional peers. I think our regulators have actually done very well to meet these challenges."

AI will help to improve access to goods and services, such as finance and healthcare, says Mr Robinet. "It's our responsibility to focus on the good that AI brings, because there is much more good than bad."


Accept or decline?

SOCIETAL acceptance will guide the use of artificial intelligence (AI) in Singapore - with the goal of technology always being to improve Singaporeans' lives, regulators here tell The Business Times.

The public sector last year unveiled plans for National AI Projects as part of a national strategy, with a chatbot to report issues with municipal services and AI-assisted risk assessment for incoming travellers at the border among the initial tranche. These build on earlier efforts, such as the roll-out of iris scans to enhance camera facial recognition at Woodlands Checkpoint in mid-2018.

When it comes to governing AI - including in private-sector applications - the focus of the Infocomm Media Development Authority (IMDA) "is to promote innovation by safeguarding societal trust", according to Yeong Zee Kin, IMDA assistant chief executive for data innovation and protection. He adds: "While this objective exists for any technology, this is especially important with AI as its ability to offer hyper-personalisation can touch consumers in hitherto unexpected ways."

The ability to deliver personalised public services is one reason Singapore's government wants to use AI, says Chng Zhenzhi, director of the National AI Office at the Smart Nation and Digital Government Office.

Better decision-making from data, as well as the efficiency and productivity of automated processes, are the other two reasons. Dr Chng adds: "The thinking behind the strategy, when it first started, is really to demonstrate benefit and also, in order to do so, we need societal acceptance, so of course we want to be able to use technologies that we know can be more accepted by society, rather than something that is controversial."

For example, other countries have seen pushback on technologies such as facial recognition, as well as concerns over the use of AI to make decisions on issues such as judicial sentences or eligibility for social support.

"We also are not thinking about areas where the harm will outweigh the benefits," Dr Chng tells BT, even while noting that "we're also aware of some of these concerns where AI is applied in different countries".

She explains: "For example, using AI to predict who should be sentenced and how long the sentence should be - I think that, for now, isn't even on our radar."

Mr Yeong, who is also the deputy commissioner of the Personal Data Protection Commission (PDPC), notes: "Technologies are neutral. The risks lie in how they are used and for what purpose." He points to sections in the Model AI Governance Framework that address risks such as "unintended discriminatory decisions".

"Don't forget that we already have laws in place," says Reed Smith lawyer Charmian Aw, pointing to the Unfair Contract Terms Act and the Consumer Protection (Fair Trading) Act. "I think it's important to get it right. You don't want to - for the sake of legislating - legislate the use of AI now, but without actually understanding where the scale needs to slide to."

Former minister Yaacob Ibrahim recently said in Parliament that the everyday use of technology is raising questions such as "Can we trust our government to use our personal data for the benefit of all Singaporeans and not some political agenda?"

But Eleonore Ferreyrol-Alesi, client solutions manager at data protection startup Dathena and co-founder of think tank Live With AI, tells BT that there is less concern here over public-sector use of AI, compared with societies such as France, "because the government is putting a lot of effort into explaining why they are using AI".

"I think IMDA and PDPC have done a very good job with the Model AI Governance Framework, because they consulted widely and what they have is really what is practical and suitable for Singapore," Dr Chng remarks. "Some of these Western countries are more particular and more stringent, compared with countries like China. They established something that is stringent enough for Singapore, yet does not impede innovation."

Noting that factors like use cases and cultural context affect the degree of societal acceptance, Mr Yeong adds: "Ultimately, each country will need to find its level in its society's trust towards AI adoption, and understand how to maintain that trust while advancing its vision of AI adoption."
