The Business Times

Ethical use of AI can augment, not kill jobs

Robots and machines should enable people to make smarter decisions; they do not make decisions in place of us.

Published Wed, Jul 18, 2018 · 09:50 PM

THE fourth season of Netflix's Black Mirror gave a sneak peek into a very possible future altered by technological advancement, with many episodes exploring how humans interact with artificial intelligence and the issues that arise as a result.

SPOILER ALERT: In "Metalhead", one of the most spine-chilling episodes, a group of human beings encountered a relentless pack of killing machines in the form of autonomous robot "dogs". A cat-and-mouse game ensued and eventually - you guessed it - the "dogs" killed them all.

While there has been no record of such catastrophes so far, last year, over 100 robotics and AI technology leaders, including Elon Musk and Google's DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines.

In their open letter to the UN Convention on Certain Conventional Weapons, the tech leaders said: "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's Box is opened, it will be hard to close."

Closer to home, the Singapore government has announced the formation of an advisory council to delve into the ethical use of AI. The council will assist the government in developing ethics standards and reference governance frameworks, and in issuing advisory guidelines, practical guidance and codes of practice for voluntary adoption by businesses.

AI IS PART OF OUR DNA

In the last three years, OCBC Bank has been driving AI adoption, deploying services that deliver greater value not only to our customers but to our staff as well. One initiative is our in-house developed HR in your Pocket (HIP) app. The app is equipped with an AI-powered chatbot that can answer HR-related queries such as the status of reimbursement of expense claims and annual leave balances.

Earlier this year, the bank launched an AI unit, armed with an initial investment budget of S$10 million over three years, to strategically develop in-house capabilities.

While we aim to elevate the application of AI in our business, we recognise the importance of building trust with our stakeholders through the ethical use of this technology from the outset.

Broadly, here are the four key principles that have steered our AI development:

AUGMENT JOBS, DON'T KILL JOBS

Speaking at the World Economic Forum in Davos in January, tech magnate Jack Ma declared that AI and robots will kill a lot of jobs. He is not alone in this assessment. IT research firm Gartner estimates that, by 2025, a whopping one-third of jobs will be replaced by robots and smart machines.

But we think otherwise.

When developing digital capabilities, we see fast-developing technologies like AI as augmenting, not killing, our jobs - adding value to what our people are already doing. Robots and machines enable us to make smarter decisions; they do not make decisions in place of us.

We see economic benefits and job creation through people and machines working in collaboration. Take our home and renovation loan chatbot, Emma, for example. It was developed to complement the efforts of our mortgage sales teams, not to replace them.

Specifically, it caters to the growing segment of self-serve consumers who prefer the do-it-yourself way and want to avail themselves of loan services regardless of time and place. Emma helped to close more than S$70 million in home loans in less than a year.

FLOURISH ALONGSIDE AI, LEAVE NO ONE BEHIND

While we look to AI to augment jobs, we recognise the inevitable - the skillsets of our people need to keep pace with the technology so that they can flourish alongside it and cater to the evolving needs of our customers.

This is akin to a car mechanic who started plying his trade in the 1980s and has kept abreast of the latest technological advancements to service the new makes of cars his customers bring to him.

Today, he needs to learn to fix a complicated hybrid car model with auto start-stop systems, a far cry from the conventional fuel injection engines he was used to decades ago. Within the next three years, he may have to upgrade himself again - so as to repair flying cars (who knows?). The learning journey never ends.

In the same vein, we are conscientiously taking steps to ensure that our people have the competencies to thrive, and to create a strong learning culture that encourages a mindset receptive to learning, unlearning and relearning.

Our S$20 million Future Smart programme, launched in May this year, is a testament to this principle - to develop the digital skills of all 29,000 employees of the OCBC Group globally.

KEEP AI FAIR, MINIMISE BIAS

In her book Weapons of Math Destruction, American mathematician Cathy O'Neil shared her concern that AI algorithms - which learn from past data - can increasingly reinforce pre-existing inequality.

For example, if a poor student can't get a loan because a lending model deems him too risky (by virtue of his zip code), he's then cut off from an education that could pull him out of poverty, and a vicious circle ensues.

There is a high risk that blindly adopting AI decision-making black boxes will make the world even more unfair than it now is and widen the social gap between rich and poor.

We want to avoid this.

Our colleagues - subject experts in their particular areas - are just as involved as the AI scientists with whom we collaborate in developing our initiatives. They are our AI trainer-equivalents, continuously validating and providing feedback on the algorithms even after the AI product has been launched.

It may not be a sexy job but it is certainly a meaningful one - it makes us a better, more inclusive organisation.

PROTECT CUSTOMER DATA, UPHOLD TRUST AND INTEGRITY

AI thrives in businesses when there is big data, benefiting organisations and consumers in an exponential way. However, it can go badly wrong too when data is misused; the recent Cambridge Analytica scandal comes to mind.

How companies manage, secure and share consumer data is fast becoming a key factor in their relationship with customers. When done well, companies will be well positioned to capture the most valuable element of that relationship - trust - and to reap the maximum benefits of AI and digital technology. When done poorly, they lose not only customers but their reputation as well.

Our journey with big data started some 15 years ago - well before "big data" became a buzzword - to better serve our customers. Much has been invested both in protecting our customers' data and in how we use it. For example, access to the data is tightly controlled. We strive to use the data to provide customers with products and services that are relevant to them, and not to spam them.

To us, this is not merely a case of fulfilling our fiduciary obligations. It is about our values: upholding the highest level of integrity in everything we do, treating our customers with respect, and consistently dealing with them in a fair and professional manner.

While we strive to advance our capabilities, it will not be done at the expense of our customers and the public in general. A long-term relationship is what we endeavour to build on the basis of integrity and fair dealing.
