The Business Times
BRANDED CONTENT

Ethical AI: How to leverage tech yet safeguard business brand safety and customer trust

As artificial intelligence becomes an increasingly critical tool, it's become all the more important for firms to be responsible, transparent and compliant

Published Wed, Jul 26, 2023 · 02:25 AM

WE have seen the headlines: Opaque or biased artificial intelligence (AI) models leading to public backlash and regulatory scrutiny. Recently, a US bank even experienced revenue loss from attrition among its private wealth clients, after ethical concerns about its AI tool surfaced.

As the world increasingly harnesses AI to solve complex problems, trust in AI is critical not just for financial institutions or tech giants, but businesses across all sectors. Whether assisting in patient diagnoses in healthcare, enabling personalised learning in education, or identifying defects in manufacturing, the need for confidence in AI technology has never been more pressing.

"To earn user trust, companies need to manage the risks associated with AI usage, beyond just training AI models," says Mr Colin Tan, general manager and technology leader of IBM Singapore. "Businesses need to ensure that their deployed systems comply with relevant guidelines and regulations, while proactively addressing ethical and fairness issues."

Although AI research has been ongoing for decades, most organisations are new to deploying AI models operationally. Unlike traditional IT systems such as database servers or network firewalls, AI systems can make incorrect recommendations. Consequently, businesses must fully understand these systems before implementing them.

"Clear, explainable processes and results are vital to avoid inadvertent bias related to race, gender, age, or other factors. Businesses need to understand the rationale behind an AI algorithm's output, which becomes critical when responding to compliance auditors or customer queries about specific AI recommendations," adds Mr Tan.
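Such bias checks can be made concrete. The sketch below illustrates one widely used fairness metric, the disparate impact ratio, applied to a hypothetical loan-approval model's decisions (the scenario and numbers are assumptions for illustration, not drawn from any IBM product):

```python
# Minimal sketch of a common fairness check: the disparate impact ratio,
# which compares favourable-outcome rates between a protected group and
# a reference group. Under the "four-fifths rule", a ratio below 0.8 is
# often treated as a signal of potential adverse impact.

def disparate_impact(outcomes_protected, outcomes_reference):
    """Ratio of favourable-outcome rates (1 = favourable, 0 = not)."""
    rate_protected = sum(outcomes_protected) / len(outcomes_protected)
    rate_reference = sum(outcomes_reference) / len(outcomes_reference)
    return rate_protected / rate_reference

# Hypothetical loan-approval decisions from an AI model:
protected = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8 approved = 0.375
reference = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved = 0.75

ratio = disparate_impact(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Potential adverse impact: review model for bias")
```

A check like this gives auditors and customers a number to discuss, rather than an opaque model output.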

Breaking new ground can be risky. AI can be just plain wrong, hallucinating or producing toxic results, especially in business settings.

Mr Tan identifies notable AI risks: maintaining confidentiality of data used for training AI models, securing AI algorithms from unauthorised modifications, and ensuring only accurate and non-proprietary data are used for training.

"The potential misuse or misinterpretation of AI is a genuine concern. Unchecked, it can lead to disastrous outcomes, particularly in fields like healthcare, finance, and justice," he adds.

Every company's AI journey begins with an ethical and trustworthy approach to harness the technology's potential. PHOTO: IBM

The risks inherent in AI adoption call for a regulatory framework and governance principles to guide its proper deployment. Trustworthy AI begins with AI ethics, a discipline that aims to maximise AI's beneficial impact by prioritising human agency and well-being and mitigating adverse outcomes.

Ethics should be central to AI, even if it means challenging corporate expediency. A recent IBM Institute for Business Value survey, conducted with Oxford Economics and involving 1,200 executives in 22 countries, found a disconnect between intention and implementation of AI ethics.

"Although more than half of the organisations surveyed have begun to integrate AI ethics into their business ethics approach, the urgency to meet market expectations means that they are not implementing these intentions fast enough," says Mr Tan.

With the European Commission proposing a regulatory framework potentially impacting the industry similarly to the General Data Protection Regulation (GDPR), it's crucial for organisations to embrace ethical principles in their AI development and usage today. This foresight will help them comply with future regulations and potentially avoid the cost of redesigning or recreating models created without AI ethics principles and human values in mind.

As technology advances over the past decade suggest, today's impressive AI capabilities will soon be surpassed by even greater ones. AI is already being integrated into business applications, delivering increased insights and productivity.

"To fully realise AI's potential, we must build it on a foundation of trust and transparency," says Mr Tan. Enter solutions like IBM's watsonx.governance, a comprehensive toolkit designed to help businesses manage risk and ethics concerns, paving the way for responsible AI.

"Regardless of where they are on their AI journey, organisations can rely on watsonx.governance to operationalise AI governance, accelerate model building, manage risk, and ensure compliance by automatically documenting model lineage and metadata."

According to Mr Tan, watsonx.governance also aids regulatory compliance by incorporating the necessary validations to help ensure models are fair, transparent, and compliant.
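To illustrate what documenting model lineage and metadata can look like in practice, here is a minimal "fact sheet" record of the kind a governance toolkit might capture. The field names and values are assumptions for illustration only, not the watsonx.governance API:

```python
# Illustrative model fact sheet: a structured record of a model's
# provenance, intended use, and approvals, of the kind an AI governance
# process might maintain for auditors and regulators.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModelFactSheet:
    name: str
    version: str
    training_data: str
    intended_use: str
    fairness_checks: list = field(default_factory=list)
    approved_by: str = ""
    approved_on: str = ""

# Hypothetical entry for a loan-approval model:
sheet = ModelFactSheet(
    name="loan-approval-classifier",
    version="1.2.0",
    training_data="de-identified loan applications, 2019-2022",
    intended_use="pre-screening consumer loan applications",
    fairness_checks=["disparate impact ratio >= 0.8"],
    approved_by="Model Risk Committee",
    approved_on=str(date(2023, 7, 1)),
)
print(asdict(sheet))
```

Keeping such records alongside each deployed model is what makes it possible to answer a compliance auditor's question about where a recommendation came from.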

Businesses adopting AI technology stand to gain a significant edge over those that do not. Armed with the ability to establish responsible, transparent, and explainable AI workflows, they need not delay their AI journey.

He adds: "AI offers an optimistic future. With tools like IBM's watsonx.governance, we can navigate the challenges and make the most of the opportunities presented."

To discover how watsonx.governance can help shape a future of ethical AI in your business, click here.
