From policy to practice: Turning Singapore’s AI ambition into reality
Three business leaders from banking, customer experience and data centre infrastructure share what it actually takes to make AI work reliably, responsibly and at scale
[SINGAPORE] In his Budget 2026 speech on artificial intelligence, Prime Minister Lawrence Wong made it clear that for companies in Singapore, AI is no longer a side experiment, but a core lever of competitiveness.
The emphasis is now on execution and how firms translate access to AI into productivity gains, new revenue streams and sustained advantage. That shift is already being felt in the private sector.
After two years of pilots and proof of concepts, management teams are under growing pressure to justify spend, demonstrate returns and move AI out of sandbox environments into day-to-day operations.
The conversation has evolved from possibility to performance. What works, what scales and what delivers measurable impact will matter.
It is against this backdrop that The Business Times convened this roundtable, bringing together three leaders at the frontlines of that transition.
Bobby Wee, founder and CEO of Racks Central, sits at the infrastructure layer powering AI adoption. Thomas Laboulle, founder and CEO of Toku, works closely with enterprises deploying AI in customer experience. And Praveen Raina, head of group operations and technology at OCBC, brings the perspective of a highly regulated industry where scale, trust and governance are paramount.
Together, their perspectives offer a ground-level view of where AI is actually delivering value today, and where gaps between ambition and reality remain.
PARTICIPANTS:
- Bobby Wee, founder and CEO, Racks Central
- Thomas Laboulle, founder and CEO, Toku
- Praveen Raina, head of group operations and technology, OCBC
MODERATOR: Dylan Tan, senior correspondent, BT
Businesses have spent the last two years piloting AI. The C-suite is now demanding hard return on investment (ROI). Where are you seeing the most tangible economic value right now – cost reduction or revenue generation – and has that answer changed since last year?
Bobby Wee (BW): Right now, the most consistent, provable ROI is still cost and productivity, but the mix is shifting.
On the cost reduction and efficiency side, the “first dividend” is coming from automating Tier-1 support, accelerating software delivery, reducing manual compliance effort, improving incident response, and optimising supply chain and forecasting. These gains are measurable in weeks: faster cycle times, fewer escalations, lower cost to serve.
The “second dividend”, revenue generation, is emerging through AI-enabled product features, personalisation at scale, faster time to market, and new premium tiers such as AI copilots bundled into enterprise services. Industry-specific models that create defensible intellectual property are also beginning to appear.
What has changed since last year is confidence. In 2024 to 2025, many pilots proved technical feasibility. In 2026, boards want unit economics: cost per task, cost per resolved ticket, cost per code change, conversion uplift – hard numbers.
The positive macro signal is that Singapore is doubling down on capability building, with major public investment into AI research announced through 2030, including support for responsible and resource-efficient AI and talent development. That kind of long-horizon commitment tends to unlock private-sector adoption because companies know the ecosystem will be there.
Thomas Laboulle (TL): The most tangible value in customer experience today is still efficiency, but the context has evolved.
Last year, many pilots focused on demonstrating what AI could do. This year, the focus has shifted to whether those capabilities can be sustained in production. Boards and executive teams are asking a more fundamental question: does this system reliably reduce operational load without introducing risk?
In enterprise customer experience operations, efficiency gains remain the most immediate and measurable source of ROI. Improvements in automation rates, first-contact resolution and after-call work show up quickly in operational metrics. Importantly, these gains only matter if they persist beyond pilot environments.
Revenue impact absolutely matters, and it is emerging through higher retention, faster resolution and more consistent service quality. However, those benefits tend to compound over time. Right now, enterprises are prioritising solutions that can move from experimentation to dependable, day-to-day operations.
Praveen Raina (PR): Banking is a business of both scale and trust, and our use of AI is delivering value on both fronts. For our customers, AI is enabling a new level of hyper-personalisation.
We are shifting from reactive servicing to proactive, value-adding engagement and product customisation that deepens relationships and strengthens customer trust. That increased relevance is already translating into measurable economic value.
The operational returns are equally pronounced. Intelligent document processing is shortening turnaround times, while AI-enabled engineering tools have reduced coding and testing effort by 20 to 30 per cent, enabling us to deliver faster and more consistently.
AI-driven anomaly detection and accelerated response times are also strengthening system resilience, ensuring our platforms remain stable and reliable.
The buzzword for 2026 is “Agentic AI” – systems that don’t just summarise text but actively execute workflows. From each of your specific vantage points, are we actually seeing this shift in deployment yet, or are enterprises still largely stuck in the “chatbot” phase?
TL: We are seeing a shift, but it is more subtle than the term “agentic AI” suggests. In customer experience, most enterprises have moved beyond basic chatbots conceptually. The challenge has been translating that ambition into production systems that can operate reliably at scale.
What we see today is not widespread deployment of fully autonomous agents, but rather carefully scoped systems that can execute specific tasks within defined boundaries.
The core issue is not intelligence, but control. Many agentic systems prioritise flexibility and autonomy, assuming that better prompting or reasoning will keep AI aligned. In enterprise customer experience, that assumption breaks down quickly.
When AI is allowed to interpret processes instead of follow them, it introduces what we call “process hallucinations”: skipping mandatory steps, deviating from approved workflows or exceeding authority. This is distinct from the well-known problem of AI generating incorrect text.
Process hallucination occurs when an AI agent confidently executes the wrong sequence of actions, or claims to have completed a step it never performed. In multi-step workflows, even small errors compound rapidly.
In regulated environments, enterprises and government agencies cannot afford AI systems that interpret processes freely. As a result, the most successful deployments today are those where autonomy is introduced incrementally, with clear guardrails and escalation paths.
So while the direction is clear, the reality is that enterprise customer experience is progressing through controlled, supervised autonomy rather than a sudden leap to fully agentic systems.
There is also a significant amount of what the industry is beginning to call “agent washing”, where existing chatbots and robotic process automation tools are rebranded as agentic AI without any meaningful change in capability. Enterprises should look past the labels and ask a simple question: does this system follow governed processes, or does it improvise?
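The distinction Laboulle draws between governed processes and improvisation can be made concrete. Below is a minimal Python sketch, not any vendor's actual implementation: the step names and the authority split are hypothetical. It shows a workflow guard that rejects out-of-sequence actions and escalates steps outside the agent's authority instead of letting the agent improvise:

```python
# Illustrative sketch: enforcing a governed workflow so an AI agent cannot
# skip mandatory steps or exceed its authority. All names are hypothetical.

MANDATORY_STEPS = ["verify_identity", "check_entitlement", "apply_change", "confirm"]
AGENT_AUTHORITY = {"verify_identity", "check_entitlement", "confirm"}  # "apply_change" needs a human

class ProcessViolation(Exception):
    """Raised when an action deviates from the approved sequence."""

class GovernedWorkflow:
    def __init__(self):
        self.completed = []

    def execute(self, step, actor="agent"):
        expected = MANDATORY_STEPS[len(self.completed)]
        if step != expected:
            # Process-hallucination guard: a wrong sequence is rejected
            # outright, never improvised around.
            raise ProcessViolation(f"expected '{expected}', got '{step}'")
        if actor == "agent" and step not in AGENT_AUTHORITY:
            # Authority boundary: the agent cannot perform this step itself.
            return "escalate_to_human"
        self.completed.append(step)
        return "done"

wf = GovernedWorkflow()
assert wf.execute("verify_identity") == "done"
assert wf.execute("check_entitlement") == "done"
assert wf.execute("apply_change") == "escalate_to_human"    # agent blocked
assert wf.execute("apply_change", actor="human") == "done"  # human approval path
```

The point of the design is that ordering and authority are enforced by code, not by prompting: a "process hallucination" surfaces as a hard error or an escalation, never as a silently skipped step.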
PR: We are already seeing the shift. Most banks are moving well beyond the “chatbot” phase to piloting or deploying AI that orchestrates and executes multi-step workflows.
At OCBC, we treat agentic AI not as a buzzword, but with a deliberate and disciplined approach to embedding it across our technology stack.
Autonomous agents are already supporting areas such as automated Know-Your-Customer (KYC) due diligence, where the system actively assesses the legitimacy of clients’ wealth and transactions, and relationship managers review and refine the final output.
As we move from passive assistance to active execution, success hinges not on plug-and-play solutions but on deep integration with enterprise systems, strong governance and coherent orchestration of AI, digital capabilities and data with our people.
This is how we enhance customer experience, scale customer acquisition and ensure our customers remain protected.
BW: We’re absolutely seeing the shift, but it’s uneven. The “chatbot phase” was about answering; the agentic phase is about doing – drafting a proposal, pulling data from systems, creating tickets, running checks, pushing a change end to end.
In production today, the best deployments are bounded agents – narrow scopes, clear permissions, strong identity controls and an auditable trail.
Enterprises are moving fastest in rules-heavy repetitive workflows: IT operations, customer support triage, finance operations, developer workflows and network operations. The reason is simple: you can constrain the agent, measure outcomes and roll back safely.
From Singapore’s ecosystem standpoint, what’s exciting is that governance is starting to catch up to capability. The Infocomm Media Development Authority (IMDA) recently launched a Model AI Governance Framework for Agentic AI, and that is a strong signal that we’re moving from “wow demos” to responsible deployment at scale.
As AI models become more autonomous, the “black box” problem grows. How do we balance the need for advanced, autonomous AI with the strict requirement for explainability – especially in highly regulated sectors like finance and telecoms?
BW: We shouldn’t treat explainability as “all or nothing”. The practical approach is risk-tiering. For low-risk use cases such as marketing copy drafts and internal knowledge search, allow higher autonomy with monitoring.
For medium-risk applications such as service recommendations and operational decisions, policy constraints, evaluation and human review thresholds are required.
For high-risk decisions involving credit, fraud, KYC or telco critical network changes, the requirement must be full traceability, documentation and human accountability, with human-in-the-loop approval at the final stage.
In finance, Singapore has already put strong principles in place. The Monetary Authority of Singapore’s FEAT principles are a good reference point: fairness, ethics, accountability and transparency are exactly the scaffolding you need when models get more powerful.
On the enterprise side, assurance tooling is maturing too. AI Verify Foundation and IMDA’s work around AI testing and assurance – including pilots aimed at codifying norms for technical testing – helps move governance from “policy statements” to repeatable engineering practice.
My view is that the winners will be those who treat governance like cybersecurity: designed in, continuously tested and operationally owned.
TL: The key is to stop treating explainability as a model problem and start treating it as a systems problem.
In regulated environments, explainability does not come from exposing an AI’s internal reasoning, but from ensuring that every action follows a deterministic, auditable process. From an enterprise perspective, what matters is knowing which steps were followed, which rules applied, and why an action was permitted or escalated.
This is why flexibility-first AI frameworks struggle in production. When process control is handled through prompts or emergent behaviour, outcomes cannot be guaranteed or reproduced. In contrast, enterprise-grade systems must encode compliance, authority boundaries and mandatory steps directly into the architecture.
In customer experience, this means separating conversational flexibility from execution logic. AI can engage naturally with customers, but actions that carry risk must be governed by explicit process controls rather than inferred behaviour. When those controls are designed into the system, explainability becomes a property of the architecture, not the model.
This approach allows organisations to benefit from advanced AI capabilities while maintaining the level of accountability that regulators and customers expect.
The regulatory environment is reinforcing this direction.
In Singapore, MAS published its Consultation Paper on Guidelines on AI Risk Management in November 2025, setting expectations for financial institutions to ensure transparency and explainability proportionate to each system’s risk and impact. IMDA launched its new Model AI Governance Framework for Agentic AI in January 2026.
In Europe, full enforcement of the EU AI Act’s high-risk provisions for new systems begins on Aug 2. And ISO/IEC 42001 is established as a certifiable global standard for AI management systems.
For enterprises like Toku operating across multiple jurisdictions, the question is no longer whether to build governance into AI systems, but how quickly they can do so.
PR: In a highly regulated financial services sector, “trusted autonomy” must be a prerequisite, not an afterthought. As AI agents take on longer and more complex workflows, the risk of opacity naturally increases.
We address this by embedding explicit validation checkpoints with strong guardrails directly into the design of every AI workflow. Every autonomous action is underpinned by OCBC’s AI governance framework, ensuring that even the most advanced models remain auditable, explainable and compliant with financial standards.
OCBC has set ambitious targets for employee AI augmentation by 2027. As it moves from internal tools to customer-facing agentic tools, how does the bank manage the risk of an AI agent making a financial decision or recommendation without human oversight?
PR: Our philosophy is that AI should augment human judgement, not replace accountability. Even as we scale AI across the organisation, human-in-the-loop architecture remains firmly embedded in any decision affecting customers’ financial outcomes.
AI provides the speed, scale and intelligence to surface insights, but the final ethical and financial mandate rests with our people. This balance preserves the standards of care and trust that are fundamental to us as a bank.
We see a tension between “Green AI” (sustainability) and the massive compute power required for generative AI. As a bank committed to sustainability goals, how does OCBC reconcile the energy footprint of training or running these massive models with your ESG commitments?
PR: As AI adoption accelerates, the focus must shift from scaling to deploying AI responsibly and efficiently. At OCBC, we approach this through enhancing the energy efficiency of our infrastructure and using technology itself to further optimise energy consumption.
Our data centres currently run on innovative cooling technologies at the server level, which drastically lower the energy footprint of our models. In parallel, we are partnering with institutes of higher learning to explore how AI, machine learning and the Internet of Things can dynamically monitor and optimise energy consumption.
For us, digital innovation must advance hand in hand with our sustainability targets, not come at their expense.
AI racks are now pushing power densities of 50 to 100 kilowatts, far beyond traditional limits. With the Singapore-Batam digital corridor becoming critical, how is the physical infrastructure evolving to handle this heat – and are we moving fast enough towards liquid cooling?
BW: High-density AI is forcing a complete redesign of the data centre as a thermal and electrical machine, not merely real estate.
At 50 to 100 kW per rack, air cooling alone becomes increasingly inefficient or space-intensive. The industry is moving towards a spectrum: enhanced air plus containment for moderate densities, rear-door heat exchangers, direct liquid cooling for high density and immersion for specialised deployments.
Are we moving fast enough? We’re moving because we must. The Singapore-Johor-Batam digital corridor, supported by emerging Special Economic Zone frameworks, is becoming strategically critical for the region.
It allows operators to combine Singapore’s network density, enterprise demand and regulatory maturity with Johor and Batam’s expansion runway – land, power, water availability and scale – while remaining tightly interconnected from a latency, operations and governance perspective.
On the Singapore side, policy is also pushing efficiency harder. IMDA’s Green Data Centre Roadmap and evolving requirements around new capacity are raising the bar on power usage effectiveness and sustainability outcomes. That is exactly what you want when AI demand is growing.
The bottom line is liquid cooling is no longer a nice-to-have. It is a core competency, and the corridor strategy is how we scale responsibly.
AI chips become obsolete much faster than standard servers. Does this create a new challenge for data centres in terms of frequent retrofitting or e-waste management?
BW: Yes, graphics processing unit (GPU) obsolescence creates a challenge, but also an opportunity to modernise how data centres think about lifecycle.
AI clusters have shorter refresh cycles than traditional servers, and the answer is designing for modularity and circularity. That means modular power and cooling infrastructure so upgrades do not require ripping out entire rows; quick-connect liquid loops that make hardware swaps safer and faster; and standardised mechanical and electrical “rails” so new GPU generations can drop in with minimal downtime.
It also means certified reuse and resale pathways, accompanied by secure data sanitisation, so hardware earns a second life rather than becoming waste. The market will increasingly reward operators who can offer compute refresh without disruption and who can demonstrate a credible e-waste and carbon accounting story.
The data centre of the AI era is not just built for uptime; it is built for upgrade velocity and responsible decommissioning.
Toku has long emphasised that Western AI models often struggle with the linguistic fragmentation of our region. As you roll out voice AI agents, how are you solving the “accent gap” and hallucination risks when dealing with diverse Singlish, Manglish or Bahasa nuances in real time?
TL: In practice, transcription accuracy is the foundation of everything that follows. In linguistically diverse regions, small errors at the speech recognition level can quickly cascade into incorrect intent detection, inappropriate responses or broken workflows. This is particularly true in real-time voice interactions, where there is little opportunity for correction.
Real-time voice AI in APAC must handle code-switching, local speech patterns, and telephony-grade audio without degradation. If the system mishears, no amount of downstream intelligence can recover safely. The reality is straightforward: if you are not testing for code-switching, your multilingual voice agent is not production-ready.
This is precisely why we invest in proprietary transcription technology trained specifically for the linguistic complexity and audio quality conditions typical of our target markets, rather than relying on generic global models.
Hallucination risk is managed in a similar way. Rather than relying on the model to infer responsibly, we constrain responses and actions through approved knowledge, defined workflows and automatic escalation when ambiguity arises. The goal is a correct, compliant customer outcome every time, even in messy, real-world conversations.
In the context of customer experience, where does Toku draw the line? When must a human agent intervene, and is that handoff becoming smoother or more complex as AI becomes more confident?
TL: Human intervention is essential whenever judgement, risk or emotional nuance exceeds what an automated system can safely handle. This includes scenarios involving financial decisions, identity verification, regulatory exceptions or heightened customer distress. Critically, these boundaries should be designed upfront and not discovered after deployment.
Industry best practice now centres on five escalation triggers: emotional cues detected through sentiment analysis; explicit customer requests; AI confidence falling below defined thresholds; business rules for high-value or sensitive interactions; and conversation loop detection after repeated failed attempts.
When done well, AI makes the handoff to human agents smoother by preserving context, summarising what has occurred, and routing the interaction appropriately. This allows human agents to focus on resolution rather than reconstruction.
Yet seamless transitions between channels remain the exception rather than the norm across most enterprise customer experience environments, which indicates both the difficulty and the scale of the opportunity ahead.
In mature customer experience deployments, AI does not replace human judgement but protects it by ensuring humans intervene exactly where they add the most value.
There is growing demand for AI-driven sentiment analysis to detect customer frustration. Is there a privacy line here? How does Toku balance the benefit of empathy-driven AI with the “creepy” factor of a machine analysing a customer’s emotional state?
TL: There is a clear line between using sentiment to improve service and using it in ways that undermine trust. In customer experience, sentiment analysis should function as a real-time indicator that an interaction may require additional support or escalation. It should not be used to categorise or profile customers beyond what is necessary to resolve the issue at hand.
Responsible use means minimising data, being transparent about analysis and embedding sentiment strictly within operational workflows.
The regulatory picture is becoming clearer, which is helpful for enterprises navigating this space. The EU AI Act prohibits the use of emotion recognition systems in workplaces and educational settings, with those prohibitions applying from February 2025 and potential penalties of up to 35 million euros (S$51.8 million) or 7 per cent of global annual turnover for breaches.
In Asia-Pacific, no jurisdiction has banned sentiment analysis. Singapore’s approach remains voluntary and principles-based, and IMDA is backing efforts like the MERaLiON Consortium to advance research in multilingual and empathetic AI, including emotion recognition across South-east Asian languages.
The key distinction for enterprises is purpose: sentiment used to improve service quality and trigger appropriate escalation is a very different proposition from sentiment used to profile or manipulate customers.
Trust is central to customer experience, and any use of AI must reinforce – not weaken – that trust.
Last but not least, if we convene this roundtable again in 18 months, what is the one AI topic we are currently obsessed with that will be obsolete?
PR: The conversation will increasingly shift from whether AI systems can autonomously complete tasks to how organisations orchestrate workflows and workforces around them.
This next phase is about redesigning the very fabric of the bank so that humans and autonomous agents operate in a secure, seamless tandem. At the same time, the pace of AI evolution is outstripping traditional governance.
The challenge will be to build agile governance and operating structures capable of keeping pace. Organisations will need to manage the ever-evolving autonomous systems in real time, capturing new value while maintaining a strong and consistent risk posture.
TL: I expect the debate around “chatbots versus agents” will feel dated, and agentic capability will increasingly be assumed. But I would go further: the notion that AI can wholesale replace human agents in customer service will also look dated.
Very few enterprises have actually reduced service headcount because of AI, and some that moved too aggressively are already reversing course. The vision of fully agentless customer service will be quietly retired.
What will replace it is a more grounded conversation about production readiness. The real work involves mapping processes, establishing governance and building operational foundations that rarely make headlines. That is not glamorous, but it is where lasting value is created.
In customer experience, the conversation will shift away from autonomy as a headline feature and towards production readiness: accuracy, control, compliance, and operational resilience. Those are the factors that ultimately determine whether AI delivers lasting value.
The companies that win will not be those with the most impressive demos, but those that can deploy reliably at scale, in production, in regulated environments, across multiple languages and explain to auditors exactly how their AI makes decisions.
BW: The topic that will feel obsolete is the obsession over “which model is best” as a standalone debate – model A versus model B, parameter counts, leaderboard chasing.
In 18 months, the real competitive edge will be orchestration and assurance: how enterprises manage fleets of models and agents; how they validate outputs continuously; how they prevent agent-driven security incidents; and how they govern autonomy across vendors, jurisdictions and critical systems.
The new challenge that replaces it is the reliability and security of autonomous workflows at scale (agentic cyber risk), permissioning, auditability and resilience, alongside the physical reality of power, cooling and grid integration to sustain AI growth.
If we get that right, Singapore can be more than a user of AI – we can be the region’s reference point for trusted AI infrastructure, supported by serious national investment and governance leadership.
Copyright SPH Media. All rights reserved.