Chatbot glitches: When disclaimers won’t save you in court
As Singapore courts signal that unverified AI output is negligent, businesses face a new era of liability
THE conversation around artificial intelligence (AI) liability has moved from theoretical debate to real-world business risk. What began as minor glitches, such as a chatbot inventing a refund policy, has escalated into questions of product liability and professional negligence that are hitting closer to home.
A significant shift occurred in May 2025, when a US federal court allowed a wrongful death lawsuit against an AI business to proceed. The case involves a tragedy where a chatbot allegedly contributed to a teenager’s suicide. The legal argument is a wake-up call for tech deployers: the plaintiff argues the AI is not merely a service but a commercial product that was defectively designed.
Closer to home, the Singapore High Court delivered a sharp warning in September 2025. A lawyer was personally sanctioned for submitting fake case citations generated by an AI tool. The court didn’t just call it a mistake; it labelled the failure to verify the AI’s work as “improper, unreasonable and negligent”.