Chatbot glitches: When disclaimers won’t save you in court

As Singapore courts signal that unverified AI output is negligent, businesses face a new era of liability

    • If your corporate chatbot “hallucinates” and misrepresents or misleads a consumer, your business could be held directly responsible as the publisher of that information.
    Published Thu, Dec 4, 2025 · 07:00 AM

    THE conversation around artificial intelligence (AI) liability has moved from theoretical debate to real-world business risk. What began as minor glitches, such as a chatbot inventing a refund policy, has escalated into a crisis of product liability and professional negligence.

    A significant shift came in May 2025, when a US federal court allowed a wrongful death lawsuit against an AI business to proceed. The case stems from a tragedy in which a chatbot allegedly contributed to a teenager’s suicide. The legal theory is a wake-up call for tech deployers: the plaintiff contends that the AI is not merely a service but a commercial product that was defectively designed.

    Closer to home, the Singapore High Court delivered a sharp warning in September 2025. A lawyer was personally sanctioned for submitting fake case citations generated by an AI tool. The court didn’t just call it a mistake; it labelled the failure to verify the AI’s work as “improper, unreasonable and negligent”.
