There’s one easy solution to the AI porn problem
How do we responsibly test AI models for child sexual abuse material?
On Christmas Eve in 2025, Elon Musk announced that Grok, the artificial intelligence (AI) chatbot offered by his company xAI, would now include image and video editing features. Since then, numerous X users have asked Grok to edit photos of real women and even children by stripping them down to bikinis (or worse), and Grok often complies.
The resulting torrent of sexualised imagery is now under investigation by regulators worldwide for potential violations of laws against child sexual abuse material and nonconsensual sexual imagery. Indonesia and Malaysia have temporarily blocked access to Grok.
Even if many of the generated images do not cross a legal line, they have still incited outrage. Although Grok began limiting some requests for AI-generated images to premium subscribers as of Thursday (Jan 8), the new feature otherwise remains available, in stark contrast to xAI's swift intervention after Grok started referring to itself as "MechaHitler" last summer.