I’ve been deepfaked – and it can happen to you
As our society grapples with the growing scourge of online harms, how the community reacts is crucial
I WAS recently deepfaked.
Photographs and videos fabricated by artificial intelligence (AI) – deepfakes purporting to be me in suggestive poses – appeared on TikTok and showed up on the feed of a colleague, who alerted me immediately.
One synthetic video showed this AI-generated version of “me” coquettishly manoeuvring into a sitting position on the floor, barely hiding what was under the micro miniskirt. Readers who remember the iconic scene in the movie Basic Instinct, where Sharon Stone crosses and uncrosses her legs, will get the idea.
All the deepfake photos and videos were manipulated from authentic photos of me taken from TSMP’s corporate website or legitimate publications where I had appeared. I counted six offending images. On my son’s feed, he found 11. A screenshot of the offending TikTok account sent by my colleague showed yet more deepfaked images that did not appear on my device.
I’m a woman lawyer in her 50s. I am not active on TikTok, barely share anything on Instagram, have a disabled Telegram account, and mainly use LinkedIn to discuss business and community issues. In short, I am not a “high-risk” Internet user; I do not indiscriminately share revealing images that would supposedly make someone a more likely target for such assaults.
If this can happen to me, it can happen to anyone with an online presence.
The survivor’s reaction
I do volunteer work in the area of online harms. Three years ago, I founded SG Her Empowerment (SHE), which established Singapore’s – and Asia’s – first dedicated support centre for victims of online harms, providing counselling support, pro bono legal advice and direct assistance in reporting harms to Internet companies. SHE also conducts research into survivor experience. All this is to say that I am not unfamiliar with the deep psychological impact that such harms can have.
But that didn’t stop the feeling of dread when I learnt about the deepfakes of me circulating on the Internet. Or the slight catch of breath as I hit “play” on my phone and waited for the video to load. Then came palpable relief that there was no nudity.
The strongest, and most enduring, reaction was the feeling of being violated – violent and visceral. Who would do this? Is this some jerk’s idea of funny?
We tell survivors it is never their fault. They didn’t ask for this. But the self-blame followed quickly: “It’s your fault for putting yourself out there”.
And finally, the question of who. Was it someone who hates me? Or was it just a bot randomly picking photos off the Internet and using them for clickbait? As horrible as the thought was that someone out there bore me this degree of animus, that would almost be better than the alternative: my image pimped out by a machine to garner views.
What to do if it happens to you
Chances are the offending post has breached community guidelines of the social media platform.
To start the content removal process, go to the post and choose “Report”. On TikTok, a pull-down menu asks for the reason; in my case, it was “Misinformation”. Do this for all the offending posts.
Get your friends to do the same to multiply the number of reports the platform receives. Internet companies use technology and algorithms to sift through the millions of in-app reports they get daily. The more reports they get on a post, the faster the reaction time.
Next, I sought the help of SHE, which is a “trusted flagger” with the major Internet companies and helps survivors make reports directly to them. Reports made by the SHECARES Centre on behalf of complainants will be dealt with by a human being at the Internet company, and not just an algorithm, speeding up the response.
Still, it took four days for the offending content to be taken down, and the user account to be disabled.
And that doesn’t wipe the images from the Internet. A search on Google still showed the images even after the TikTok account had been removed and the pictures were no longer accessible there. The next step is to send a report to Google asking for the offending images to be removed from search results.
Bystanders’ response
As our society grapples with the growing scourge of, and surge in, online harms, how the community reacts is crucial.
Thankfully, I had friends who knew how to offer support.
“This is awful. I’m so sorry it has happened to you. You didn’t ask for this and it’s not your fault. Here’s how to make an immediate in-app report to start the take-down process. I will also report on my end so that the platform gets multiple points of feedback that the image is deepfaked.”
A good friend who had reminded me how to start the take-down process sent me this message: “I hang my head in shame that my first instinct was to fix the problem and not ask how you are doing. I’m so sorry.”
First responders play an important part in framing the survivor’s mindset and rebuilding their self-confidence.
But sometimes, the response from others can make things worse.
I had written about the incident on LinkedIn. Victims of online assaults report an overwhelming loss of agency. Writing about my experience, hoping it would help others, was a way to regain some of that control.
I received an e-mail a few days later. A local journalist had read my LinkedIn post and posed some questions. The e-mail did not explain how the article would be crafted and ended with an instruction: “Please keep my colleagues copied in your reply”.
Perhaps the journalist was so direct because, by posting about the experience on LinkedIn, I had put myself “out there”. Or perhaps there is a sense that online attacks are not as harmful as physical ones. Both attitudes are common bystander responses, and both are untrue. If this is what survivors go through, it would explain their hesitation to engage with the media.
Survivors’ remedies
Apart from the actions I took, there are no legal remedies currently available to someone who has been deepfaked. The law was written in a time before Internet harms had sprung into malignant life.
If TikTok had not removed the content, there would have been little I could do about it. And even if there were an avenue of legal recourse, without knowing the identity of the person who had posted the content, I would have no one to hold legally accountable.
But the law is changing.
The Online Safety (Relief and Accountability) Bill (OSRA) is currently making its way through Parliament. If passed, someone in my position will have new avenues of recourse.
First, if appropriate action is not taken by the Internet company in a timely manner, the survivor could bring the case to the Online Safety Commission (OSC), a new government agency to be set up. The OSC will have the power to direct the Internet platform to take down the offending content within a certain time frame. Hopefully, this means that the four days my images stayed publicly accessible could be shortened. SHE’s studies show that speed of content removal is survivors’ most pressing concern.
The OSC will also be able to order the Internet platform to provide details within its possession of the identity of the user behind the account where the content was uploaded, and to make that information available to the survivor.
The new Act will create statutory torts, making certain online actions civil wrongs for which the complainant can sue the perpetrator. These include intimate image abuse, image-based child abuse and online impersonation. In my case, I would have an action under the new tort of inauthentic material abuse and could seek civil remedies, applying to the OSC for information about the person(s) behind the faceless user account.
This is only the beginning
What happened to me was not a one-off. On the same site, another woman professional from Singapore whom I know found a similarly doctored video of herself.
According to a 2019 study by Italian cybersecurity company DeepTrace, 96 per cent of deepfake videos online were pornographic and non-consensual. US cybersecurity firm Home Security Heroes reported in a 2023 study that deepfake porn constitutes 98 per cent of all deepfake videos online, with 99 per cent of them targeting women. The study found 95,820 deepfake videos online, a 550 per cent increase from 2019.
Deepfakes are a good business proposition for bad actors. How easy would it be to program a bot to trawl the net for images of individuals with means, and send them a pornographic video with a threat of posting it publicly unless an extortion payment is made? This can be done in seconds, with little human intervention, targeting multiple individuals. If the quantum is small, people might be scared into paying off the perpetrators. Leaders in business, government and the community might command higher ransoms.
Singapore’s forward-thinking laws are a welcome move to improve cyber accountability, but policing the Internet is a quagmire. OSRA, progressive as it is, will be hard to enforce overseas or against anyone hiding behind a VPN (virtual private network). An Internet company with no Singapore operations may not comply with the OSC’s directions. For a long time, even governments could not contact the operators of a heavily used messenger app.
With countries adopting differing approaches to tackling online harms, enforcement is fragmented. Also, legal enforcement necessarily happens after the harm has occurred, so victims still suffer the mental and emotional toll.
Having OSRA spell out the categories of wrongful online activity will help the community define where the line between acceptable and harmful is drawn, but clarity in moral standards will not deter intentional bad actors.
The Internet should be a great equaliser – providing access to information and learning for the less advantaged. But if harmful activity starts to pull people back from engaging on the Internet, that promise of progress suffers. Fixing this is a tall order but if we don’t try, the bad guys win.
The writer is joint managing partner, TSMP Law Corp, and chairperson of SG Her Empowerment