AI is coming for the past too
In our focus on protecting the present and future from artificial intelligence, we have forgotten about the urgent need to protect the past.
We don’t have to imagine a world where deepfakes can so believably imitate the voices of politicians that they can be used to gin up scandals that could sway elections. It’s already here. Fortunately, there are numerous reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events.
While we have reason to believe the future may be safe, we worry that the past is not.
History can be a powerful tool for manipulation and malfeasance. The same generative artificial intelligence (AI) that can fake current events can also fake past ones. New content may be secured through built-in watermarking, which adds imperceptible information to a digital file so that its provenance can be traced. But there is a world of existing content that has never been watermarked. Once watermarking at creation becomes widespread and people learn to distrust content that lacks it, everything produced before that point can be called into question far more easily.
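To make the idea concrete, here is a toy sketch of how a watermark can hide provenance data imperceptibly in a file. This is an illustrative least-significant-bit scheme, not the robust watermarking that industry provenance systems actually use; the function names and the sample message are invented for this example.

```python
def embed_watermark(pixels: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bit of each pixel byte."""
    # Unpack the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes of hidden data from the pixel LSBs."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

# Usage: each byte changes by at most 1 out of 255, invisible to the eye.
original = bytes(range(64))          # stand-in for raw image data
marked = embed_watermark(original, b"origin")
assert extract_watermark(marked, 6) == b"origin"
```

The point of the sketch is the asymmetry the article describes: files created through such a pipeline carry verifiable provenance, while everything created before the pipeline existed does not, and cannot be retroactively distinguished from a convincing fake.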