AI whistleblowers are stepping up; it’s about time
The death of some gag orders and a sophisticated safety team in Britain are encouraging signs that artificial intelligence is being made more accountable
HERE’S an advance in artificial intelligence (AI) that should benefit all of us: it’s getting easier for builders of AI to warn the world about the harms their algorithms can cause – from spreading misinformation and displacing jobs to hallucinating and enabling new forms of surveillance. But who can these would-be whistleblowers turn to?
An encouraging shift towards better oversight is under way, thanks to changes in compensation policies, renewed momentum to speak out among engineers and the growing clout of a British government-backed safety group.
The financial changes are the most consequential. AI workers face the ultimate First World problem: they can make seven or eight figures in stock options if they stay with the right company for several years – and if they keep quiet about its problems when they leave. Get caught speaking out, according to recent reporting by Vox, and they lose the chance to become millionaires. That has kept many of them silent, according to an open letter published this month by 13 former OpenAI and Google DeepMind employees, six of whom remained anonymous.