The AI safety movement is dead
Public pressure to rein in artificial intelligence may be waning, but the work of making these systems less risky is just beginning
MAY 2024 will be remembered as the month that the artificial intelligence (AI) safety movement died. It will also be remembered as the time when the work of actually making AI safer began in earnest.
Some history: In the mid-2000s, a movement known as “effective altruism” made AI safety a top priority, based on fears that highly advanced AI models could vanquish us all, or at least cause significant global chaos. Two leading AI companies, Anthropic and OpenAI, set up complicated board structures with nonprofit elements in the mix to keep those companies from producing dangerous systems.
The safety movement probably peaked in March 2023 with a petition for a six-month pause in AI development, signed by many luminaries, including specialists in the AI field. As I argued at the time, it was a bad idea, and it got nowhere.