The AI safety movement is dead
Public pressure to rein in artificial intelligence may be waning, but the work of making these systems less risky is just beginning
MAY 2024 will be remembered as the month that the artificial intelligence (AI) safety movement died. It will also be remembered as the time when the work of actually making AI safer began in earnest.
Some history: In the mid-2010s, a movement known as “effective altruism” made AI safety a top priority, based on fears that highly advanced AI models could wipe out humanity, or at least cause significant global chaos. Two leading AI companies, Anthropic and OpenAI, set up complicated board structures with nonprofit elements in the mix, intended to keep those companies from producing dangerous systems.
The safety movement probably peaked in March 2023 with a petition calling for a six-month pause in AI development, signed by many luminaries, including specialists in the AI field. As I argued at the time, it was a bad idea, and it went nowhere.