Regulating AI will be essential. And complicated
The middle ground would be creating a new federal agency, like the SEC or the FDA, but staffed with experts on artificial intelligence.
WHETHER or not calls for pausing artificial intelligence (AI) development succeed (spoiler: they won’t), AI is going to need regulation. Every technology in history with comparably transformational capabilities has been subject to rules of some sort. What that regulation should look like is going to be an important and complicated problem, one that I and others will be writing a lot about in the months and years to come.
Before we even get to the content of the regulation needed, however, there’s a crucial threshold question that needs to be addressed: Who should regulate AI? If it’s government, which part of government, and how? If it’s industry, what are the right kinds of mechanisms to balance innovation with safety?
I’d like to begin by suggesting some basic principles that should guide our approach, starting with government regulation. I’ll save the question of private-sector self-regulation for a future column. (Disclosure: I advise a number of companies that are involved in AI, including Meta.)