
In regulating AI, we may be doing too much. And too little

Success will mean staying focused on concrete problems like deep fakes

    • Actual harm related to AI, such as human impersonation, and not imagined risk should serve as a guide to how and when the state should intervene.
    Published Wed, Nov 8, 2023 · 05:48 PM

    WHEN United States President Joe Biden signed his sweeping executive order on artificial intelligence (AI) last week (Oct 30), he joked about the strange experience of watching a “deep fake” of himself, saying, “When the hell did I say that?”

    The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand – human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high-school girls. These everyday episodes underscore an important truth: The success of the government’s efforts to regulate AI will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.

    Biden’s executive order outdoes even the Europeans by considering just about every potential risk one could imagine, from everyday fraud to the development of weapons of mass destruction. The order develops standards for AI safety and trustworthiness, establishes a cybersecurity programme to develop AI tools, and requires companies developing AI systems that could pose a threat to national security to share their safety test results with the federal government.
