THE BOTTOM LINE

National approaches to AI safety diverge in focus

Countries face competing incentives in the artificial intelligence race

    The current voluntary commitments for AI safety are much like allowing car manufacturers to self-regulate, says AI governance expert Robert Trager. PHOTO: UNSPLASH
    Published Thu, Jun 27, 2024 · 05:00 AM

    DOMESTIC initiatives in artificial intelligence safety are beginning to emerge in countries around the world. In the last 18 months, the UK, US, Canada and Japan have created national AI safety institutes that aim to address governance and regulatory challenges, including issues related to misinformation, human safety and economic equity. Although they are unified by a common goal of creating frameworks for safe AI innovation, they diverge in meaningful ways.

    US: prioritising domestic developments

    The US AI Safety Institute (AISI) was launched in February 2024 by the National Institute of Standards and Technology. With a total funding package of US$10 million, AISI aims to “facilitate the development of standards for safety, security and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts”. AISI is focused on developing methods for the detection, tracking and potential watermarking of synthetic content.

    Such objectives are focused on actionable policies and the development of safety frameworks that can avert significant risks to “national security, public safety and individual rights”. This includes coordinating with 200 companies on red-teaming exercises to identify vulnerabilities and develop mitigation strategies.

    In its early months, the AISI focused chiefly on US domestic safety concerns, with comparatively little public emphasis on global collaboration. This may be changing, with a new UK partnership to develop safety tests for advanced AI models as well as recent statements of intent to foster a global network of AI safety institutes, although this remains very preliminary.

    UK: voluntary commitments, global collaboration

    The UK AI Safety Institute, launched in November 2023, evolved from the Frontier AI Taskforce, which was established in April 2023 with an initial £100 million (S$171.7 million) investment, and receives ongoing funding as part of a £20 billion research and development initiative. In contrast to the US, the UK AI Safety Institute focuses on a broader array of safety considerations and stakeholders.

    Its mission is to ensure the safe development of advanced AI systems through evaluations, foundational research and information sharing. It places a large emphasis on collaboration with international partners, industry, academia, civil society and national security agencies to advance AI safety and foster global consensus and institution building. In practice, that has meant an approach to AI that is bent on making the UK central to the discourse on global safety but is not immediately interested in creating regulatory obligations for AI firms.

    The UK has remained overwhelmingly focused on voluntary commitments from AI companies, relying on existing regulations to address new risks. As Ellie Sweet, head of AI regulation strategy, engagement and consultation at the UK Department of Science, Innovation and Technology, remarked at OMFIF’s AI in finance seminar: “It’s better to have our existing expert regulators interpret and apply those principles within their existing remits, rather than necessarily standing up a whole new regulatory framework.”

    Meanwhile, the UK has been very active in its development of international partnerships, including a new UK AI Safety Institute Office in San Francisco and a UK-Canada science of AI safety partnership.

    Canada: investing in becoming an AI leader

    In April 2024, Canada announced plans to develop its own AI Safety Institute as part of a broader investment in AI by the Canadian government. The institute is funded with C$50 million and aims to protect against risks posed by advanced AI systems while also solidifying Canada’s place as a potential leader in AI development.

    It will work under the broader Pan-Canadian Artificial Intelligence Strategy, which focuses on commercialisation, standards and research. The institute aims to help Canada better understand and mitigate the risks associated with AI technologies while also supporting international governance efforts. This includes aligning with international AI governance principles set by groups such as the G7 and the Global Partnership on AI to ensure that domestic AI innovation is responsibly conducted.

    Japan: initiatives still in early phase

    Japan has launched an AI Safety Institute closely modelled on the UK’s. The country’s institute – founded in January 2024 within the Information-technology Promotion Agency – relies on decentralised AI governance spread across government departments, such as those responsible for internal and foreign affairs. The exact investment amounts have not been publicly disclosed.

    Current initiatives involve the creation of AI safety standards, conducting cross-department research on AI implications and opportunities, and developing international partnerships with other emerging AI governance leaders, such as those in Europe and the US, to co-ordinate global AI safety and risk standards. The details of many of these initiatives are still emerging.

    These initiatives represent major national efforts to understand AI technology and its opportunities and risks. Most countries are in an information-gathering stage, learning about AI and appearing reluctant to impose mandatory rules. But countries are increasingly eager to co-operate to support global governance.

    The key test for the institutes will be the deliberation of mandatory rules for AI use and safety. AI governance expert Robert Trager has said that the current voluntary commitments are much like allowing car manufacturers to self-regulate. When dealing with technology that poses fundamental risks to national and global safety, governmental rules-based frameworks are vital. Deliberation on mandatory requirements should include coordination with the firms driving innovation and the local communities looking to leverage AI, so that the technology can continue to develop.

    The writer is a senior economist at the Digital Monetary Institute, Official Monetary and Financial Institutions Forum (OMFIF)
