OECD policymakers hash out AI policy recommendations

Cambridge, Massachusetts

Hal Abelson, a renowned computer scientist at the Massachusetts Institute of Technology (MIT), was working the classroom, coffee cup in hand, pacing back and forth. The subject was artificial intelligence (AI), and his students last week were mainly senior policymakers from countries in the 36-nation Organisation for Economic Co-operation and Development (OECD).

Mr Abelson began with a brisk history of machine learning, starting in the 1950s. Next came a description of how the technology works, a hands-on project using computer-vision models and then case studies. The goal was to give the policymakers from countries such as France, Japan and Sweden a sense of the technology's strengths and weaknesses, emphasising the crucial role of human choices.
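To give a flavour of the kind of hands-on exercise described - this is an illustrative sketch, not the actual course material - the following Python snippet runs a pretrained image classifier. The torchvision library, the ResNet-18 model and the file name "example.jpg" are all assumptions for the example, and the printed labels reflect human choices baked into the training categories.

import torch
from torchvision import models
from PIL import Image

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# "example.jpg" is a placeholder path, not a file from the course.
image = Image.open("example.jpg")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)

# Report the three most likely labels - each label exists only because
# humans chose it as a category when the dataset was built.
top = probabilities.topk(3)
for p, idx in zip(top.values[0], top.indices[0]):
    print(weights.meta["categories"][idx], f"{p:.2f}")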

The class was part of a three-day gathering at MIT, including expert panels, debate and discussion, as the OECD seeks to agree on recommendations for AI policy by this summer.

The organisation's declarations, when they come, will not carry the force of law. But its recommendations have a track record of setting standards in many countries. Its privacy guidelines, going back to 1980, called on nations to enact legislation to protect privacy and defined personal data as any information that can be used to identify an individual.

The recommendations carry weight because the organisation's mission is to foster responsible economic development, balancing innovation and social protections.

"We're hoping to get out in front and help create some sort of policy coherence," said Andrew Wyckoff, the group's director for science, technology and innovation.

Here are a few themes that emerged at the gathering - ideas that could help shape the debate for years to come.

Rules are needed to make the world safe for AI

Regulation is coming. That's a good thing. Rules of competition and behaviour are the foundation of healthy, growing markets. That was the consensus of policymakers at MIT. But they also agreed that AI raises some fresh policy challenges.

New regulation is often equated with slower growth. But policymakers at the event said they did not want to stop the AI train. Instead, they said, they want their countries fully on board. Nations that have explicit AI strategies, such as France and Canada, consider the technology an engine of growth, and seek to educate and recruit the next generation of researchers.

"Machine learning is the next truly disruptive technology," said Elissa Strome, who oversees AI strategy at the Canadian Institute for Advanced Research, a government-funded organisation. "There are huge opportunities for machine learning in fields like energy, environment, transportation and healthcare."

International cooperation, the attendees said, would help ensure that policymaking was not simply left by default to the AI superpowers: the United States, which is a member of the OECD, and China, which is not.

"We think there can be a new model for the development of artificial intelligence that differs from China or California," said Bertrand Pailhès, the national coordinator for France's AI strategy.

In the view of Mr Pailhès and others, China is a government-controlled surveillance state. In the American model, which comes from Silicon Valley in California, a handful of internet companies become big winners and society is treated as a data-generating resource to be strip-mined.

AI policy is data policy

One specific policy issue dominated all others: the collection, handling and use of data.

Fast computers and clever algorithms are important, but the recent explosion of digital data - from the Web, smartphones, sensors, genomics and elsewhere - is the oxygen of modern AI.

"Access to data is going to be the most important thing" for advancing science, said Antonio Torralba, director of the MIT Quest for Intelligence project. So much data is held privately that without rules on privacy and liability, data will not be shared and advances in fields such as healthcare will be stymied.

AI can magnify the danger of data-driven injustice. Public-interest advocates point to the troubling missteps with the technology - software, for example, that fails to recognise the faces of black women or crime-prediction programs used in courtrooms that discriminate against African-Americans.

In such cases, data is the problem. The results were biased because the data that went into them was biased - facial-recognition training sets skewed towards white males, and crime-prediction programs fed data reflecting the comparatively high percentage of African-Americans in the prison population. "Are we just going to make the current racist system more effective, or are we going to get rid of embedded bias?" asked Carol Rose, executive director of the American Civil Liberties Union of Massachusetts.
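The mechanism can be made concrete with a minimal, hypothetical sketch in Python, using scikit-learn and entirely synthetic data - none of it drawn from the systems mentioned above. A training set skewed 95 to 5 towards one group produces a model that scores well on that group and near chance on the other, even though the model was trained "on everyone".

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    # Synthetic two-class data for one group; `shift` moves the whole
    # group's feature distribution, a stand-in for demographic differences.
    X0 = rng.normal(loc=0.0 + shift, size=(n_per_class, 5))
    X1 = rng.normal(loc=1.0 + shift, size=(n_per_class, 5))
    return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

# Training data skewed towards group A, mirroring a dataset
# dominated by one demographic.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group: the
# under-represented group fares far worse.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")

On a typical run, group A's accuracy is high while group B's hovers near chance - the gap comes from the data's coverage, not from the algorithm itself, which is the point the advocates were making.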

These are issues of both technology design and policy. "Who is being mistreated? Who is being left out?" Mr Abelson asked the class. "As you think about regulation, that is what you should be thinking about." NYTIMES