AI policy is tricky. From around the world, policymakers came to hash it out.
[CAMBRIDGE, Massachusetts] Hal Abelson, a renowned computer scientist at the Massachusetts Institute of Technology, was working the classroom, coffee cup in hand, pacing back and forth. The subject was artificial intelligence, and his students last week were mainly senior policymakers from countries in the 36-nation Organization for Economic Cooperation and Development.
Abelson began with a brisk history of machine learning, starting in the 1950s. Next came a description of how the technology works, a hands-on project using computer-vision models and then case studies. The goal was to give the policymakers from countries like France, Japan and Sweden a sense of the technology's strengths and weaknesses, emphasizing the crucial role of human choices.
"These machines do what they do because they are trained," Abelson said.
The class was part of a three-day gathering at MIT, including expert panels, debate and discussion, as the Organization for Economic Cooperation and Development seeks to agree on recommendations for artificial intelligence policy by this summer.
But where are policymakers supposed to even start? Artificial intelligence seems to be everywhere, much hyped, much feared yet little understood. Some proclaim AI will be an elixir of prosperity, while others warn it will be a job killer, even an existential threat to humanity.
The organization's declarations, when they come, will not carry the force of law. But its recommendations have a track record of setting standards in many countries, including guidelines, going back to 1980, that called on nations to enact legislation to protect privacy and defined personal data as any information that can be used to identify an individual.
The recommendations carry weight because the organization's mission is to foster responsible economic development, balancing innovation and social protections.
"We're hoping to get out in front and help create some sort of policy coherence," said Andrew Wyckoff, the group's director for science, technology and innovation.
Here are a few themes that emerged at the gathering — ideas that could help shape the debate for years to come.
Rules are needed to make the world safe for AI — and let AI flourish.
Regulation is coming. That's a good thing. Rules of competition and behavior are the foundation of healthy, growing markets.
That was the consensus of policymakers at MIT. But they also agreed that artificial intelligence raises some fresh policy challenges.
Today's machine-learning systems are so complex, digesting so much data, that explaining how they make decisions may be impossible.
So do you just test for results? Do you put self-driving cars through a driver's test? If an AI system predicts breast cancer better than humans on average, do you just go with the machine? Probably.
"It's very clear — you have to use it," said Regina Barzilay, an MIT computer scientist and a breast cancer survivor.
But handing off a growing array of decisions is uncomfortable terrain. Practical rules that reassure the public could speed the adoption of AI rather than slow it.
"If you want people to trust this stuff, government has to play a role," said Daniel Weitzner, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, who was a policy adviser in the Obama administration.
Everyone, not just the superpowers, wants to shape AI policy.
New regulation is often equated with slower growth. But policymakers at the event said they did not want to stop the AI train. Instead, they said, they want their countries fully on board. Nations that have explicit AI strategies, like France and Canada, consider the technology an engine of growth, and seek to educate and recruit the next generation of researchers.
"Machine learning is the next truly disruptive technology," said Elissa Strome, who oversees AI strategy at the Canadian Institute for Advanced Research, a government-funded organization. "There are huge opportunities for machine learning in fields like energy, environment, transportation and health care."
International cooperation, the attendees said, would help ensure that policymaking was not simply left by default to the AI superpowers: the United States, which is a member of the Organization for Economic Cooperation and Development, and China, which is not.
"We think there can be a new model for the development of artificial intelligence that differs from China or California," said Bertrand Pailhès, the national coordinator for France's AI strategy.
In the view of Pailhès and others, China is a government-controlled surveillance state. In the American model, coming from Silicon Valley in California, a handful of internet companies become big winners and society is treated as a data-generating resource to be strip mined.
"The era of moving fast and breaking everything is coming to a close," said R. David Edelman, an adviser in the Obama administration and the director of the project on technology, policy and national security at MIT.
In Japan, artificial intelligence is being seized as a lever to spur dynamism in its stodgy, hierarchical corporate culture. Japan is investing heavily to encourage the development of AI technology with a particular emphasis on "startups, small companies and young people," said Osamu Sudoh, a professor at the University of Tokyo and a senior adviser to the Japanese government on AI strategy.
AI policy is data policy.
One specific policy issue dominated all others: the collection, handling and use of data.
Fast computers and clever algorithms are important, but the recent explosion of digital data — from the web, smartphones, sensors, genomics and elsewhere — is the oxygen of modern AI.
"Access to data is going to be the most important thing" for advancing science, said Antonio Torralba, director of the MIT Quest for Intelligence project. So much data is held privately that without rules on privacy and liability, data will not be shared and advances in fields like health care will be stymied.
Artificial intelligence can magnify the danger of data-driven injustice. Public-interest advocates point to the troubling missteps with the technology — software, for example, that fails to recognize the faces of black women or crime-prediction programs used in courtrooms that discriminate against African-Americans.
In such cases, data is the problem. The results were biased because the data that went into them was biased: facial-recognition training sets skewed toward white males, and crime data reflected the comparatively high percentage of African-Americans in the prison population.
"Are we just going to make the current racist system more effective, or are we going to get rid of embedded bias?" asked Carol Rose, executive director of the American Civil Liberties Union of Massachusetts.
These are issues of both technology design and policy. "Who is being mistreated? Who is being left out?" Abelson asked the class. "As you think about regulation, that is what you should be thinking about."