Amazon pitches facial recognition tech at agency handling migrants

This is despite protests from its employees and civil liberties groups, and research showing that some AI facial-recognition software is racially biased

Seattle

AMAZON.com Inc has pitched its facial-recognition technology as a tool for US Immigration and Customs Enforcement (ICE), even as criticism swirled from within the company's workforce and civil liberties groups.

Employees in the Amazon Web Services cloud-computing unit met with the federal agency in California to present the company's artificial-intelligence (AI) tools, which can identify people in surveillance footage by tapping an image database, according to e-mail exchanges obtained by the non-profit Project on Government Oversight.

Those tools include Rekognition, which can quickly identify people in photos and videos, meaning the software could enable law enforcement agencies to track individuals from cameras in public places.
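
For illustration, here is a minimal sketch of the kind of lookup Rekognition's public API supports, using Amazon's boto3 SDK for Python. It assumes a face collection has already been indexed with Rekognition's IndexFaces operation; the collection name and image file below are hypothetical.

    import boto3

    # A sketch of searching an indexed face collection for the largest
    # face found in a single surveillance frame (hypothetical inputs).
    rekognition = boto3.client("rekognition")

    with open("surveillance_frame.jpg", "rb") as f:  # hypothetical frame
        frame_bytes = f.read()

    response = rekognition.search_faces_by_image(
        CollectionId="example-face-collection",  # hypothetical collection
        Image={"Bytes": frame_bytes},
        FaceMatchThreshold=90,  # report matches at 90% similarity or above
        MaxFaces=5,
    )

    for match in response["FaceMatches"]:
        face = match["Face"]
        label = face.get("ExternalImageId", face["FaceId"])
        print(f"{label}: {match['Similarity']:.1f}% similarity")

Tracking someone across public cameras would amount to running this kind of search repeatedly over the frames of a video feed.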

The American Civil Liberties Union (ACLU) in May criticised the use of the technology by police departments in Oregon and Florida, saying it threatened civil rights.

News that the technology is being considered by federal immigration officials was reported earlier on Tuesday by news and opinion Web site The Daily Beast.

Amazon shared details about Rekognition and other tools at a "boot camp" sponsored by McKinsey & Co. and attended by other technology companies, an Amazon spokeswoman said in a statement.

She said: "As we usually do, we followed up with customers who were interested in learning more about how to use our services. Immigration and Customs Enforcement was one of those organisations where there was follow-up discussion."

ICE has no current contract with Amazon, agency spokesman Matthew Bourke wrote in an e-mail. The agency regularly meets with vendors to learn more about the tools they are offering, he said.

Mr Bourke added: "ICE's Homeland Security Investigations has used facial recognition in the past to assist in criminal investigations related to fraudulent activities, identity theft and child-exploitation crimes, and the component will continue to explore cutting-edge technology to complement criminal investigations."

Law enforcement has made wide use of facial recognition for a range of tasks, from comparing mug shots with databases of drivers' licence photos to scanning the faces of people walking past surveillance cameras.

Some AI software used for facial recognition has been shown to be racially biased because it was trained with relatively few minority images. In an infamous example from 2015, Google's AI-powered photo-tagging system classified some black people as gorillas.

In a paper published this year, researchers at the Massachusetts Institute of Technology and Microsoft Corp found that facial-recognition systems are far less accurate at identifying non-white people and women than white men.

In 2016, researchers at Georgetown University found that at least five major police departments claimed to run real-time face recognition on footage from street cameras, or expressed interest in doing so.

The use of AI tools by government has divided the tech industry. Alphabet Inc responded to employee protests over its military contracts by releasing a set of principles to guide which contracts involving its AI tools it will pursue; it also will not renew a Pentagon contract under which Google AI was used to analyse drone video footage.

Amazon chief executive Jeff Bezos said earlier this month that his company will "continue to support" the US Defence Department, drawing a contrast with Google. BLOOMBERG