Big Brother is watching you at work - and his name is AI

Concern has been raised over the use of tech to tighten management's grip on workers

The MetLife call centre (above) uses the AI program Cogito to monitor the way in which employees handle customers on the phone. The program tells the call centre employee if they are too slow to respond.

New York

WHEN Conor Sprouls, a customer service representative in the call centre of insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, AI (artificial intelligence) tells him how he's doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an "energy cue", with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of AI to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.

Mr Sprouls and the other call centre workers at his office in Warwick, Rhode Island, still have plenty of human supervisors. But the software on their screens - made by Cogito, an AI company in Boston - has become a kind of adjunct manager, always watching them. At the end of every call, Mr Sprouls' Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimising it, the program notifies his supervisor.

Cogito is one of several AI programs used in call centres and other workplaces. The goal, said Joshua Feast, Cogito's chief executive, is to make workers more effective by giving them real-time feedback.

"There is variability in human performance," he said. "We can infer from the way people are speaking with each other whether things are going well or not."

The goal of automation has always been efficiency, but in this new kind of workplace, AI sees humanity itself as the thing to be optimised. Amazon uses complex algorithms to track worker productivity in its fulfilment centres, and can automatically generate the paperwork to fire workers who don't meet their targets, as The Verge uncovered this year.

(Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.)

IBM has used Watson, its AI platform, during employee reviews to predict future performance and claims it has a 96 per cent accuracy rate.

Then there are the startups. Cogito, which works with large insurance companies like MetLife and Humana, as well as financial and retail firms, says it has 20,000 users.

Percolata, a Silicon Valley company that counts Uniqlo and 7-Eleven among its clients, uses in-store sensors to tot up a "true productivity" score for each worker, and rank workers from most to least productive.

Management by algorithm is not a new concept. In the early 20th century, Frederick Winslow Taylor revolutionised the manufacturing world with his "scientific management" theory, which tried to wring inefficiency out of factories by timing and measuring each aspect of a job. More recently, Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources - scheduling, payroll, performance reviews - to computers.

But using AI to manage workers in conventional, nine-to-five jobs has been more controversial. Critics say that handing managerial tasks to algorithms can dehumanise and unfairly punish employees. And while it's clear why executives would want AI that can track their workers, it's less clear why workers would.

Marc Perrone, the president of United Food and Commercial Workers International Union, which represents food and retail workers, said in a statement about Amazon in April: "It is surreal to think that any company could fire their own workers without any human involvement."

In the gig economy, management by algorithm has also been a source of tension between workers and the platforms that connect them with customers. This year, drivers for Postmates, DoorDash and other on-demand delivery companies protested a method of calculating their pay that used an algorithm and put customer tips toward guaranteed minimum wages - a practice that was nearly invisible to drivers, because of the way the platform obscures the details of worker pay.

Still, there is a creepy sci-fi vibe to a situation in which AI surveils human workers and tells them how to relate to other humans. And it is reminiscent of the "workplace gamification" trend that swept through corporate America a decade ago, when companies used psychological tricks borrowed from video games, like badges and leader boards, to try to spur workers to perform better.

Phil Libin, the chief executive of All Turtles, an AI startup studio in San Francisco, recoiled in horror when I told him about my call centre visit.

"That is a dystopian hellscape. Why would anyone want to build this world where you're being judged by an opaque, black-box computer?" he asked.

Defenders of workplace AI might argue that these systems are not meant to be overbearing. Instead, they're meant to make workers better by reminding them to thank the customer, to empathise with the frustrated claimant on Line 1 or to avoid slacking off on the job.

The best argument for workplace AI may be situations in which human bias skews decision-making, such as hiring. Pymetrics, a New York startup, has made inroads in the corporate hiring world by replacing the traditional résumé screening process with an AI program that uses a series of games to test for relevant skills. The algorithms are then analysed to make sure they are not creating biased hiring outcomes, or favouring one group over another.

Frida Polli, Pymetrics' chief executive, said: "We can tweak data and algorithms until we can remove the bias. We can't do that with a human being."

Using AI to correct for human biases is a good thing. But as more AI enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won't be the robots staging an uprising. NYTIMES