The way we think about artificial intelligence (AI) is changing. Traditionally, AI has tried to emulate and surpass human intelligence by mimicking how people think, rather than augmenting human capabilities with computer technology. While this approach has made huge gains - especially in image recognition and speech recognition, where machines now beat humans on many public benchmarks - its ability to cope with unexpected scenarios in the real world has been limited. Since the true value of AI lies in how it increases productivity in the real world, this limitation could prevent the wide adoption of AI systems if it is not well addressed.
But now a new approach is taking shape: human-centred AI. This keeps the end user at the forefront of the experience, and instead of trying to outperform humans, works with them to make them better at the task at hand. When humans and machines work together, the gains for human progress are much greater than AI going it alone.
Lacking a bedside manner: Where modern AI gets it wrong
Currently, if you give an AI a concrete goal and a lot of data on how to achieve it, it can generally outperform the average human. It's a simple formula: big data plus deep learning equals superhuman performance. While this undoubtedly captures headlines (think of the news coverage when IBM's Watson computer won the game show Jeopardy! in 2011), the catch is that AI's performance in unexpected real-world scenarios can be inferior to that of humans, who typically handle outliers gracefully as long as they are not under pressure or fatigued.
For example, autonomous vehicles are, on average, safer than human drivers because they eliminate errors caused by fatigue or distraction. However, their behaviour in unexpected scenarios - such as a balloon drifting in front of the vehicle - can be very dangerous. A human might be surprised, but can typically continue driving safely. Healthcare offers another good example: an AI system might identify a tumour more accurately than a doctor on average, because it has seen far more examples than any individual doctor. However, a human still needs to take responsibility for the diagnosis and win the patient's trust by explaining it carefully and kindly.
This is where the human factor becomes crucial.
Man vs machine
Instead of removing people from the process, human-centred AI works alongside humans to empower and assist them. Stanford University's Institute for Human-Centered AI is at the forefront of this approach. It has identified three areas of focus: studying and forecasting the impact of AI on humans and society; building AI systems that assist humans; and gaining a deeper understanding of human intelligence in order to develop new AI technology.
This last one is vital. Machine learning plus big data, in its current form, does not equate to human intelligence. A human-centred approach moves AI beyond data and algorithms alone to real-world applications with tangible societal benefits.
Humans are currently far better than machines at certain tasks, so it would be ridiculous to outsource every job to a robot. Humans are very good at learning from very few examples, for instance. When a new scenario arises and there is not yet much data to collect, the best approach is to let humans handle and explore it gracefully. In these circumstances, human intelligence is much more efficient than AI.
Humans have a lot of sophisticated sensors too, and while some digital sensors are good (such as cameras), robots cannot replicate a human's sense of smell, taste or touch to anywhere near the same degree of sensitivity at an affordable cost. For example, a masseuse can tell where you're sore just from touching you, but a machine without a good-enough touch sensor won't be able to do that. It wouldn't be a very good massage.
We can rebuild it: The key elements of a human-centred AI
With any human-centred AI system, trust is key. To instil trust, you need to let the user know under what conditions the system can be relied upon, and when they should question its decisions. It's also vital to temper its behaviour. For example, if a self-driving car is updated with more data and suddenly starts driving much slower because the data says it's safer, the user will think something is wrong, because the car has not met their usual expectations. If the behaviour needs to change, it should do so gradually, just as people gradually change their own behaviour as they learn.
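To make the idea of gradual behaviour change concrete, here is a minimal sketch of one possible mechanism: instead of jumping straight to a newly learned target speed, the system eases toward it over several updates. The function name, speeds and step count are illustrative assumptions, not part of any real self-driving system.

```python
def eased_speeds(old_speed: float, new_speed: float, steps: int) -> list[float]:
    """Interpolate linearly from the old target speed to the new one,
    so each update shifts behaviour by only a small, predictable amount."""
    return [old_speed + (new_speed - old_speed) * (i + 1) / steps
            for i in range(steps)]

# A model update says 80 km/h is safer than the current 100 km/h.
# Rather than dropping 20 km/h at once, the car slows in small increments.
schedule = eased_speeds(100.0, 80.0, steps=5)
print(schedule)  # [96.0, 92.0, 88.0, 84.0, 80.0]
```

The same idea generalises beyond speed: any user-visible behaviour that a retrained model changes sharply can be blended with the old behaviour over time, keeping the system predictable while it converges on the new policy.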
Any area with a very strong human-facing component will benefit the most from human-centred AI. For instance, in call centres, it was thought chatbots would be able to replace human agents, but even the best conversational AI still needs to be monitored by humans in case of errors. Healthcare is also a big one. Instead of replacing doctors and nurses with robots, human-centred AI can take over the diagnostic work (as long as it's checked by a doctor), which would increase the number of patients a doctor can see in a day. It would also reduce the rate of diagnostic error. It makes doctors more efficient and effective.
It all comes down to identifying the user's pain points and making sure the system addresses them. It's also very important that an AI can explain its decisions, especially in a sector like healthcare. This will let the doctor see why it reached the conclusion it did, check whether it is correct, and explain it to the patient.
Human-centred AI is guiding us to develop AI in the right way. By focusing on addressing the end user's pain points, we create an environment in which humans and AI can co-exist, with increased productivity, better standards of living and more benefits for society.
- The writer is chief AI scientist at Appier