COMMENTARY

Artificial intelligence has its limits

SAVE for desserts, curry leaves are an essential, almost mandatory, condiment in any South Indian dish; the cuisine has no real substitute for them. Likewise, artificial intelligence is the one common denominator across new tech products. AI is omnipresent, demanded by application users and touted as the new mantra by technology-based lifestyle and industrial players large and small. There are even predictions that one day the chess grandmaster could be a machine, and that long-unsolved mathematical conjectures will be proved in the 21st century by machines.

OMNIPRESENT, BUT IS IT OMNIPOTENT?

Much of the fascination with AI stems from its promise to predict an uncertain future. Data too complex for humans to handle are woven into viable algorithms that extrapolate past patterns into the future. Because lives revolve around routines and people settle into habits, the principle holds. This is what makes us believe that AI can supplant mental tasks, not just physical ones. Two important caveats, however, escape most people's attention. One is that the past must be a predictor of the future for AI to succeed. More significantly, algorithms are built only on known variables that affect outcomes.
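To make these caveats concrete, here is a minimal sketch in Python (with made-up numbers, purely for illustration) of pattern extrapolation in its simplest form: the model learns a trend from past data and assumes it will persist.

```python
# A minimal sketch of pattern extrapolation on synthetic (made-up) data.
# It fits a straight-line trend to past observations and projects it forward,
# which only works if tomorrow behaves like yesterday.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)                                # four years of monthly history
sales = 100 + 2.5 * months + rng.normal(0, 5, 48)     # a stable, routine upward trend

slope, intercept = np.polyfit(months, sales, 1)       # learn the past pattern
future_months = np.arange(48, 54)
forecast = intercept + slope * future_months          # extrapolate it forward

print(np.round(forecast, 1))
# The forecast is only as good as the assumptions that the trend persists
# and that no unmodelled variable disturbs it.
```

Both caveats are visible here: the line fitted to history is useful only while the routine continues, and it knows nothing about variables that were never in the data.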

Life is not so simple. Take the future price of oil, which AI systems would predict if they could. They cannot. Even crunching 50 years of data, sizing up geopolitical events, wars, alliances, the changing basket of supplier countries and the character of rulers of oil-rich states, could not have predicted the surge in shale oil, the collapse of the Venezuelan oil supply chain or the new Russia-Iran-China axis. The variables are far too many and far from predictable.

If you love to play the stock market and are looking for an algorithm to spot rising stocks ahead of time, there is good news and bad news. The good news is that almost every fund already has such algorithms.

The bad news is that they have a poor track record. They are not to blame: they can only predict from variables with a known cause-and-effect relationship to the outcome. They fall short when random variables pop up without warning.

The Shanghai Composite Index has fallen some 25 per cent this year, largely attributable to the start of the US-China trade dispute. The random variable was the Trump administration's unexpected tariff offensive. Which algorithm could have factored that in? Robot bankers usurping the role of wealth managers remain a distant prospect, if they arrive at all.
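As a hypothetical illustration of why such shocks defeat these algorithms, the sketch below uses synthetic numbers (not actual market data) to fit a simple trend on the pre-shock period and then measure how badly it misses once an unmodelled break occurs.

```python
# Illustrative sketch with synthetic numbers, not real index data: a model
# fitted on pre-shock history keeps forecasting the old pattern after an
# unforeseen shock (say, a surprise tariff announcement) hits the series.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(300)
index = 3300 + 0.5 * days + rng.normal(0, 20, 300)    # calm, gently trending market
index[200:] -= 800                                    # unmodelled shock at day 200

slope, intercept = np.polyfit(days[:200], index[:200], 1)   # trained before the shock
pred = intercept + slope * days

pre_err = np.abs(pred[:200] - index[:200]).mean()
post_err = np.abs(pred[200:] - index[200:]).mean()
print(f"average error before shock: {pre_err:.0f}, after shock: {post_err:.0f}")
# The post-shock error is dozens of times larger: the shock was never a
# variable in the model, so no amount of history could anticipate it.
```

The point is not the particular numbers, which are invented, but that the error explodes precisely where the random variable enters.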

Our children would be glad to get advance predictions of how they will fare in their next examinations. After all, depending on their years at school, there is a large volume of past data to draw on. Yet AI analytics systems cannot predict these results either.

An understanding of AI's potential and limitations will help us clear up myths and fears. So what are the necessary conditions for AI-based predictive analytics to be useful?

  • Past behaviour that is less likely to change dramatically (weather patterns, traffic patterns, image matching). The changes, if any, follow another pattern which could also be woven into the logic;
  • Variables affecting the outcome (cause and effect) are substantially decoded (human responses to shock events, sales response to promotion programmes, drug impact on diseases);
  • Operations have been substantially mechanised by process standardisation (aircraft navigation, automotive assembly, large-volume mathematical computing);
  • Majority rule applies ("more often than not" conditions - propensity to eat lunch/dinner at certain times; see the sketch after this list). The non-default condition applies minimally or not at all;
  • A range of conditions have already been captured with past data patterns (life expectancy, gender ratio, road accident rates, voice recognition);
  • Large amounts of transactional data are collected for other purposes (supermarket sales of products, tourism data, car engine performance);
  • Human actions are subconscious (written language in everyday communication, walking speed). In other words, the moment human actions and decisions become conscious (based on emotions, reverse logic, morality or impulses such as revenge or anger), an AI algorithm will falter;
  • The full set of conditions is definable (driverless cars, home or office security systems);
  • Unity of relationships (a command has only one meaning - book a taxi, approve loan if conditions are met). Either/or conditions (as in many decision tree problems) will make it difficult to construct AI solutions;
  • An outside trigger sets off the task. No AI system starts on its own, though a human could program it to. Humans, in contrast, are often proactive;
  • There are no creative, un-patterned alternatives (this is the realm of art, music, scientific discoveries, dance, creative writing etc - the power of thinking differently from patterns).
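To illustrate the majority-rule condition above, here is a toy sketch (with hypothetical lunch-time data) of the simplest possible predictor: always forecast the most frequent past value. It is right more often than not, and silent about the exceptions.

```python
# Toy illustration of the "majority rule" condition, using hypothetical data:
# predict the most frequent past value and accept that the exceptions are missed.
from collections import Counter

past_lunch_hours = [12, 12, 13, 12, 12, 12, 13, 12, 12, 12]   # made-up history
prediction, count = Counter(past_lunch_hours).most_common(1)[0]

print(f"Predicted lunch hour: {prediction}:00 "
      f"(correct {count} out of {len(past_lunch_hours)} past days)")
# The default case dominates, so the prediction is usually right; the day the
# person skips lunch for a crisis meeting is exactly the one it misses.
```

Richer models do the same thing at scale, and the principle is identical: the non-default case is where the algorithm falters.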

We are now conditioned to think that AI's predictive powers are near 100 per cent. From the above, you can see that this is a pipe dream. The corollary is that if any of these conditions is unfulfilled, partially or fully, the problem slips into the fuzzy area of unpredictability. We could still use AI-based predictions in such situations, but only as one scenario among several; human interpretation then becomes necessary. It will be interesting to see how much of this gap is closed by improvements in machine learning (imitating human neural networks, for example) in the years to come. AI will bring significant transformation, but not in everything.

  • The writer is a business consultant and author of Nuanced Account Management: Driving Excellence in B2B Sales.
