Friday, 25 May 2018

Will self-learning BOTs repeat our mistakes?

If BOTs are supposed to learn from us, then shouldn't they also learn the mistakes we make, and repeat them?


For a moment, think of a self-learning AI/ML BOT as a baby - a sweet, innocent BOT. A baby learns from its environment. Good parenting inculcates good behavior and vice versa; such learnings make a permanent impression on the mind, one the kid carries throughout his or her life. What if a pattern of wrong human behavior (read: training data) - behavioral patterns, purchase patterns, investment patterns, decision patterns and so on - tunes the BOT to take inappropriate or unfavorable decisions?

Machine learning is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. The learning process begins with observations or data - examples, direct experience, or instruction - in order to look for patterns in the data and make better decisions in the future based on the examples we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly.
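To make that concrete, here is a deliberately naive toy sketch of the point this post keeps returning to: a "recommender BOT" that learns nothing but frequency from past behavior. All the data and names below are hypothetical; the point is that a BOT which only mirrors its training data will faithfully reproduce whatever bias that data contains.

```python
from collections import Counter

# Hypothetical purchase history, dominated by impulse buys.
# A BOT trained only on this history has no notion of "good" choices.
purchase_history = [
    "burger", "burger", "fries", "burger", "soda",
    "salad", "burger", "soda", "fries", "burger",
]

def recommend(history, top_n=2):
    """Suggest the most frequent past purchases - no judgment applied."""
    counts = Counter(history)
    return [item for item, _ in counts.most_common(top_n)]

print(recommend(purchase_history))
```

The BOT's top suggestion is, inevitably, the item the shopper over-bought in the first place - the pattern in the data becomes the recommendation, mistakes and all.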


Let's look back at history ...

Folks like me, born in the 80s or earlier, have heard the stories about Enron - the famous O&G company and apple of the stock market's eye that got hit with a multi-billion dollar lawsuit from its shareholders for destroying, altering, and fabricating financial records to inflate its valuation. It's a different story that many IT companies made a fortune making systems comply with the Sarbanes-Oxley Act, introduced as a remedial measure after the scandal. But let's look at the events and the associated shareholder behavior. Enron became the largest seller of natural gas in North America by 1992. In an attempt to achieve further growth, it pursued a diversification strategy, owning and operating a variety of assets - many of which later turned out to exist largely on paper, but which at the time presented shareholders with a bloated balance sheet. Enron's stock rose 311% from the start of the 1990s to year-end 1998. By December 2000, Enron's market capitalization exceeded $60 billion - 70 times earnings and six times book value - an indication of the stock market's high expectations about its prospects. In addition, Enron was rated the most innovative company in America in Fortune's Most Admired Companies survey. It was none other than the shareholders, the market analysts, and their understanding of Enron's potential that took the company there. If an investment BOT learns from the behavior of Enron's shareholders, it is bound to make the same mistakes.

Let's take another example ...

"...two-third of what we buy in the supermarket we had no intention of buying" says consumer behavior expert Paco Underhill. Supermarkets not only rely on such behavior; they encourage and make use of it. Every aspect of the store layout to product display to assortment selection is designed to stimulate shopping serendipity. Theories of visual merchandising to allure customers is commonly used both in brick-n-mortar stores as well as in mobile or e-commerce. And I am sure, everyone who is reading this blog have read the phenomenon earlier elsewhere, but still we all make impulse purchases. Had there been a BOT which learns from this purchasing behavior is bound to learn wrong practices and misguide customers.

My habit is to listen to music before I go to bed. But if my Alexa keeps suggesting Led Zep, Megadeth, or Anthrax at bedtime - because those match the Metallica numbers I request at odd hours in the morning to keep myself awake while finishing off a deck - then I will end up with disturbed sleep every night.

I am a foodie and often succumb to temptation when I go out to eat. If the AI on Zomato keeps suggesting burger joints because those are the searches I have made in the past, then I will never get into healthy eating habits.

AI/ML-enabled BOTs are definitely giving me a "better experience" - or at least the perception of one - based on the historical patterns they have learned over time, but are they helping me make "better decisions"?

Ultimately, systems and organizations are run by humans. It is important for us to understand how an AI system arrives at its decisions, and it would be a mistake to accept those decisions blindly. Making the right decision is more about "REAL Common Sense" than "ARTIFICIAL Intelligence". While there's no doubt that AI/ML brings immense possibilities to make our lives better, the ever-optimistic part of me still says it is our common sense, values, intuition, acumen, and intellect that are going to make a better world for us.

