
Digital Transformation Blog

Practical insights on the digital revolution

What to Make of the Hype around AI

Artificial Intelligence is certainly at the top of the “hype cycle”. As is so often the case with hype terms, the phrase is not particularly helpful when trying to identify business applications that can deliver value here and now.

Let’s look at it as “pattern recognition and machine learning”, which covers a big chunk of what “AI” is today:

Pattern recognition is a relatively mature field in computer science. The types of pattern being recognised successfully by machines have grown in complexity over time, in step with the ability of computer systems to sense and process the required information. So we moved from image recognition to voice recognition and, more recently, to video recognition.

Facebook, via its DeepFace program, can verify whether two digital images show the same human face with 97 per cent accuracy (it uses a nine-layer neural net and was trained on four million images). Google unveiled an artificial neural network that can verify the identity of a speaker saying “OK Google” with an error rate of 2 per cent.

As such, many mass applications are now being deployed: “intelligent assistants” (Siri, Alexa, Google Assistant, …) have been deployed in smartphones and appear increasingly in our homes. Border controls in airports are being replaced at a rapid pace by “smart gates” (Australia, New Zealand, US, Canada, …). Law enforcement agencies operate sophisticated image recognition databases. And Walmart is even using real-time face scanning to identify known shoplifters. Recently, the Chinese government deployed smart glasses to police officers, using real-time video connected to centralised databases to identify criminals on the go.

Machine learning is closely related to pattern recognition and effectively refers to “self-learning” systems. These systems produce non-deterministic outcomes, i.e. results that are not reached by simply following explicit instructions to select one of a set of pre-determined choices. Machine learning dates back to the 1950s and many sophisticated algorithms have been developed since.
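
To make the distinction concrete, here is a minimal sketch in Python (using the scikit-learn library; the toy data is invented for illustration). A hand-coded rule can only ever select one of its pre-determined choices, whereas a learned model derives its own decision boundary from examples:

    from sklearn.linear_model import LogisticRegression

    # Toy training data (invented): [message_length, exclamation_marks],
    # labelled 1 for "spam" and 0 for "not spam".
    X = [[120, 0], [30, 5], [200, 1], [15, 8], [180, 0], [25, 6]]
    y = [0, 1, 0, 1, 0, 1]

    def rule_based(message_length, exclamations):
        # Deterministic: the outcome is fixed in advance by the programmer.
        return 1 if exclamations > 3 else 0

    # The model learns its own rule from the examples instead.
    model = LogisticRegression().fit(X, y)

    print(rule_based(40, 4))         # Always the same answer for this input.
    print(model.predict([[40, 4]]))  # The answer depends on what was learned.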

Machine learning has many applications. Recently, Google deployed Smart Reply to Gmail: based on an incoming email, a user is presented with the system’s three best guesses for short replies. Based on the user’s choice, Gmail learns and improves its suggestions over time.
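
Google has not published the internals of this feature, but the feedback principle can be illustrated with a deliberately simple Python sketch. The candidate replies and the ranking rule below are invented for illustration; the point is that every user choice becomes a training signal:

    from collections import defaultdict

    # Hypothetical sketch: re-rank canned replies by how often users pick them.
    # This is not Gmail's actual algorithm, just the feedback principle.
    pick_counts = defaultdict(int)

    CANDIDATES = ["Sounds good!", "Thanks, got it.", "Let me check and get back to you."]

    def suggest(n=3):
        # Offer the n replies users have historically picked most often.
        return sorted(CANDIDATES, key=lambda r: pick_counts[r], reverse=True)[:n]

    def record_pick(reply):
        # Each user choice is a training signal that improves future ranking.
        pick_counts[reply] += 1

    record_pick("Thanks, got it.")
    record_pick("Thanks, got it.")
    print(suggest())  # "Thanks, got it." now ranks first.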

Chatbots are a popular application, but they also demonstrate the current limitations: many disappoint, amounting to little more than a basic FAQ that puts a barrier between customers and “real humans” in customer service.
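
A hypothetical Python sketch of the naive keyword matching behind many of these bots makes the limitation obvious: the bot can only ever return one of its pre-written answers, and anything unanticipated falls through:

    # A sketch of the naive keyword matching behind many disappointing
    # chatbots: it can only ever return one of its pre-written answers.
    FAQS = {
        "how do i reset my password": "Visit Settings > Security and click Reset.",
        "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    }

    def answer(question):
        words = set(question.lower().strip("?!.").split())
        best, best_overlap = None, 0
        for faq, reply in FAQS.items():
            overlap = len(words & set(faq.split()))
            if overlap > best_overlap and overlap >= 2:
                best, best_overlap = reply, overlap
        return best or "Sorry, I didn't understand. Please hold for an agent."

    print(answer("How do I reset my password?"))  # Matches a canned answer.
    print(answer("My account is locked out!"))    # Unanticipated: falls through.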

“Artificial Intelligence” then refers to a combination of pattern recognition and machine learning. Two high-profile systems have gained particular prominence: Google DeepMind’s AlphaGo famously beat the world’s best Go player in 2017, and the underlying technology is now being deployed in its first commercial applications.

IBM’s Watson has been in the making for many years and demonstrated its power not only by beating champions at Jeopardy! in 2011, but also by learning from top cancer scientists how to diagnose the disease. It is now being marketed as a commercial application in many fields, including customer service.

For organisations looking to make resource allocation decisions, it is important to recognise two realities:

Firstly, it is unlikely that artificial intelligence tools will be developed in-house in any meaningful way. Rather, expect artificial intelligence to be accessed as a feature of one of the enterprise platforms already on offer. Salesforce, for example, markets its “Einstein” module to offer its users predictive, intelligent support.
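
In practice, that means consuming the platform’s AI over an API rather than building models. The Python sketch below shows the general pattern only; the endpoint, payload and “score” field are invented for illustration and are not Salesforce’s (or any vendor’s) actual API:

    import requests  # third-party HTTP library

    # Hypothetical sketch of consuming AI as a platform feature rather than
    # building it in-house. The endpoint and payload are invented.
    VENDOR_URL = "https://api.example-platform.com/v1/predict"

    def score_lead(lead):
        response = requests.post(
            VENDOR_URL,
            json={"record": lead},
            headers={"Authorization": "Bearer YOUR_API_TOKEN"},
            timeout=10,
        )
        response.raise_for_status()
        # The vendor's model does the heavy lifting; we just consume the result.
        return response.json()["score"]

    # Example (would require a real vendor endpoint and token):
    # print(score_lead({"company": "Acme Pty Ltd", "days_since_contact": 12}))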

Secondly, any deployment of artificial intelligence requires at least two critical inputs: clean, useful data and the ability to “train” the system. Many deployment projects have found that meeting these requirements, rather than the power of the system itself, eventually became the critical path.

Whilst data is generally plentiful, it is often poorly governed and badly managed. Customer databases have been built over decades, dating back to an era when data was considered ancillary to a business. For example, customer email addresses may have been collected decades ago for marketing campaigns, but are now being used to provide access to sensitive data.
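
A useful first step is therefore a basic data-quality audit before any “AI” project begins. Here is a minimal Python sketch; the sample records and checks are invented for illustration:

    import re

    # A sketch of a basic data-quality audit on legacy customer records
    # (invented sample data) before they are fed into any "AI" system.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    customers = [
        {"name": "A. Smith", "email": "a.smith@example.com"},
        {"name": "B. Jones", "email": "b.jones@oldisp"},  # collected decades ago
        {"name": "", "email": None},                      # incomplete record
    ]

    def audit(records):
        issues = []
        for i, rec in enumerate(records):
            if not rec.get("name"):
                issues.append((i, "missing name"))
            email = rec.get("email")
            if not email or not EMAIL_RE.match(email):
                issues.append((i, "invalid or missing email"))
        return issues

    print(audit(customers))  # Flags the stale email and the incomplete record.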

Traditional training methods (courses, manuals and the like) are not very useful when training a computer system. As it turns out, many of an organisation’s business rules are not codified and often not even documented. Rather, they are enshrined in “practices” that have developed over time. This is why customer service outcomes at large consumer brands tend to depend on who happens to answer the phone or be rostered on in a given store.

So perhaps the current focus needs to be on getting ready for the intelligent machine world by preparing data and training capabilities. Even basic applications of “AI” can then produce high-value commercial outcomes in relatively short periods of time. For example, it is still amazing that most large consumer providers rely on a centralised training method for their customer service agents: typically, a centralised team produces material which is then vetted by lawyers and deployed in a one-size-fits-all, one-way program. This is contrary to reality, where the most valuable content is enshrined in the best operators’ brains. It wouldn’t be that difficult to deploy a crowd-powered, peer-to-peer knowledge-sharing tool, all the while codifying the company’s customer service practices for a future AI to use.
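
A minimal Python sketch of such a tool, with invented names and fields: agents submit tips, peers upvote them, and the store doubles as codified training material for a future AI:

    from dataclasses import dataclass

    # Hypothetical sketch of a crowd-powered, peer-to-peer knowledge tool.
    @dataclass
    class Tip:
        topic: str
        text: str
        author: str
        votes: int = 0

    class KnowledgeBase:
        def __init__(self):
            self.tips = []

        def submit(self, topic, text, author):
            self.tips.append(Tip(topic, text, author))

        def upvote(self, tip):
            tip.votes += 1  # Peer validation surfaces the best practices.

        def best(self, topic):
            # The top-voted tip per topic is, in effect, a codified practice.
            matches = [t for t in self.tips if t.topic == topic]
            return max(matches, key=lambda t: t.votes, default=None)

    kb = KnowledgeBase()
    kb.submit("refunds", "Offer a replacement first; most customers prefer it.", "agent_42")
    kb.upvote(kb.tips[0])
    print(kb.best("refunds").text)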

In the meantime, as the underlying technology develops further at predictably exponential speed, the cutting-edge debate has moved on to how to handle questions of ethics and likely “biases”.

If we hand off decision-making rights to machines, how do we expect them to apply questions of ethics? MIT Media Lab has explored this question in the context of self-driving cars: should an autonomous vehicle kill its driver or a pedestrian if faced with that choice? Should it consider the age or profession of either person? Should the decision be different if the pedestrian was jaywalking?

Making this more complicated is the realisation that, as most of the data used to train intelligent systems was originally created by humans, it also reflects humans’ biases. If, for example, a system were trained to write movie scripts, it would conclude from past examples that men ought to get more lines than women, as a recent analysis has found. It is easy to imagine how future “AIs” might be racist or harbour ill will against immigrants.
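
The practical counterpart is a bias audit of the training data itself. Here is a Python sketch using an invented script dataset, counting dialogue lines by character gender; a model trained on such data would simply reproduce the imbalance:

    from collections import Counter

    # Invented sample of a script dataset; the format is hypothetical.
    script = [
        {"character": "LEAD", "gender": "M", "line": "We move at dawn."},
        {"character": "CAPTAIN", "gender": "M", "line": "Agreed."},
        {"character": "ANALYST", "gender": "F", "line": "The data says otherwise."},
    ]

    # Count dialogue lines per gender to surface any imbalance.
    lines_by_gender = Counter(row["gender"] for row in script)
    total = sum(lines_by_gender.values())
    for gender, count in lines_by_gender.items():
        print(f"{gender}: {count} lines ({count / total:.0%})")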

Organisations sifting through call centre scripts, live chat transcripts or other potential training material in order to ready themselves for an automated future need to be prepared for this, and to set some principles now, before being embarrassingly caught out in public.

In 2016, Microsoft released an experimental chatbot, “Tay”, on Twitter to demonstrate conversational understanding. Sadly, Tay learned from its users to become a racist bigot in less than 24 hours.

Don’t be that bot.