Cast your mind back to the early 1990s, specifically the cell phone market at the time. Devices were bulky, basic, and expensive. Motorola’s DynaTAC cost around $10,000 and weighed in at 2.5 pounds. A full charge took 10 hours, affording just 30 minutes of call time – its one and only function. It stored some 30 phone numbers and offered what can charitably be described as inconsistent call quality.
By comparison, today’s cell phones feel like digital Swiss Army knives.
Sleek form, incredible computing power, and powerful internet connectivity aside, what is most remarkable about the modern cell phone is that it’s built to support a vast variety of functions. To do this, every component and micro-system must fit tightly and seamlessly with the others, optimized to the point that they work intuitively and in perfect harmony, like neurons firing in the brain, enabling us to do a hundred different things with the same body.
This progression from single-use cases to compact multimodal capabilities happened at a breakneck pace — just under three decades. Artificial intelligence (AI) is advancing at a similar speed. In fact, we have grown to live with AI; sophisticated algorithms are already informing our daily choices – from movie recommendations to email responses. AI, in various forms and scope, is already out in the real world.
While early evidence shows great potential in present-day AI – its hundreds of real-world applications prove as much – it is still light years away from understanding, learning, and executing tasks the way humans can.
IBM spent the best part of a decade training its supercomputer Watson to diagnose diseases. While it thrived in laboratory conditions, it consistently failed to perform in the messy reality of today’s healthcare systems. In facial recognition, recruitment, and security, other AI-powered systems fared little better. Craig Smith, CEO of artificial intelligence publication Eye On, pulled no punches when he described AI as “useful, but crude and cumbersome.”
Like all technologies that came before it, AI or machine learning – its subfield – is moving through the typical hype cycle: from the innovation trigger through the peak of inflated expectations to the trough of disillusionment. Inevitably, it will climb the slope of enlightenment. Eventually, it will deliver on its ‘grand promise.’ Getting there will likely start, if not with a sudden breakthrough, with piecing together different forms and capabilities of the technology to reveal some semblance of cognitive breadth. Or intelligence.
Indeed, we are beginning to see the green shoots of such thinking. Particularly in customer service.
The state of AI in customer service
Perhaps ironically, customer service, a people-centric discipline, is defined by technology. The advent of the telephone detached customer service from its face-to-face past. Switchboards and contact centers condensed the function into a single department. Interactive Voice Response (IVR) offered the first look at automation. The proliferation of internet-based communication—email, social media, live chat, and bots—ushered in a new era of consumer-centric service. Throughout, technology expanded the range of interactions and the overall performance of the customer service function.
Leaders in the field are always on the hunt for new technologies that deliver improvements in efficiency and output. While still in its infancy, artificial intelligence, in one way or another, has already found a home in one-third of customer service departments. Even those hesitant to invest immediately admit its potential; two-thirds of leaders say they expect AI to take on greater importance over the next two years. All the more so since personalized, conversational interfaces are becoming the gold standard for delightful customer engagement.
Although interest in AI is spiking, its application in customer service remains limited to intermediary and supporting roles. This is because, in its current form, AI is most impactful as a technology that complements and strengthens human capability, not one that fully replaces it.
Consider live chat and self-service systems – two of the most popular AI-enabled tools in customer service. While algorithmic process automation relieves agents of shallow work, that work is almost always administrative—identity verification, call routing, and so on. The duty of ‘service delivery’ still remains largely with the human agent. The role of chatbots is similar. They triage customer queries and locate information rather than directly providing service.
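The kind of triage a chatbot automates can be sketched in a few lines. This is a hypothetical, minimal example, not any vendor’s actual system: queue names and keywords are invented, and real routers use trained classifiers rather than keyword sets. The shape of the logic, though, is the same: match what can be codified, and hand everything else to a person.

```python
# Hypothetical rule-based triage: keyword matching routes a query to a
# queue, and anything unmatched goes to a human agent. Queue names and
# keyword lists are invented for illustration.

ROUTES = {
    "billing":   {"invoice", "refund", "charge", "payment"},
    "technical": {"error", "crash", "login", "password"},
}

def route(query):
    """Return the first queue whose keywords overlap the query's words."""
    words = set(query.lower().split())
    for queue, keywords in ROUTES.items():
        if words & keywords:
            return queue
    return "human_agent"  # service delivery still falls to a person

print(route("I was double charged on my invoice"))   # → billing
print(route("Why do you keep getting this wrong?"))  # → human_agent
```

Note where the automation stops: the router can file a complaint under “billing,” but resolving the complaint, the actual service delivery, is exactly the part it hands off.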
“All these systems can do one tiny thing: they learn a function that relates inputs to a predictive variable,” explains veteran AI researcher, statistician, and investor Steve Shwartz. “There’s no intelligence. If I have a system that can translate language, it has no idea what the words mean.”
Consider an image identification tool, says Shwartz. The AI can identify your brother in one photo and your sister in another — but it doesn’t know what a sibling is. It doesn’t even know what a person is.
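Shwartz’s point can be made concrete with a toy model. The sketch below is purely illustrative (the “face embeddings” are made-up numbers, and real image systems use deep networks, not nearest-neighbour lookup), but it shows exactly what he means by “a function that relates inputs to a predictive variable”: the model maps feature vectors to labels, and nothing in it encodes what a sibling, or a person, is.

```python
# A "learned" model reduced to its essence: a function from inputs to a
# predicted label. Training here is just memorisation; prediction is just
# distance. No part of this encodes the meaning of "brother" or "sister".

def train(examples):
    """Store labelled feature vectors; 'learning' is pure memorisation."""
    return list(examples)

def predict(model, x):
    """Return the label of the closest stored vector (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], x))[1]

# Toy "face embeddings": two invented numbers standing in for pixel features.
model = train([
    ((0.9, 0.1), "brother"),
    ((0.1, 0.9), "sister"),
])

print(predict(model, (0.8, 0.2)))  # → brother
print(predict(model, (0.2, 0.7)))  # → sister
```

The predictions are correct, yet swap the label strings for “cat” and “dog” and nothing about the system changes: the labels are opaque tokens, which is precisely the absence of understanding Shwartz describes.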
This is why customer service leaders ought to apply rational optimism to AI. Customer service is inherently language-driven, largely unstructured, and loosely defined, so most AI tools today apply only to a narrow subset of actions and interactions, particularly those that can be codified into rules. For AI to have a deeper impact, the function would need something akin to general intelligence: a system capable of turning millions of interactions, conversations, and rules into generalized ‘common sense’ and reacting appropriately — something today’s AI systems cannot do. This fact fundamentally delineates what AI can and cannot do for customer service today. Indeed, for the foreseeable future, companies should deploy modern AI tools in offensive (proactive) rather than defensive (reactive) functions.
Despite these limitations, technologists are not daunted. They believe we are racing through the 1990s DynaTAC era of artificial intelligence. Some promise that impactful applications are on the horizon if we hold our nerve. But as with most technologies, the implementation is more nuanced.
A glimpse into the future
Most AI practitioners agree the technology operates best in narrow, tightly defined customer service jobs—transcription, triage, information retrieval, and so on. What it will take to lift AI beyond such a supporting role is debated, but some highlight emotion and sentiment as the next significant milestone in AI’s evolution.
Currently, most systems lack insight into the motivation and intention behind language. Consider the phrase, “You’re doing a great job.” Said honestly and earnestly, it is a compliment. Said sarcastically, it is a withering insult. Most AI systems cannot distinguish between the two, which is potentially ruinous because the optimal responses are wildly different: the former calls for recognition and thanks; the latter, placation and apology.
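The blind spot is easy to demonstrate with the simplest family of sentiment tools, lexicon scoring. The word lists and scoring below are invented for illustration; production systems use far larger lexicons and statistical models, but the classic failure mode is the same: identical words score identically, regardless of intent.

```python
# Hypothetical lexicon-based sentiment scorer: count positive words minus
# negative words. Word lists are invented for illustration.

POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"terrible", "bad", "awful", "hate"}

def naive_sentiment(text):
    """Classify a sentence by simple word counting, ignoring context."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# The sincere compliment and the sarcastic jab score identically:
print(naive_sentiment("you're doing a great job"))           # → positive
print(naive_sentiment("oh sure you're doing a great job"))   # → positive (wrong)
```

Both sentences contain the word “great,” so both come back positive; the sarcasm carried by “oh sure” and by tone of voice is invisible to the scorer, which is exactly the gap between word-level sentiment and intent.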
Emotions and intent are not clear-cut, either. Alongside basic emotional understanding, edge cases provide endless nuance.
“Brits are really good at being aggressively polite, which is an interesting challenge,” says Graham Page, Global Managing Director of Media Analytics at Affectiva. “I’m going to be nice to you, but you can see from my face that I’m really unhappy about this. If they don’t have [understanding], it’s easy for systems to respond inappropriately and frustrate people.”
Despite the enormity of the challenge, companies like Affectiva are making headway. Page says his media analytics technology can extrapolate human emotions from video with accuracy equal to or better than a human coder. More impressively, the system can now operate in real-time. In a customer experience application, that means systems can understand what is being said and why someone is saying it.
This sounds like a technological renaissance. With a more nuanced understanding of emotions, cognitive states, and intents, AI tools could relieve the burden of service delivery entirely. But Page advises caution.
Understanding emotion and intent prompts the question, “What next?” At the coal face of service delivery, customer queries, complaints, and questions are rarely simple. To survive on the frontline, AI applications must be able to respond effectively and efficiently. This is a challenge orders of magnitude larger than preliminary understanding.
“The second challenge for these systems is creativity,” Page says. “They’re good at black and white problems. They’re good when there’s a defined answer. They struggle more where there is a degree of creative problem-solving.”
Here, advancement will be neither swift nor straightforward. “Right now, as soon as they realize that they’re not able to deal with a query, the best thing systems can do is hand it over to a person as quickly as possible,” Page says. “The quicker that happens, the happier people will be.”
An alternative, synthetic future
The idea of a centralized, all-knowing AI system like HAL 9000, the sentient ship computer from 2001: A Space Odyssey, dominates most discussions, but some practitioners believe the future lies not in concentration but in cooperation.
As many academics, technologists, and commentators have highlighted, artificial general intelligence (AGI) is a long way off — if it is possible at all. Indeed, the vast majority of today’s AI systems focus on one aspect of intelligence, isolating its characteristics and creating an evaluation parameter to train a neural network.
Their algorithms train on closely related data (pictures of faces, clips of speech, photographs of road markings). Each new batch of data improves the system’s pattern-matching accuracy. Adding more data doesn’t add new capabilities; it only improves existing ones.
The future may deliver not a centralized, all-knowing artificial mind, but a distributed mesh of smaller, ambient AI technologies, each cooperating and collaborating with its neighbors.
This mesh will exist both on the consumer’s and company’s sides. For example, Page says he can imagine a scenario where a dozen separate AI-ML systems cooperate to deliver a service or experience.
“People will bring lots of signals together using different AI systems in a multimodal approach,” he says. “AI systems are good at defined tasks. One of our systems is really good at spotting faces and expressions. Our system for measuring tone of voice is completely different. A system for measuring the emotional tone of written words is a separate system again.”
These applications tend to work independently, both at Affectiva and more widely. But the opportunity for integration and cooperation is huge.
Yet this scheme raises more questions than it answers. What is the right combination of technologies? Which ones matter more than others? How do you consolidate what they compute in a seamless way? In theory, there should be an efficient path to a winning configuration; in practice, finding one today is closer to a lottery.
Artificial general intelligence is decades, perhaps centuries, away; for now, it remains an academic abstraction. With potent computing power and advances in deep learning, its contours may begin to emerge in the near future. Until then, motivated businesses can reliably place their bets on narrow, use-case-specific AI to produce results that rival, and sometimes exceed, what humans do.

The short- and medium-term future of customer service is in narrow artificial intelligence. AI-powered virtual assistants, chatbots, and self-service tools, individually or integrated, are vastly capable of reducing cognitive overhead by sweeping away scenarios that involve repeated decisions and demand consistent results. Trained on well-labeled data, these tools excel at predicting and providing touchpoint-level support, moving people toward their goals with ease and efficiency. Over time, with advancements in areas like speech recognition, sentiment and emotion analysis, natural language processing, intent prediction, and dozens more, AI will become the mainstay of customer service and customer experience.