Conversational user interfaces

From Wikipedia, the free encyclopedia

Conversational user interfaces (CUIs) are platforms that mimic a conversation with a real human. Historically, computers have relied on graphical user interfaces (GUIs), which translate the user’s actions, such as pressing a “back” button, into commands the computer understands. While GUIs are an effective mechanism for completing computing actions, they impose a learning curve on the user.[1] CUIs instead allow the user to communicate with the computer in natural language rather than in syntax-specific commands.[2]

To do this, conversational interfaces use natural language processing (NLP) to allow computers to understand, analyze and create meaning from human language. Unlike word processors, which treat text as mere strings of characters, NLP considers the structure of human language (i.e., words make phrases, and phrases make sentences, which convey the idea or intent the user is trying to express). The ambiguous nature of human language makes it difficult for a machine to always correctly interpret the user’s requests, which is why the field has shifted towards natural language understanding (NLU).[3]

NLU enables sentiment analysis and conversational searches, in which a line of questioning can continue with the context carried throughout the conversation. It allows conversational interfaces to handle unstructured inputs that the human brain is able to understand, such as spelling mistakes or follow-up questions.[4] For example, through leveraging NLU, a user could first ask for the population of the United States. If the user then asks “Who is the president?”, the search will carry forward the context of the United States and provide the appropriate response.
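The context carry-over described above can be sketched as a minimal dialogue state tracker. This is an illustrative toy, not a real NLU system: production systems use trained models rather than the keyword matching and the invented class name used here.

```python
# Minimal sketch of conversational context carry-over (illustrative only;
# real NLU systems use trained models, not keyword lookup).

class DialogueContext:
    """Tracks the most recently mentioned entity across conversation turns."""

    def __init__(self):
        self.topic = None  # e.g. the country the user last asked about

    def interpret(self, utterance):
        # Naive entity detection: remember any known country that is mentioned.
        for country in ("United States", "France", "Japan"):
            if country.lower() in utterance.lower():
                self.topic = country
        # A follow-up like "Who is the president?" names no country, so the
        # stored topic supplies the missing context for the next query.
        return {"question": utterance, "topic": self.topic}


ctx = DialogueContext()
first = ctx.interpret("What is the population of the United States?")
follow_up = ctx.interpret("Who is the president?")
print(follow_up["topic"])  # the earlier topic, "United States", carries forward
```

The key design point is that context lives in the interface's state between turns, so the second question can be resolved without the user restating the subject.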

Conversational interfaces have emerged as a tool for businesses to provide consumers with relevant information efficiently and cost-effectively. CUIs give the end user easy access to relevant, contextual information without the complexities and learning curve typically associated with technology.

While there are a variety of interface brands, to date there are two main categories of conversational interfaces: voice assistants and chatbots.

Voice Assistants[edit]

Voice assistants are interfaces that allow a user to complete an action simply by speaking a command. Introduced in October 2011, Apple’s Siri was one of the first widely adopted voice assistants. Siri allowed iPhone users to get information and complete actions on their devices simply by asking.

Development has continued since Siri’s introduction to include home-based devices such as Google Home and Amazon Echo (powered by Alexa), which allow users to “connect” their homes through a series of smart devices and so broaden the range of tangible actions they can complete. Users can now turn off the lights, set reminders and call their friends, all with a verbal cue.

Conversational interfaces that utilize a voice assistant have become an efficient and popular way for businesses to interact with their customers, as the interface removes the typical friction in a customer journey. Customers no longer need to remember a long list of usernames and passwords for their various accounts, or wait on hold to ask a simple question; they simply link each account to Google or Amazon once.

Chatbots[edit]

Chatbots are web- or mobile-based interfaces that allow the user to ask questions and retrieve information. This information can be generic in nature, such as the Google Assistant chat window that allows for internet searches, or specific to a brand or service, allowing the user to check the status of their various accounts. Their back-end systems work in the same manner as a voice assistant’s, with the front end using a visual interface to convey information. This visual interface can be beneficial for companies that need to conduct more complex transactions with customers, as instructions, deep links and graphics can all be used to convey an answer. The complexity with which a chatbot answers questions depends on the development of its back end. Chatbots with hard-coded answers have a smaller base of information and a correspondingly narrower set of skills; chatbots that leverage machine learning can continue to grow and develop larger content bases for more complex responses.[5]
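A hard-coded back end of the kind described above can be sketched in a few lines. The keywords, replies and function name below are invented for illustration; the point is that the bot's "base of information" is fixed at authoring time, with unmatched questions falling through to a human agent.

```python
# Sketch of a hard-coded chatbot back end (illustrative; keywords and
# replies are invented). Unlike an ML-driven bot, its knowledge is fixed.

RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "balance": "Please log in to view your account balance.",
}

def answer(message):
    """Return a canned reply if a known keyword appears, else a fallback."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    # No rule matched: hand the conversation off to a human.
    return "Sorry, I don't understand. Transferring you to a live agent."

print(answer("What are your hours?"))
print(answer("Explain my mortgage options"))  # falls through to the handoff
```

Growing such a bot means hand-writing more rules, whereas a machine-learning back end generalizes from examples, which is why the latter can develop larger content bases over time.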

Increasingly, companies are leveraging chatbots to offload simple questions and transactions from human agents. These chatbots first attempt to assist the user, then transfer the customer to a live agent within the same chat window if the conversation becomes too complex.

References[edit]

  1. ^ "Conversational Interfaces: Where Are We Today? Where Are We Heading?". Smashing Magazine. Retrieved 2018-05-23.
  2. ^ Brownlee, John (2016-04-04). "Conversational Interfaces, Explained". Co.Design. Retrieved 2018-05-23.
  3. ^ Pan, Jiaqi (2017-08-25). "Conversational Interfaces: The Future of Chatbots – Chatbots Magazine". Chatbots Magazine. Retrieved 2018-05-23.
  4. ^ Lola (2016-10-05). "NLP vs. NLU: What's the Difference? – Lola – Medium". Medium. Retrieved 2018-05-23.
  5. ^ Onlim (2017-03-22). "How do Chatbots Work? – Chatbots Life". Chatbots Life. Retrieved 2018-05-23.