Have you ever listened to an adult talk to a small child? Their tone shifts, and they use shorter words and sentences. They repeat the same phrases: “Look at the dog, Billy, do you see the dog? It’s a friendly dog, isn’t it? A lovely dog. Do you see the dog, Billy?”
Now listen to them talk to their boss: “Yes sir, if you just look at this slide you’ll observe that we have increased our disruptive synergies across a number of verticals, leveraging our cognitive advantage for omnipresent brand awareness.”
As humans we use context so seamlessly we barely even notice. We automatically know how to adjust conversation for a noisy environment or for different people. We can sift between different meanings of the same word and pick the right one instantaneously.
Chatbots come with no automatic knowledge of how to use context, but since we as humans do, we can provide them with the information and tools they need to use context powerfully. Let’s explore how we can do that.
As humans we are always aware of our situational context — are we standing outside on a public street, or sitting in our own living room with friends? Is it first thing in the morning, or the middle of the night? It changes how we talk, and what we talk about.
Chatbots “live” in a number of different locations — on websites, through smart speakers, in apps. Knowing which channel your users are chatting through should be simple, and you may even be able to determine the specific device they are using, or their physical location. We can use this to shape different conversations for different channels, locations, or times of day. At a minimum, we can track where and how our chatbot is used so that our default conversations are shaped for its most common contexts.
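Even a simple channel check can shape a reply. Here is a minimal sketch of the idea — the channel names, message, and `shape_reply` helper are all invented for illustration, not part of any particular chatbot platform:

```python
# Hypothetical sketch: adapt one reply for the channel it will be
# delivered through. A voice channel has no screen, so we trim the
# reply to its first sentence; visual channels keep the full message.

def shape_reply(text: str, channel: str) -> str:
    """Adapt a reply for the channel it will be delivered on."""
    if channel == "smart_speaker":
        # Voice: keep only the first sentence, drop screen-only details.
        return text.split(". ")[0] + "."
    # Website widget, app, etc.: a screen is available, keep everything.
    return text

reply = "You're booked for 3pm. A map is shown below with directions."
```

Calling `shape_reply(reply, "smart_speaker")` here would return just `"You're booked for 3pm."`, while the website channel receives the full message.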
When we converse, we draw from a vast amount of information we have about the world. We assemble words into sentences without even giving it a second thought. We understand different meanings of words not only based on the sentence they are used in, but who is using them, where we are, and a knowledge of the world in general.
Chatbots have a limited vocabulary they can understand, and it comes from what we train them on. A really good set of training data, and especially the ability to add synonyms, will help your chatbot understand more variations of a phrase. We should gather as many of these as we can from research before we build our chatbot, but once it’s “in the wild” we should constantly review what isn’t being understood so we can keep adding to our training data and making our chatbot smarter. Be especially aware of any unusual language used by your target audience. There may be local dialects, or business-specific ways of referring to things.
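One common way to put synonyms to work is to normalize them before intent matching. This is a minimal sketch, assuming a hand-built synonym table — the words in it are invented for illustration; a real table would come from research and from reviewing what your users actually say:

```python
# Hypothetical sketch: map known synonyms onto one canonical word per
# concept, so the intent matcher only has to learn the canonical form.

SYNONYMS = {
    "pto": "vacation",
    "holidays": "vacation",
    "vacay": "vacation",
}

def normalize(utterance: str) -> str:
    """Lowercase, strip punctuation, and swap in canonical words."""
    words = (w.strip("?.,!").lower() for w in utterance.split())
    return " ".join(SYNONYMS.get(w, w) for w in words)
```

With this table, “How much PTO do I have left?” and “Book my holidays” both arrive at the matcher talking about “vacation”.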
If you spoke to someone who forgot everything you said from sentence to sentence it would be a very frustrating experience.
As well as understanding individual sentences, your chatbot should be able to keep some context during the conversation — even if just to keep track of data that has been collected. The more you can remember about what has been said before, the more helpful and personalized your chatbot can be. This can be a challenge, and will mean treading the line between being helpful and creepy (someone who remembers every single thing you ever said might be a little overwhelming). The best way to start to use this context is to collect analytics, as this will allow you to see common patterns that the chatbot can then anticipate and use to shape the conversation.
As humans we are very sensitive to different types of emotions in conversation, and how they are conveyed through vocal tone and wording changes. Computers are less good at that, but are improving all the time. Whether it is appropriate to incorporate this into your chatbot depends on how important it is to your chatbot’s tasks. If it is not a key part, then I would recommend not incorporating it immediately, given that it will likely require additional services. If you do use it, remember that the technology is still not perfect: it should be used carefully, and never relied on as your only signal.
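To make the “use it carefully, never rely on it alone” point concrete, here is a deliberately crude sketch. The word list is invented for illustration; a real bot would call a dedicated sentiment service, and even then treat the score only as a hint to combine with behavioural signals like repeated failed turns:

```python
# Hypothetical sketch: combine a rough wording check with behaviour
# (repeated failures) before deciding the user might be frustrated.

FRUSTRATION_WORDS = {"useless", "ridiculous", "terrible", "angry"}

def seems_frustrated(utterance: str, failed_turns: int) -> bool:
    """A hint, not a verdict: wording OR repeated failures."""
    words = {w.strip("?.,!") for w in utterance.lower().split()}
    return failed_turns >= 2 or bool(words & FRUSTRATION_WORDS)
```

Notice that two failed turns triggers the check even with neutral wording — behaviour is often a more reliable signal than detected emotion.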
This one is interesting… we have different conventions for interacting with different people — our boss, our friends, our children. At the moment there isn’t a social convention for interacting with bots, but as they become more popular we may see one emerge. It is definitely something to watch, because if one does emerge, it will change how people interact with chatbots.
Conversation is a collaborative act — just as your chatbot is getting context from your user, your user is getting context from your chatbot. Ensure that your chatbot is giving good guidance on what it can do so your users understand the boundaries of interaction. Be clear when there are issues on what options the user has (do they just need to repeat, or is the chatbot not able to take requests at all right now).
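A fallback message is one of the clearest places to give this guidance. Here is a minimal sketch — the capability list and the wording are invented for illustration — showing the difference between “repeat yourself” and “the chatbot can’t take requests at all right now”:

```python
# Hypothetical sketch: a fallback reply that tells the user what went
# wrong and what their options are, not just "I didn't understand".

CAPABILITIES = ["check your leave balance", "book a meeting room"]

def fallback_reply(service_down: bool) -> str:
    if service_down:
        # Be clear the problem is not the user's phrasing.
        return "I can't take requests right now. Please try again later."
    # Re-state what the bot can do, so the user can rephrase usefully.
    return "Sorry, I didn't catch that. I can " + " or ".join(CAPABILITIES) + "."
```

Re-stating capabilities in the fallback doubles as guidance on the boundaries of the interaction.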
Be very aware that language is reflective – your users are more likely to use the same words you do. You can use this powerfully to guide users towards the types of phrases the chatbot can understand, but with power comes responsibility. Never have your chatbot say anything it can’t understand.
Finally, use active listening – repeat back key parts of what the user has told you. Key parts are those you will take some sort of action on. This gives the user the context to judge whether the chatbot has understood, and to decide if they want to proceed with any actions.
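In code, active listening can be as simple as building a confirmation prompt from the details the bot is about to act on. This is a minimal sketch; the action, slot names, and phrasing are invented for illustration:

```python
# Hypothetical sketch: echo back the key details before taking action,
# so the user can confirm the bot understood correctly.

def confirmation_prompt(action: str, details: dict) -> str:
    """Summarise the collected details and ask before acting."""
    summary = ", ".join(f"{k} {v}" for k, v in details.items())
    return f"I'll {action} ({summary}). Shall I go ahead?"

prompt = confirmation_prompt("book your taxi", {"at": "5pm", "from": "the office"})
```

Here `prompt` reads “I'll book your taxi (at 5pm, from the office). Shall I go ahead?” — the user can catch a misheard time before anything happens.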
Context is the “secret sauce” of any great conversation, and one we know so well as humans that we never even think about. Teach your chatbot to use it well and it will delight your human users.
Gillian Armstrong is the Lead Conversational AI Specialist at Workgrid Software. She’s passionate about using new technologies to improve users’ lives and is currently leading a team that is exploring the use of cognitive technologies within the enterprise to improve productivity and augment human intelligence. You can connect with Gillian on Twitter and Medium.