Improve your chatbot by analyzing user messages in 5 steps (Part 2)

This is Part 2 in our post series about improving your chatbot by analyzing user messages. (You can read Part 1 here.)

Social impact organizations need to pay attention to what users say when communicating with their chatbot. Users often reply in natural language, also known as “free-text” or “open” responses. Organizations may deliberately elicit open responses to gather feedback or to fill out forms. Users may also try to chat with a bot in natural language, whether the bot prompts for it or not. All of these responses give nonprofits and other organizations an opportunity to gain insight into the most critical aspects of their chatbot.

In Part 1 of this post series, we described a process to systematically collect and organize free-text user responses. Once that’s done, though, you need to know what to look for to get the most value out of them. We’ll look at 5 key areas to focus on when analyzing natural language responses for insights that can improve your chatbot.

Questions / Queries

Even with menu-driven chatbots, people will still have questions that aren’t addressed or aren’t easy to find. Questions provide insight into what users care about and want to know. In educational bots we have built for nonprofit organizations, we noticed users looking for information on highly relevant topics that were not part of the projects’ initial scope. We’ve used these insights to add new features and develop FAQ sections for the bots.

In Maya, a chatbot we developed to educate Nepali youth about the risks of human trafficking, the initial scope of the project focused on basic information about what trafficking is and how to stay safe. Users asked many additional questions, such as requests for statistics on trafficking and how domestic violence relates to trafficking. Adding content to answer these questions not only improved engagement with the bot but also deepened the learning Maya could support.

Navigation

Navigation is critical to a chatbot’s success. Analyzing messages shows where users are trying to switch conversation paths or return to certain menus. Users give up quickly when they can’t find what they are looking for. Navigation issues need to be fixed, but it can be hard to understand why users aren’t reaching certain content.

Even simple menu-based bots are not immune to navigation errors. In one bot we worked on, we noticed that users typing “C” (with quotation marks) to select option C triggered the fallback response, because the system only recognized option C without quotes. It would have been difficult to spot this problem without analyzing natural language responses.
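As a minimal sketch (not the actual fix we shipped), here is one way to normalize menu replies before matching them, so that responses like “C”, ‘c’, or “ c. ” all resolve to option C instead of triggering the fallback. The function names and option set below are hypothetical.

    import re

    # Hypothetical menu for illustration
    MENU_OPTIONS = {"A", "B", "C"}

    def normalize_choice(text: str) -> str:
        """Remove quotes, whitespace, and stray punctuation before matching."""
        return re.sub(r"['\"\s.):]+", "", text).upper()

    def match_menu_option(text: str):
        choice = normalize_choice(text)
        # None falls through to the bot's fallback reply
        return choice if choice in MENU_OPTIONS else None

    assert match_menu_option('"C"') == "C"
    assert match_menu_option(" c. ") == "C"

The point is simply to be forgiving about how people type a menu choice, and to log whatever still falls through so you can spot the next surprise.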

Small Talk

A chatbot needs to be able to respond to small talk. These messages may not ask for relevant information, but a good response improves the user’s conversational experience and increases the chance they will stick around. Users bring their experience of talking to people into their interactions with chatbots, so they expect human-like touches such as greetings, introductions, and closings. When working on bots at Tangible AI, we maintain a list of common intents that chatbots should be able to respond to, such as “who are you”, “continue”, and “thank you”.
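As a rough illustration (not Tangible AI’s actual implementation), a small talk handler can be as simple as a lookup that checks a message against known phrases before handing it off to the rest of the pipeline. The phrases and replies below are placeholders.

    # Hypothetical small talk lookup; a production bot would more likely use an
    # intent classifier, but the idea is the same.
    SMALL_TALK_RESPONSES = {
        "who are you": "I'm a chatbot built to help you learn. What would you like to know?",
        "thank you": "You're welcome! Type 'menu' any time to keep going.",
        "hello": "Hi there! Ready to continue?",
    }

    def small_talk_reply(message: str):
        """Return a canned reply if the message matches a known small talk phrase."""
        text = message.strip().lower()
        for phrase, reply in SMALL_TALK_RESPONSES.items():
            if phrase in text:
                return reply
        return None  # not small talk; pass the message to the rest of the pipeline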

Aside from basic small talk, we have also noticed that users across our bots are concerned about their privacy. They ask “Is my conversation private?” or “Can I talk with you privately?” A poor response to such an important question will certainly make a user uncomfortable. You need to help users understand how their conversation with a chatbot is handled.

Feedback

Users might also provide feedback on their experience with the chatbot. The feedback might come from a survey within the chatbot itself, or a user might simply volunteer feedback unprompted.

These messages can be positive or negative. With Maya, users left encouraging messages saying that certain parts of the bot were great and, in particular, that they learned a lot. They also left messages critiquing the activities the bot provided, or expressing their confusion. Both complimentary and critical feedback gave us a better understanding of how the experience was working. We learned how to improve our activities, and we learned why other components of the bot worked well, which helped us improve our development going forward.

Out of Scope/Spam

Some messages users send will not be relevant to the purpose of your chatbot. Sometimes users may not understand what the chatbot is for, and they may ask for information or functions that aren’t available, like local daily news. Users may also just want to test the chatbot or play around with it. Unfortunately, it’s also common for chatbots to face harassment. In most cases, there isn’t much you can do to reduce spam messages, especially for a bot that targets a broad audience.

To get the most out of their chatbots, nonprofits need to work with the natural language responses their chatbot doesn’t recognize. While it can be tempting to focus on a chatbot’s successful exchanges, the messages it doesn’t know how to handle offer insights into what’s not working. More importantly, free responses are direct communication from users, and they provide a rich source of information for planning how to make your bot more effective for your audience. Implementing a process for handling natural language responses matters for the same reason the bot exists: to serve your users. With a clear process, these messages can show you how well your chatbot is accomplishing its purpose and where its most critical aspects need work.
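If you don’t yet have a collection process like the one described in Part 1, a minimal starting point is to log every message your bot fails to handle, along with where in the conversation it happened. The sketch below is only an assumption about how you might do that, not a description of any specific tool; the file name and fields are illustrative.

    import csv
    from datetime import datetime, timezone

    # Hypothetical log of unrecognized messages for later review
    UNHANDLED_LOG = "unhandled_messages.csv"

    def log_unhandled(user_id: str, message: str, bot_state: str) -> None:
        """Record a message the bot could not handle, with context for analysis."""
        with open(UNHANDLED_LOG, "a", newline="", encoding="utf-8") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),
                user_id,
                bot_state,   # where in the conversation the user was
                message,
            ])

Reviewing a log like this regularly is what surfaces the questions, navigation problems, small talk, feedback, and spam described above.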
