Chatbots are a growing research topic. In the spotlight since the advances in artificial intelligence of recent decades, conversational agents (CAs) occupy a growing place in the scientific literature, in information systems, computer science and human-computer interaction, among other fields. Here we set out the lessons these studies offer for building more effective chatbots.
Inefficient until now?
One of the first lessons to draw from the scientific literature and from the experiments conducted so far – both in the laboratory and with the general public – is that most chatbots put into service have failed to meet the expectations of their users or their creators. As a result, the majority of chatbots implemented in the past no longer exist, either because they were disappointing or because they became outdated for lack of follow-up to keep them current.
As users become more demanding in terms of speed and service customization, however, companies and researchers have intensified their efforts to take this technology to the next level: a reliable, user-friendly and scalable conversational agent with minimal production costs. This last point matters because the prospect of effective customer service at low cost is the main driver of research in the field.
If such a chatbot seems feasible today, it is because there have been many failures in the past. We explain them here so that you do not have to repeat them.
Not too human
In the quest for an ideal conversational agent, it seems obvious at first glance that the best results will come from an interaction similar to the one we have with another human being. This, however, is one of the biggest mistakes that has been made. Indeed, as long as the illusion is not perfect – that is, as long as the customer can suspect that a robot, not a human, is responding – too much resemblance to a human being will be counterproductive. The chatbot's imperfection will lead, through interaction, to disappointment or even rejection and disgust. This can threaten customer loyalty and trust in the company, and therefore result in an economic loss. Rather than trying to simulate a human interlocutor, whether by adopting an avatar or mimicking natural language, it is better to acknowledge from the start that the chatbot is a robot, and to take advantage of that.
Indeed, user experiences are more positive when the chatbot laughs off its condition as a robot rather than denying it. A chatbot capable of self-deprecation is more likely to amuse the user. In addition, if users know from the beginning that they should not expect to talk with a human, they are more likely to take the experience as a game and less likely to be disappointed. Research has clearly established that chatbot architectures and algorithms are not yet developed enough to sustain the illusion of a human agent. Moreover, beyond the purely technical aspect, the 'social' science of the human-computer interface is also not developed enough to ensure a smooth conversation.
Avoid the famous 'uncanny valley' – the name given to the phenomenon whereby a robot that looks too much like a human being, while remaining imperfect, provokes a 'creepy' feeling. Instead, embrace the robotic character of your chatbot, and even make it a strength.
Understand what users want
Often, a chatbot fails not because of a technical defect or a lack of fluency in the conversation, but because users do not find what they are looking for. It is therefore very important, before building a chatbot, to ask yourself what users will want when they contact your department and interact with it. This reflection should come first; only then should the technical architecture and conversation strategy be aligned with it. The design phase must not be neglected, or your chatbot will disappoint even with the best technologies behind it. In addition, few active chatbots show real coherence between their medium and their interaction design, yet scientists believe that coordinated research between information systems and human-machine interaction could be the key to an effective chatbot.
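To make this design-first approach concrete, here is a minimal sketch (all intent names, keywords and handlers are hypothetical) of enumerating what users actually come for before choosing any technology, then routing each message to a handler with an explicit fallback instead of a dead end:

```python
def track_order(msg: str) -> str:
    return "Let me look up your order."

def return_item(msg: str) -> str:
    return "Here is how to start a return."

def fallback(msg: str) -> str:
    return "I can help with orders and returns. What do you need?"

# Intents listed during the design phase, before any NLP choice.
INTENT_KEYWORDS = {
    "track_order": ("order", "delivery", "shipping"),
    "return_item": ("return", "refund", "exchange"),
}

HANDLERS = {"track_order": track_order, "return_item": return_item}

def route(message: str) -> str:
    """Match the message to a designed intent, or fall back gracefully."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return HANDLERS[intent](message)
    return fallback(message)
```

The keyword matching here is a placeholder; the point is that the intent list and the fallback are design decisions made before the matching technique is picked.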
Finally, if you embark on the chatbot adventure, be prepared to invest enough energy to make it genuinely interesting, i.e. able to hold a long conversation and to react to context by picking up on users' emotions, in order to steer the conversation in the desired direction. Chatbots too often end up sloppy, or reduced to a sales interface. Think of your chatbot as a conversation machine in its own right, in which the commercial or advertising aspect is only one branch of a whole conversation tree – the visible tip of the iceberg – reached only when the conversation has led to that point and the customer has expressed that intention.
Research is still working to establish a true science of chatbots. In the meantime, you can already learn from past studies to avoid common mistakes and place yourself at the forefront of communication technologies.