Your AI Likely Won’t Kill Anyone, but Build Chatbots and Services to Avoid the Risk


Death by AI is a tragic tale, and there are several high-profile instances, usually involving self-driving cars or robots. Not because the AI was wrong, but because the human developers gave it the wrong information or didn't enable it to act in a safe manner. Your chatbot or AI service might not put people in the firing line, but build it like it might, just to be safe.

The latest report to make the press about a terrible loss of life due to an AI's poor decision-making, covering the 2018 crash of one of Uber's fleet of test self-driving cars, makes for grim reading. Even though the AI has since been updated, the victim can't be rebooted.

“For one, the self-driving program didn’t include an operational safety division or safety manager. The most glaring mistakes were software-related.

Uber’s system was not equipped to identify or deal with pedestrians walking outside of a crosswalk. So, despite the fact that the car detected a 49-year-old woman named Elaine Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s Vehicle Automation Report reads.”
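The design lesson is simple: when safety is at stake, don't let the system's uncertainty block its safest action. Here is a minimal sketch, in Python, of a fail-safe rule along those lines. It is not Uber's actual logic, and every name and number in it is a hypothetical assumption: any tracked object on a collision course triggers braking, however the classifier has labelled it from frame to frame.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str                # "vehicle", "bicycle", "other", ... (may flip-flop)
        seconds_to_impact: float  # time until predicted collision

    BRAKE_THRESHOLD_S = 6.0       # assumed reaction budget, not a real spec

    def plan_brake(detections: list[Detection]) -> bool:
        """Brake on any object on a collision course, whatever its label."""
        return any(d.seconds_to_impact < BRAKE_THRESHOLD_S for d in detections)

    # The Uber car first detected Herzberg 5.6 seconds before impact; under
    # a rule like this, the flip-flopping label would have been irrelevant.
    assert plan_brake([Detection(label="other", seconds_to_impact=5.6)])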

Build your bots like they matter

With that in mind, and with other robot-involved deaths on record, you might still not see any obvious way your customer service chatbot or AI tool could lead to someone's death. But consider a banking chatbot dealing with someone mired in debt and frustration, or a medical chatbot or AI GP triage bot that misses a vital clue and fails to refer someone to a specialist.
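One cheap defence is a hard escalation rule: whenever the conversation touches a high-risk topic, or the bot isn't sure what the user means, hand off to a human. The sketch below assumes a hypothetical intent classifier that returns a confidence score; the keyword list and threshold are illustrative, not recommendations.

    RISK_TERMS = {"chest pain", "overdose", "bailiff", "repossession"}
    CONFIDENCE_FLOOR = 0.8

    def should_escalate(message: str, intent_confidence: float) -> bool:
        """Hand the chat to a human on risk or on doubt."""
        text = message.lower()
        if any(term in text for term in RISK_TERMS):
            return True                               # risk trumps confidence
        return intent_confidence < CONFIDENCE_FLOOR   # unsure? hand off

    # Even if the classifier is 99% sure this is a routine "payments"
    # query, the risk term forces a human hand-off:
    assert should_escalate("I've missed payments and the bailiffs are coming", 0.99)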

These are all possibilities, as are failures in AI systems that deal with computer vision or augmented reality, where the code might miss something important or distract the user at a critical moment. In short, if it can happen, it will happen eventually, especially across the millions of chats a bot handles over the years, or across large numbers of hours in the field.

So, even if your business can't see a way the bot could do harm, there is no harm in appointing a safety officer to consider the impacts, or even just someone to play devil's advocate and look for ways a person could come to harm that your typical coder, tucked safely in an office, might not consider.

Then there are the users. You might think a few hours of testing with real people is enough to prove your system works. But as the complexity of the bot rises and the number of options within a chat expands, especially as self-learning AIs generate new answers, advice or possible solutions to user questions, the chance of an unfortunate and damaging chain of events increases.
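Automated exploration helps here. The toy fuzzer below assumes a chatbot whose flow can be modelled as a simple state graph, which is a big simplification of a self-learning system; a random walk through it exercises thousands of conversation paths that a few hours of human testing never would, so odd chains of replies can be flagged for review.

    import random

    # Hypothetical flow: each state maps the user's choices to the next state.
    FLOW = {
        "start":   {"balance": "balance", "loan": "loan", "help": "help"},
        "balance": {"back": "start", "loan": "loan"},
        "loan":    {"accept": "end", "back": "start"},
        "help":    {"back": "start"},
        "end":     {},
    }

    def fuzz_paths(runs: int = 10_000, max_steps: int = 20) -> set:
        """Random-walk the flow and collect every distinct path taken."""
        seen = set()
        for _ in range(runs):
            state, path = "start", ["start"]
            for _ in range(max_steps):
                options = FLOW.get(state, {})
                if not options:
                    break
                state = options[random.choice(list(options))]
                path.append(state)
            seen.add(tuple(path))   # review the unusual ones by hand
        return seen

    print(f"{len(fuzz_paths())} distinct conversation paths exercised")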

Finally, there are the obvious steps that many bots can take to help but often don't, because the developers consider them trivial. You will note that Google searches, Twitter and other social media provide links to suicide helplines when the subject comes up. There will be plenty of false positives as people talk about the movie Suicide Squad or the band Suicidal Tendencies, but those links also provide useful information about helping others, just in case.
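The pattern is easy to copy. A deliberately over-sensitive keyword check like the sketch below (the terms and wording are illustrative only) prepends helpline information rather than replacing the bot's reply, so a false positive costs nothing more than one extra line of text.

    CRISIS_TERMS = ("suicide", "suicidal", "kill myself", "end my life")

    # Illustrative wording; a real deployment would use a vetted,
    # localised helpline message.
    HELPLINE_NOTE = ("If you or someone you know is struggling, free and "
                     "confidential helplines are available around the clock.")

    def maybe_add_helpline(message: str, reply: str) -> str:
        """Prepend helpline info on a match; never suppress the normal reply."""
        if any(term in message.lower() for term in CRISIS_TERMS):
            return f"{HELPLINE_NOTE}\n\n{reply}"
        return reply

    # "Loved the Suicide Squad movie!" also triggers the note; that false
    # positive costs one extra line of text, nothing more.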

Doing some good because you can outweighs doing nothing because harm might never happen. Teaching a bot to look for signs of customer stress or distress might not be in the brief, but enabling it to provide a warm response or a link to someone who can help costs little. Even for a small bot serving a modest customer base, the one time it provides that help could make a big difference.

Bots will cause people harm, through either their action or their inaction, and however faithfully they are guided by Asimov's Laws of Robotics, the fault will come down to design decisions made by people. Keep that in mind when it comes to your chatbot and AI plans.

