Chatbot Identification Now Law in California and Expected to Spread


A clampdown on video face recognition systems is spreading across major cities, as politicians, police and others continue to argue its merits. Meanwhile, a tightening of rules on other AI and smart technologies is on the way. Chatbots are the latest to feel the effect, with a California law banning bots that fail to identify themselves as such.

When any new technology arrives, there are people who want pragmatic laws to protect users and bystanders – think of the red flags that had to be waved in front of early cars. There are also misguided laws passed to try to protect vested interests or big business – look at all the roadblocks thrown in front of solar power and electric cars to keep big oil happy.

With digital technology, there may be no obvious physical hazard, nor a line of legacy business to protect, but assuming that means no law is needed would be wrong. The chatbot law was written on the back of the political shenanigans of the 2016 US elections, in an attempt to prevent further foreign or fraudulent interference – this in a country plagued by robocalls and other harassment during every election.

In force since July 1st, the law's passage through the California legislature showed what a hot and controversial topic chatbots are. Along the way, the bot law was watered down, facing opposition from local tech companies and from politicians who want to present a fake face to voters.

That aside, most chatbots are built with honest intentions. Yet, as chatbots become more popular, some creators may think it is okay to pretend the bot is a human agent – perhaps to create a false sense of reassurance, or to capture information from people who think they are talking to a real person.

Breaking the bot law?

As with many laws in the digital realm, actually policing this law, even in its watered-down form, is largely impossible. What happens if the bot was created outside California or America? What’s the point of prosecuting if the bot was created years ago, and exists on some niche website?

However, it is a warning shot, both symbolic and practical, reminding chatbot creators and developers to do the right thing now, before their bots get too big or complex, rather than having to retrofit this simple identification feature later. For some insight into the law and its potential ramifications, see “Will California’s New Bot Law Strengthen Democracy?”

From a practical aspect, chatbot builders should follow the advice of Avi Ben Ezra, CTO of SnatchBot: ‘The best practice is to always have your bot introduce itself as a chatbot. It can do so in a way that has personality (sombre for a banking chatbot, cheerful for a customer service chatbot, etc.) but the fact that it is a chatbot should be stated up front. Our experience shows that in many cases customers prefer talking to a chatbot and there is no legitimate reason for disguising the chatbot as a person.’
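As a sketch of that advice, a first-message handler can bake the disclosure into every greeting, varying only the persona's tone. The function and message text below are purely illustrative assumptions, not part of any real chatbot platform's API:

```python
# Hypothetical sketch: the bot's opening line always self-identifies it as a
# chatbot, while the wording carries the persona (sombre, cheerful, etc.).

GREETINGS = {
    "banking": "Hello. I am an automated chatbot for Example Bank. How may I assist you today?",
    "support": "Hi there! I'm a chatbot, and I'm happy to help with your order. What can I do for you?",
}

def opening_message(persona: str) -> str:
    """Return the bot's first message; every variant states up front that it is a chatbot."""
    return GREETINGS.get(persona, "Hello, I'm an automated chatbot. How can I help?")
```

Keeping the disclosure in a single, always-used greeting function makes it hard to ship a flow that skips it.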

Watching out for bad bots

Until a law with more teeth is passed, perhaps by the EU, which has a keen interest in online safety and trust (see the GDPR rules), it is still open season for bots that try to create fake outrage, canvass opinions through misrepresentation and pull other tricks. That will be largely for political gain, but it is just as easy to imagine legacy businesses peddling fake news or opinion as fact to try to win favor.

While some governments and totalitarian states will try every trick in their power to keep people in line, in most regions trust is essential for the healthy respect and acceptance of chatbots. Bots, along with systems like the controversial AI-powered surveillance tools, will soon be on the front lines of services for millions or billions of citizens and consumers, and that need for trust makes a strong legal basis for their operation imperative for all.

It might take a year or two, but expect more laws around all classes of AI-powered services, put in place to ensure good business behavior, with fines similar to the recent British Airways data breach penalty. These should incentivize businesses of all sizes to follow the rules and encourage well-thought-out plans for bots and AI services.

In the meantime, every business should plan its chatbots as if these rules were already in place.
