Avoiding Chatbot Crime: The Burden of Proof on Your Chatbot and Its Users


Cyber crime, deep fakes, AI heists and political hacks all erode our trust in technology. Chatbots will be on the frontline of the next generation of customer/consumer crime, and your bot needs to be ready with a range of bona fides to build trust.

Imagine you are at work in your accounts/finance office and you pick up the phone (assuming you still do, with all the robo/scam calls going on). The boss is on a crackly line, demanding you transfer a large sum to seal a big deal in a do-it-now-or-we’re-all-fired manner. That is starting to happen for real, except it isn’t your boss; it’s an AI voice emulator that sounds like your boss, perhaps trained on a speech he gave online or a few YouTube clips.

This is real crime today, and if someone can be fooled and pressured into sending money like that, then you can see why criminals will be throwing their efforts into more digital crime beyond bitcoin scams and ransomware attacks.

It was the Chatbot what dunnit!

Chatbots, increasingly used to ask personal questions or handle financial transactions, are a tempting target. No need for a deep fake photo, video or voice; just an easily faked string of text with a payout at the end. Consider the growing number of chatbots for political or charitable causes, those linked to web stores, or those asking for a booking or credit card number before securing a restaurant table.

All it takes is a cloned/fake website, app or social media page and criminals could be making money off the back of your business, no matter how small. At a more personal level, consider a chatbot providing a general “attitudes” survey as part of consumer research or from your flavour of political organisation.

All it takes is a completely legitimate-sounding survey, promoted on social media, through spammy adverts or directly from a fake source, or for genuine survey data to be intercepted or stolen by criminals. Next thing you know, an email arrives threatening to share your views on politics, sexual preferences or the state of your mental health with friends, family, your boss and so on. That’s already a common tactic against anyone who has shared saucy pics that were later stolen in hacks, and it has created a lucrative extortion market – one that often works even when no photos exist.

Either way, we will soon have similar endless, automated messages reaching millions of people, while businesses will face a string of customer complaints, tabloid tales of woe and other reputational damage.

Therefore, as the chatbot becomes the first point of contact for growing numbers of businesses, we need methods to prove that the business is legitimate (and the customer is real too) and that the connection is truly secure. The obvious solution is to bake the chatbot point-of-access into a genuine app, which is easy for the banks and airlines of this world, who already use apps to track and trade with the majority of customers.

For those without an app, social media tools like Facebook Messenger add a layer of security and authenticity for most companies, local or major, and for the customers who deal with them. Beyond this, businesses can spend some of their IT department or “security time” looking for fake sites or profiles that might lead customers astray.

Future protection for chatbots

Beyond current efforts, making chatbots secure and identifiable will be the work of security add-ons, either using blockchain to create proof of ownership, governance and transaction records, or adding further layers of authentication.

Apple iPhone users might log in to banking chatbots using Face ID on top of the usual app login to ensure the right person is in touch, and voice chat to speed up the conversation could offer another avenue for security (the whole fake-voice issue aside). Multiple proofs of identity may be needed in a crime-heavy future for major apps, while local stores and websites might rely on SMS access codes.
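To make the SMS access code idea concrete, here is a minimal sketch of one-time-code verification a chatbot backend might run. It assumes a hypothetical send_sms() helper standing in for whatever SMS gateway you use, and an in-memory store that a real deployment would replace with something persistent:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes
_pending = {}  # phone number -> (code, expiry timestamp)

def send_login_code(phone: str, send_sms) -> None:
    """Generate a 6-digit one-time code and text it to the user.

    send_sms is a hypothetical stand-in for your SMS provider's API.
    """
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone] = (code, time.time() + CODE_TTL_SECONDS)
    send_sms(phone, f"Your chatbot login code is {code}")

def verify_login_code(phone: str, submitted: str) -> bool:
    """Check the submitted code in constant time; codes are single-use."""
    code, expires = _pending.get(phone, ("", 0))
    if time.time() > expires:
        return False
    ok = hmac.compare_digest(code, submitted)
    if ok:
        del _pending[phone]
    return ok
```

The constant-time comparison and short expiry are the important design choices: they stop attackers timing their way to a code or replaying an old one.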

All of these methods can be added to chatbot tools via APIs or app upgrades, so there is no major technological barrier to more secure bots, but developers need to plan ahead and prepare now for a high-security future. Build bots with upgradability in mind, focus on ways for customers to identify themselves, and ensure all documentation and introductions promote good security practice.
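On the “truly secure connection” side, many chat platforms sign their webhook calls so your bot can prove a message really came from the platform and not a cloned source. Here is a minimal sketch of that check, assuming an HMAC-SHA256 scheme of the kind Facebook Messenger uses; the header format and APP_SECRET handling are illustrative, so check your platform’s documentation:

```python
import hashlib
import hmac

APP_SECRET = b"your-app-secret"  # placeholder; load from secure config

def is_genuine(raw_body: bytes, signature_header: str) -> bool:
    """Verify an incoming webhook signature over the raw request body.

    signature_header is expected to look like 'sha256=<hex digest>'.
    """
    expected = hmac.new(APP_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature_header)
```

Dropping any request that fails this check is cheap insurance against spoofed traffic pretending to be your chat platform.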

Does this apply to your business? Well, you can bet the likes of Monroe County School District in Florida didn’t expect to be the victim of ransomware. Miley Cyrus and other celebs didn’t expect to have their racy pics stolen from their phones. No one expects to be a victim of digital crime, but anyone operating a chatbot is putting up an advert to criminals, and they will take an interest.

