The Risks of Chatbot Data Breaches and Privacy Issues Made Clear


With the news that Delta Air Lines is filing a lawsuit against its chatbot provider, amid endless IT breaches and disasters, the reality is now starkly clear: chatbots need to be secure and well managed to protect both the business and its customers.

The cloud is easy and seductive: sign up for a service, create something amazing and off you go. That flexibility and access have been a huge boon, driving startups and helping departments get ahead of their plodding IT departments. However, in the rush to adopt cool AI and chatbot products, or to use the cloud for storage and third-party solutions, the need for cast-iron security becomes all the greater, and most businesses lack the expertise to manage that facet.

This issue was brought to light by US airline Delta filing suit against [24]7.ai, claiming the provider lacked proper security procedures for its product, allowing hackers to alter the chatbot’s source code, and that it delayed admitting the breach to the airline by some five months. [24]7.ai’s site has no mention of security beyond a front-page post about support for GDPR, but the breach happened before that regulation came into force, and GDPR has limited impact on US businesses working with US clients.

The case brings into focus the risk for any business that decides it wants a chatbot or to operate an AI service that stores customer data or personal details. The business needs to understand where all that data resides, how it is protected and who is responsible should there be an issue.

Looking for data trouble

One of the most popular tactics for casual and committed hackers alike is to search Amazon AWS for open buckets of data, yielding a weekly treasure trove of private, sometimes critical, data that businesses leave exposed. Reading through this and related news pieces, such as the Capital One breach, highlights how serious the problem is.
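An “open bucket” is usually just a permissive access control list. As a minimal sketch of what an internal audit might check, the function below flags S3 ACL grants that make data world-readable. The grant dictionaries mirror the shape returned by boto3’s `s3.get_bucket_acl(Bucket=name)["Grants"]`; the sample ACL itself is invented for illustration.

```python
# Illustrative sketch: flag S3 ACL grants that expose a bucket publicly.
# Grantee URIs AWS uses for "anyone" and "any authenticated AWS user".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the subset of ACL grants that make data world-readable."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]

# Hypothetical ACL for a misconfigured bucket (invented sample data).
sample_acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]

print(public_grants(sample_acl))  # the single world-readable READ grant
```

In a real audit the grants would come from a live boto3 call per bucket; the point is that the exposure is detectable with a few lines of code, which is exactly why hackers scan for it at scale.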

Any business working with a chatbot provider needs to know:

  • What data is stored?
  • Where is it kept?
  • Who has access to it?
  • Who is responsible for managing it?
  • How long is the data stored?
  • What personally identifiable data is kept?
  • How is security monitored, and by whom?
  • What steps are in place in case of a breach?
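The retention question in the list above is one of the easiest to turn into an automated check. As a minimal sketch, assuming a hypothetical 90-day retention policy and an invented record layout, the snippet below flags chat transcripts that have outlived the policy:

```python
# Illustrative sketch: flag stored records that exceed a retention policy.
# The 90-day window and the record layout are assumptions for the example.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)

def overdue(records, now):
    """Return IDs of records whose stored_at exceeds the retention window."""
    return [r["id"] for r in records if now - r["stored_at"] > RETENTION]

now = datetime(2019, 6, 1)
records = [
    {"id": "chat-001", "stored_at": datetime(2019, 1, 15)},  # 137 days old
    {"id": "chat-002", "stored_at": datetime(2019, 5, 20)},  # 12 days old
]

print(overdue(records, now))  # ['chat-001']
```

A provider that cannot answer the retention question in a form this concrete probably cannot enforce it either.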

A fuller cloud-security review also covers access rights, identity management and other issues, all of which need to be addressed. None of this is rocket science: even a startup with limited technical resources needs to ask the right questions and check for this key information when looking for services or partners. As an example, here is how the SnatchBot service describes its bot security:

The end client’s chatbot is hosted in the cloud by Amazon AWS. This provides state-of-the-art security, which consists of network isolation via Virtual Private Cloud; security groups; AWS IAM-based resource-level role permission controls; encryption at rest using AWS KMS or Oracle/Microsoft TDE; SSL protection for data in transit.

This leads to compliance with: CJIS; CSA; Cyber Essentials Plus; DoD SRG Levels 2 and 4; FedRAMP SM; FERPA; FIPS 140-2; FISMA and DIACAP; GxP; HIPAA; IRAP; ISO 9001; ISO 27001; ISO 27017; ISO 27018; ITAR; MPAA; MTCS Tier 3 Certification; NIST; PCI DSS Level 1; SOC 1/ISAE 3402; SOC 2; SOC 3.

That doesn’t mean the data can never be accessed in some way, but it reduces the risk hugely and should be only the starting point for your business’s discussion of security and customer-data protection.
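One of the controls listed above, SSL protection for data in transit, can be enforced at the storage layer too. A common AWS pattern is a bucket policy that denies any request not made over TLS; the bucket name below is a placeholder, and this is a sketch of the pattern rather than any provider’s actual policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-chatbot-logs",
        "arn:aws:s3:::example-chatbot-logs/*"
      ],
      "Condition": {
        "Bool": {"aws:SecureTransport": "false"}
      }
    }
  ]
}
```

With this in place, even a client misconfigured to use plain HTTP is refused, rather than quietly sending customer data in the clear.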

As chatbot and AI services become ever more reliant on live data, there is also the risk of that data being intercepted in transit through weaknesses in endpoints, which is why encryption is vital. And there is always something as simple as a hacker scraping or watching a customer’s screen as they type – something no bot can mitigate.
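Transport encryption protects confidentiality, but services also need to detect tampering with messages en route. As a minimal sketch using only the Python standard library, the snippet below signs a payload with an HMAC and rejects any altered copy; the shared key and payload are invented for the example, and real deployments would rely on TLS plus properly managed keys rather than a hard-coded secret.

```python
# Illustrative sketch: detect in-transit tampering with an HMAC tag.
import hashlib
import hmac

key = b"shared-secret-key"  # assumption: provisioned out of band, never hard-coded

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag for the payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Constant-time check that the payload matches its tag."""
    return hmac.compare_digest(sign(payload), tag)

original = b'{"card_last4": "1234"}'
tag = sign(original)

print(verify(original, tag))                   # True: untouched payload
print(verify(b'{"card_last4": "9999"}', tag))  # False: altered in transit
```

`hmac.compare_digest` is used instead of `==` so the comparison does not leak timing information to an attacker probing for valid tags.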

The key is to ensure the business and provider are doing the best they can to protect data, and to be open with customers about how it is handled and what their rights are. Failure to do so adds to the risk and, if something does go wrong, will leave the business responsible and liable, whatever the fancy small print on a website says.

Ultimately, security is a never-ending battle, and just because the cloud “looks easy” to end users and most businesses, there still needs to be a degree of security knowledge and responsibility within the business, whether in-house or through a third-party expert or service.

