Bots have become a crucial component in driving our real-time economy forward, and it is clear that their use will only grow in the coming years.
With the help of bots, businesses can be more agile, make better decisions, provide a superior customer experience, and boost overall efficiency. However, as they expand their bot operations, organisations should also invest resources in ensuring that this technology is not put to the wrong use. Recent examples of bot misuse hitting the headlines include the facilitation of election interference as well as the aiding of corporate espionage by none other than McDonald's!
Considering this, let’s examine how businesses are using bots (both functionally and operationally). Let’s take a closer look at whether they’re prioritising compliance and ethical use, and whether external regulation is the best way forward for enterprise bot operations. I won’t rely on assumptions to answer these questions but, rather, on years of experience combined with cold, hard evidence! This specifically includes recent Bright Data and Vanson Bourne research, which surveyed hundreds of US and UK executives (C-level, directors and senior managers) from IT, tech and financial services organisations that use bots.
So, how are businesses using bots?
Currently, businesses are using bots across a huge number of functions. Our research found that the most common applications were related to customer service (76%) and data (57%). Other common uses include cybersecurity (51%), the automation of back-end administrative tasks (35%), automated trading (23%), and social media engagement (22%). Interestingly, almost all respondents reported using bots to collect publicly available information for data-driven insights (94%).
Given the speed at which AI is advancing, we can soon expect businesses to start using bots across all functions, including HR, legal, sales, and more. In fact, almost all (95%) of the organisations surveyed plan to expand their automated functions, and with them their bot usage, over the next two years.
How seriously are businesses taking compliance-driven, ethical bot use?
Despite the bad press that bots often receive, the vast majority of businesses are committed to ensuring that bots are used responsibly, according to Bright Data’s research. Overall, 48% of US survey respondents reported having guidelines in place to moderate all uses of bots, and another 48% had guidelines regulating some uses. In the UK, these figures were 57% and 40%, respectively.
This suggests that most, although by no means all, businesses are taking responsible bot use seriously. Also heartening is that almost three-quarters (74%) of survey respondents said that bots were the ultimate responsibility of a C-level decision maker in their organisation (14% CEO, 26% CIO, and 40% CTO). This indicates that businesses are taking a strategic approach to bot operations. When the buck stops with a senior decision maker rather than with a more junior employee, we can generally expect to see increased transparency, a higher level of accountability, and more robust processes.
But is self-regulation or in-house compliance enough?
As with any other new technology (social media and blockchain being recent examples), the speed of AI development has so far outpaced global regulators. Currently, no country has a coherent strategic approach to governing and regulating AI, although the EU published a draft proposal for its Artificial Intelligence Act earlier this year.
Looking specifically at enterprise bot use, there’s little public discourse around regulation. So, what do IT, technology, and financial services executives really think about this contentious topic? You might be surprised to learn that our research revealed significant appetite for more regulation. Bright Data found that an average of 40% of survey respondents actively wanted to see increased external regulation of bots (45% in the US and 33% in the UK). Meanwhile, just 7% didn’t want to see any external regulation of bots.
As a leading provider of web data collection platforms, Bright Data is committed to promoting responsible bot use – and we think regulation should play a core part in this. To manage the explosion in enterprise bot use, it’s crucial for governments to work alongside businesses to create an extensive compliance framework that all bot-using organisations in their jurisdiction should adhere to. Given the speed at which AI is developing, this certainly won’t be easy, as the guidelines would need to be updated on a frequent basis. However, we believe that such regulation is necessary to ensure the long-term success of this technology.
When it comes to ethics, is it better for organisations to outsource their bot operations or keep them in-house?
Bright Data’s research found that 38% of US and 19% of UK businesses outsourced the majority of their bot operations. Meanwhile, almost three-quarters (74%) of UK respondents, and around half (53%) of those based in the US, outsourced at least some of theirs. But what are the ethical and compliance implications of procuring an external partner to run your bot operations?
On the one hand, an external specialist is likely to have robust ethical policies in place, built on years of experience of what to do (and what not to do!). Plus, it’s difficult for businesses to build an internal team that can compete with an external partner’s resources and expertise. Given the speed at which this area moves, keeping up with the latest ethical best practices and any regulatory or compliance demands can be overwhelming.
On the other hand, businesses need to be sure they’re using a reputable external provider to manage their bot operations. Interestingly, only around half (54%) of Bright Data’s survey respondents said they were ‘totally aware’ of their organisation’s third-party bot provider – a concerning figure. It’s imperative that businesses demand transparency from their bot operations provider and conduct robust due diligence before committing to one.
Overall, there’s no question that the positive value bots deliver to businesses outweighs the negatives. But this doesn’t mean we should become complacent when it comes to bot misuse. Mandating clear, highly compliant standards through regulation is the best way to ensure that bots are used responsibly.