Responsible AI: Future Rules For Bots & Artificial Intelligence

Technology has been one wild west after another. We had the OS wars, the browser wars, the console wars, and endless scuffles over every new wave of business IT. Coming up fast are the AI and bot wars, as businesses look to make the best use of their power while lawmakers seek to limit the unwanted impact they might create.

The AI and Bot Future Needs Rules, Fast

For anyone wondering just how serious the rise of artificial intelligence, voice and interactive assistants, and chatbots is, check out Google’s Responsible AI practices page. The company is putting huge amounts of resources into building the next generation of AI services for consumers and businesses. At the same time, it is trying to walk a legal and moral tightrope while juggling vested interests, academia, pressure from workers and advocates to do the right thing, and pressure from world governments of varying levels of authoritarianism.

At this point, your business might think that its plans for an office chatbot that takes bookings or handles customer service queries aren’t worth all the fuss. Admittedly, these bots, with natural language processing features and some limited AI, are just the starting point of the AI revolution, and few customers would be upset if such a bot couldn’t complete a task.

However, we’re already seeing bots that can deal with more complex queries, and it is here that the issues multiply and the shading of right and wrong becomes crucial. Accenture is another giant focusing on quality and “right” service. In its Tech Vision 2018: Citizen AI, it claims that by “raising AI for responsibility, fairness, and transparency, businesses can create a collaborative, powerful new member of the workforce.”

AI services are on a fast-growth track, approaching $90 billion by 2025, according to recent research. The researchers say AI allows businesses “to roll out hyper-personalized services by following an ‘AI first’ strategy. The rest of the market in the enterprise and government sectors is still catching up on adopting AI and has yet to fully understand its value, including the breadth and depth of use cases, the technology choices surrounding AI, and the implementation strategies.” There’s the risk that businesses and governments will deploy first without proper testing, or without even asking if they should.

Examples of Bot and AI Quality of Service

Consider these three relatively simple examples where bots need to be 100% on the right side of the law, the moral code, and the customer.

A banking chatbot agrees a loan with a customer. It displays the terms and conditions, and the customer presses accept. If the bot notes that the customer pressed the button instantly, not taking the typical three minutes to read the T&Cs, should it accept the interaction and proceed with the loan? Should it report the issue to the authorisations staff, or even to a higher banking authority if the bank routinely lets this happen?
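
As a rough illustration of how such a check might look, here is a minimal sketch. The three-minute threshold and the escalation path are assumptions for the example, not anything from a real banking platform.

```python
import time

# Assumed threshold: how long a typical customer spends reading the T&Cs.
MIN_READ_SECONDS = 180  # roughly three minutes


def handle_tandc_acceptance(shown_at: float, accepted_at: float) -> str:
    """Decide what the bot should do when the customer presses 'accept'.

    Returns 'proceed' or 'escalate'.
    """
    read_time = accepted_at - shown_at
    if read_time < MIN_READ_SECONDS:
        # The customer almost certainly did not read the terms; hand the
        # interaction to a human authorisations officer rather than
        # silently proceeding with the loan.
        return "escalate"
    return "proceed"


# Example: the customer accepts five seconds after the terms appear.
shown = time.time()
accepted = shown + 5
print(handle_tandc_acceptance(shown, accepted))  # -> "escalate"
```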

Then there are AIs that operate in a black-box manner, producing results for a client who may find it very hard to discover how the AI arrived at those results, or whether there are biases in the AI that skew the data. For example, a recruitment bot might lean toward male candidates over female ones.
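
One crude way to surface that kind of skew is to compare selection rates across groups before trusting the output. Below is a minimal sketch with made-up candidate records; the 0.8 “four-fifths” threshold is a common rule of thumb for flagging disparate impact, not a legal standard.

```python
from collections import defaultdict


def selection_rates(decisions):
    """decisions: list of (group, selected) tuples taken from the bot's output."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}


# Made-up example output from a hypothetical recruitment bot.
decisions = (
    [("male", True)] * 40 + [("male", False)] * 60
    + [("female", True)] * 20 + [("female", False)] * 80
)

rates = selection_rates(decisions)
worst, best = min(rates.values()), max(rates.values())
if worst / best < 0.8:
    # Selection rates differ enough to warrant human review.
    print("Possible bias detected:", rates)
```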

That’s not a million miles away from what happened with an actual Amazon bot, although the details are more complex than the tabloid headlines make out. Removing identifying features may be one solution, but other issues are sure to arise.

These examples also point to further challenges that bot vendors and end users will face. Deeply complex issues need to be explained to consumers and the press in a digestible manner. Any business that claims its bots are for “the greater good” or “wider benefit” may well find a rebellion on its hands when customers don’t agree with that, or any other, definition.

Finally, there will be bots that make moral or medical judgment calls. For now, these are all backstopped by a human professional, but soon we will be relying on AIs to make decisions about mental health care, such as whether a person needs to see a professional. Or truth bots might be used in the judicial system or in the election process to vet politicians. We can bet a backdoor will be in there somewhere, and that the inevitable blockchain security will be compromised.

Take a read of the 1990s tale “The Truth Machine: A Speculative Novel” by James L. Halperin to see just how vulnerable these innocent AI machines could be.

The Quest for Bot Truth

Making AI and bots accountable will require built-in features that should start appearing in bots soon. Developers are already building “explainable AI” systems that aren’t quite as complex as the full-fat models, but whose logic and reasoning are visible to end users, so they can understand why the results are the way they are.
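
One simple way to picture that is a decision function that returns the factors behind each answer alongside the answer itself. The sketch below is illustrative only: the feature weights, thresholds, and field names are invented, not taken from any real scoring system.

```python
def explainable_score(applicant: dict) -> dict:
    """Return a decision plus the human-readable reasons behind it.

    The weights and the 0.3 approval threshold are invented for illustration.
    """
    factors = {
        "income": 0.4 * min(applicant["income"] / 50_000, 1.0),
        "credit_history_years": 0.3 * min(applicant["credit_history_years"] / 10, 1.0),
        "existing_debt": -0.3 * min(applicant["existing_debt"] / 20_000, 1.0),
    }
    score = sum(factors.values())
    reasons = [f"{name}: contributed {value:+.2f}" for name, value in factors.items()]
    return {"approved": score > 0.3, "score": round(score, 2), "reasons": reasons}


# The caller (or the end user) can see exactly which factors drove the result.
print(explainable_score({"income": 42_000, "credit_history_years": 3, "existing_debt": 5_000}))
```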

Bots may also need to be equipped with kill switch-type features to turn them off and revert to a backup. That might be a regular scripted chatbot, a less complex system, or even a straight handover back to the old call center scripts, something that would limit the loss of jobs through AI as agents are given other roles.

These vital features would only be used in case of a looming or emerging crisis, such as the AI being hacked or developing a major fault. But having several backup options in place could prevent reputational damage before a problem got out of control.
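
A kill switch of that sort can be as simple as a wrapper that routes conversations to the AI only while a flag says it is healthy, and drops back to a scripted flow otherwise. This is a minimal sketch: the ai_reply and scripted_reply methods are stand-ins for whatever the real systems would be.

```python
class BotSupervisor:
    """Routes conversations to the AI bot while it is healthy, and falls back
    to a simple scripted flow once the kill switch is thrown."""

    def __init__(self) -> None:
        self.ai_enabled = True

    def kill_switch(self) -> None:
        # Flip this when the AI is hacked or develops a major fault.
        self.ai_enabled = False

    def reply(self, message: str) -> str:
        if self.ai_enabled:
            return self.ai_reply(message)
        return self.scripted_reply(message)

    def ai_reply(self, message: str) -> str:
        # Placeholder for the full AI/NLP model.
        return f"[AI] Here's a tailored answer to: {message}"

    def scripted_reply(self, message: str) -> str:
        # Placeholder for the old call-center script / simple menu bot.
        return "[Script] Please choose: 1) opening hours 2) speak to an agent"


bot = BotSupervisor()
print(bot.reply("Can I extend my loan?"))   # handled by the AI
bot.kill_switch()                            # crisis: revert to the backup
print(bot.reply("Can I extend my loan?"))   # handled by the script
```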

At the highest level, where AI systems fail or cause harm, and where the public or legal interest requires it, businesses or justice departments could hire or send in expert AI investigators. These high-level data scientists could examine the systems from a privileged position to see if they are providing valid answers, and make recommendations that the business would have to carry out.

Will these investigators be our first Blade Runners? As AI gets smarter, it is likely to happen sooner rather than later.

