You wouldn’t install a network without a firewall, you wouldn’t let staff work remotely without a VPN, and you really shouldn’t adopt AI tools without thinking through the risks and consequences. See what the latest research says your business should consider before bringing in AI services to power the company.
Chatbots, analytics tools, security apps, smart services and virtual assistants all use AI to some degree. Many businesses adopt them, or build their own tools around them, without really considering the risks. We have lived with 99% uptime SLAs for years now, so what’s wrong with a 99% accurate AI when it comes to translation, offering advice or processing forms? Quite a lot, as it turns out.
Deloitte has been putting those thorny questions to enterprise executives in its third annual AI adoption survey. The key messages are that:
- Adopters continue to have confidence in AI technologies’ ability to drive value and advantage.
- Early-mover advantage may fade soon.
- Virtually all adopters are using AI to improve efficiency; mature adopters are also harnessing the technologies to boost differentiation.
- AI adopters tend to buy more than they build, and they see having the best AI technology as key to competitive advantage.
- Adopters recognize AI’s risks, but a “preparedness gap” spans strategic, operational, and ethical risks.
Those risks haven’t stopped the rapid growth in adoption, with a recent Gartner survey revealing that organizations expect to double the number of AI projects over the next year. That survey shows customer experience and task automation are the leading use cases for AI. Gartner says, “While technologies such as chatbots or virtual personal assistants can be used to serve external clients, most organizations (56%) today use AI internally to support decision making and give recommendations to employees. It is less about replacing human workers and more about augmenting and enabling them to make better decisions faster.”
Mitigating the AI risks
The Gartner report sees risk in several areas:
- A lack of skills, which could lead to non-expert or best-guess efforts that fail to deliver.
- Understanding AI use cases, with businesses not really sure what AI can do for them.
- Concerns with data scope or quality, ensuring their own data is fit for purpose.
“Finding the right staff skills is a major concern whenever advanced technologies are involved,” said Gartner’s Mr. Hare. “Skill gaps can be addressed using service providers, partnering with universities, and establishing training programs for existing employees. However, establishing a solid data management foundation is not something that you can improvise. Reliable data quality is critical for delivering accurate insights, building trust and reducing bias. Data readiness must be a top concern for all AI projects.”
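To make “data readiness” concrete, here is a minimal sketch of the kind of profiling check a team could run before an AI project goes ahead. It assumes tabular data loaded into a pandas DataFrame with a hypothetical “label” column, and the thresholds are illustrative examples, not figures from the Gartner or Deloitte research.

```python
# Minimal data-readiness sketch (illustrative only): profile the data an AI tool
# will rely on before training or buying anything. Assumes a tabular dataset in a
# pandas DataFrame with a hypothetical "label" column; thresholds are arbitrary.
import pandas as pd

def data_readiness_report(df: pd.DataFrame, label: str = "label") -> dict:
    """Return simple quality signals: missing values, duplicates, class balance."""
    report = {
        # Share of missing values per column, worst offenders first
        "missing_by_column": df.isna().mean().sort_values(ascending=False).to_dict(),
        # Share of fully duplicated rows
        "duplicate_rows": float(df.duplicated().mean()),
    }
    if label in df.columns:
        # Class balance matters for bias and for any accuracy claims
        report["label_balance"] = df[label].value_counts(normalize=True).to_dict()
    return report

def flag_problems(report: dict, max_missing: float = 0.05, max_duplicates: float = 0.01) -> list:
    """Turn the report into human-readable warnings using example thresholds."""
    problems = []
    for column, share in report["missing_by_column"].items():
        if share > max_missing:
            problems.append(f"{column}: {share:.1%} missing (threshold {max_missing:.0%})")
    if report["duplicate_rows"] > max_duplicates:
        problems.append(f"duplicate rows: {report['duplicate_rows']:.1%}")
    return problems

# Example usage with a toy DataFrame
if __name__ == "__main__":
    df = pd.DataFrame({"age": [34, None, 29, 29], "label": ["hire", "hire", "reject", "reject"]})
    for warning in flag_problems(data_readiness_report(df)):
        print("WARNING:", warning)
```

A check like this won’t fix poor data, but it gives a business a quick, repeatable way to see whether its own data is fit for purpose before committing to an AI project.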
Whatever the use case, be it an AI tool that screens new job applications, a chatbot that performs first-level interviews, or a business analytics tool, any business needs to get used to AI and have the people in place to deliver strong results.
Deloitte’s report shows that only 26% of businesses consider themselves seasoned at AI, while 47% think they are skilled and 27% are just starting out on their road to AI, and are likely at greater risk.
Among those risks, Deloitte lists the following as high-profile ones.
Making your company a good fit for AI
From startups to enterprises, AI should be treated as another tool, but one with wider implications than the usual IT adoption. Its ability to change business thinking as it takes over data analysis, or to affect PR when chatbots deliver key messages, needs careful planning, with mitigation and response plans in place.
Cybersecurity is an across-the-board issue for most businesses, and AI efforts should be as well protected as a company database or cloud office tools and files.
Privacy, transparency and ethics issues need to be addressed with expert counsel and best practices, which should already be in place across the business, or adopted if they are not.
The impact of AI on business operations, and the risk of it making bad decisions, requires good training of the application and thorough testing alongside existing operations. When a chatbot or other AI goes live, close monitoring is required to ensure it delivers on productivity, accuracy and satisfaction metrics.
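As a rough illustration of that monitoring step, the sketch below aggregates per-interaction logs into accuracy, satisfaction and escalation figures and raises alerts when they miss example targets. The field names (resolved, rating, handoff_to_human) and the thresholds are hypothetical, not taken from any cited report; a real deployment would use whatever the chatbot platform actually logs.

```python
# Hedged sketch of post-launch monitoring for a chatbot or similar AI service.
# Field names and alert thresholds are hypothetical placeholders.
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class Interaction:
    resolved: bool            # did the bot answer correctly / close the request?
    rating: Optional[int]     # optional 1-5 user satisfaction score
    handoff_to_human: bool    # did the bot escalate to a person?

def monitor(interactions: list,
            min_accuracy: float = 0.90,
            min_satisfaction: float = 4.0,
            max_handoff: float = 0.25) -> list:
    """Compare simple accuracy, satisfaction and handoff metrics against targets."""
    alerts = []
    accuracy = mean(i.resolved for i in interactions)
    ratings = [i.rating for i in interactions if i.rating is not None]
    handoff_rate = mean(i.handoff_to_human for i in interactions)

    if accuracy < min_accuracy:
        alerts.append(f"accuracy {accuracy:.1%} below target {min_accuracy:.0%}")
    if ratings and mean(ratings) < min_satisfaction:
        alerts.append(f"satisfaction {mean(ratings):.2f} below target {min_satisfaction}")
    if handoff_rate > max_handoff:
        alerts.append(f"human handoff rate {handoff_rate:.1%} above {max_handoff:.0%}")
    return alerts

# Example: a small batch of logged interactions
if __name__ == "__main__":
    batch = [Interaction(True, 5, False), Interaction(False, 2, True), Interaction(True, None, False)]
    for alert in monitor(batch):
        print("ALERT:", alert)
```

The point is less the specific metrics than the discipline: agree on targets before go-live, review them continuously, and be ready to pull the AI back if it misses them.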
The issues around user or customer backlash and the risk of job losses should be addressed in advance, through education, by upskilling workers, or by highlighting that they can focus on more value-added tasks in their roles. And if there are job losses, they can be framed as part of a wider change in business practices rather than blamed on AI alone.
In many cases, AI should be rolled out gradually in different areas, using easy wins to show the value of the technology (HR, customer service and training are popular). With those efforts delivering results, a team can build up experience of both the benefits and the issues they found, then apply it to new tasks, while learning how other businesses have coped.
Working with a third-party provider can help overcome weaknesses within the business, and studying how large firms like Bank of America handle AI ethics can help guide your thinking.
When it comes to the “AI plan”, Deloitte recommends that any business that is late to AI should:
- Pursue creative approaches to AI: unlocking value beyond efficiency and becoming more creative with their AI applications, balancing evolution and transformation.
- Become a smarter consumer: with more AI vendors, platforms, and technologies available, becoming better at evaluating buying options.
- Actively address risks: not allowing the perceived risks of AI to derail efforts by becoming more conscientious about how AI gets used, and by building trust with customers and partners.
That type of thinking, combined with finding AI tools on the market that are proven in action, will make the road to adoption and success easier for any type of business.