Generative AI and Business Risks
Generative Artificial Intelligence tools and LLMs are seemingly everywhere at present. From Meta launching an AI chatbot across Facebook, Instagram, and WhatsApp, to the influence Artificial Intelligence is having in boardrooms around the world, AI is very much the topic of conversation for businesses in the modern age.
Even while the bubble around AI may be bursting, and serious concerns are starting to be raised around the energy requirements needed to drive Artificial Intelligence platforms, major tech companies are continuing to tout the promises of AI. This technology is not going away, but the intrusion of AI into everyday life – as well as its implementation by businesses of all types worldwide – is creating conversations about safeguards and regulations for AI systems.
These conversations are warranted as AI takes up more space in our lives – from phones that come packaged with AI assistants, to Artificial Intelligence chatbots being used for legal advice (with disastrous outcomes), there is a vast amount of uncertainty when it comes to this technology.
What is Artificial Intelligence?
Artificial Intelligence is formally defined as: the theory and development of computer systems able to perform tasks that normally require human intelligence.
By getting computers to perform human tasks, or by getting computers to pass the Turing test, AI evangelists are hoping to usher in a new era of human creativity and productivity. The current iteration is largely based on generative Large Language Models (LLMs), which use human content to “learn” and improve the output created by the AI program.
Artificial Intelligence is not a new idea. Since the 1970s, researchers have been exploring the possibility of getting computers to think like people, with varying cycles of success; to such an extent that a downturn in AI research is called an “AI Winter.” The world is currently in an “AI Summer,” with most of the innovation and production coming from the same venture capital organizations that were previously funding the cryptocurrency boom.
This is in stark contrast to previous AI development cycles, where a majority of funding came from government sources with very specific objectives. Because the private sector is spurring AI innovation, the implementation of these tools across all industries is more accessible than under previous cycles, which has in turn fed back into the almost universal uptake of artificial intelligence systems across business and technology worldwide.
Artificial Intelligence comes with Risk
Artificial Intelligence LLMs have a fundamental and obvious problem – they use human-generated content to provide a “next best answer.” In many cases, this “next best answer,” or the output generated by the AI platform, will be incorrect.
The technical term for incorrect information provided by a generative AI platform is a “hallucination,” and it is a big problem. Because the machine relies on information already in existence, and is not actually “creating” anything new, the output provided by Artificial Intelligence can be tainted by the information it receives and then collates. From the NYC chatbot telling users to break the law, to ChatGPT recommending that a vinyl chloride fire be extinguished with a water fog (vinyl chloride is water reactive), to the proliferation of AI-generated foraging books that mycologists warn could be deadly, the short time this technology has been in use has already produced concerning indications of the many ways AI can cause harm.
AI has the potential to revolutionize how humanity lives, works, and even plays, but it also carries a significant amount of risk that, as yet, is not well understood.
For example: when AI makes a mistake that impacts an end user or member of the general public, who is responsible for that mistake? Is it the programmer that created the tool, the company that implemented it, the end user for taking the information at face value, or even the AI tool itself for getting it wrong?
At present, because the legal framework surrounding Artificial Intelligence has not yet been developed, these questions are unanswered. But there will come a point when a consumer uses an AI-generated mushroom book that has misidentified a destroying angel as a button mushroom, with deadly consequences. At that point, these questions will have to be addressed, and an entity involved in the production or handling of the AI assets that caused harm will have to manage the consequences.
Artificial Intelligence and the Future of Insurance
There is no way to put the genie that is Artificial Intelligence back in the bottle, but it is critical that businesses approach the conversation about AI tools with a risk-first approach. These systems are inherently risky, especially as they represent emerging and largely unproven technology that is making obvious errors. Unfortunately, there is no quick or easy solution as to how an organization can efficiently manage those risks.
AI is already having an impact on insurance as providers and insurance companies across the globe look to offer value-add Artificial Intelligence services to their customers. One possible insurance future could be entirely AI driven.
Despite the focus on AI within insurance, relatively little attention has been paid to the impact of insurance on AI. What structure exists in the current commercial insurance market to create space for robust protection of Artificial Intelligence tools and platforms? And how can existing professional insurance products address the concerns and exposures created by generative AI?
Cyber Insurance and Artificial Intelligence Protection
Cyber Insurance is probably the obvious first place to look when considering how to best insulate your organization from the risks presented by Artificial Intelligence. After all, it’s in the name – cyber insurance.
Cyber Insurance can offer some immediate support against AI created cyber threats. Attackers and hackers are increasingly relying on AI tools to access secure networks and data.
While AI can be used for threat detection and analysis, cyber-crime events using AI have increased dramatically in the last 12 months, and show no signs of slowing down. Cyber insurance, as a consequence, is no longer purely “reactive” – by affording organizations extensive crisis management solutions, security oversight, and claims management, Cyber Insurance is able to proactively assist policyholders with managing their digital risks. From the risks created by artificial intelligence systems to the more mundane problems of human error, cyber insurance is at the forefront of addressing the emerging challenges presented by a dynamic, technologically connected society.
With most Cyber Insurance products being tailored to the exact needs of the organization obtaining coverage, conversations about risk management in the 21st century have to address the need for digital insurance protection, or companies run the risk of being hamstrung by the evolving capabilities of bad actors.
Business Interruption Insurance and Artificial Intelligence
With the Internet of Things, businesses are at risk of their tools, or even their storefront, being taken down by an Artificial Intelligence-driven cyber-attack. An e-commerce seller who is unable to keep their webstore online because of a DDoS attack is going to suffer significant financial harm.
Business Interruption Insurance is a type of coverage that financially protects a company if it is not able to operate in a typical manner because of a covered loss. Not all business interruption insurance plans will cover cyber events, or incidents related to technology, but there are more products coming onto the market that are starting to address this concern.
There may be some overlap between Business Interruption Insurance and Cyber Insurance products, so it is important that you are aware of your options with regard to this emerging area of cover, and thoroughly investigate the products you may be considering.
Technology Errors and Omissions Insurance for Artificial Intelligence
Errors and Omissions Insurance, also known as Professional Indemnity Insurance or Professional Liability Insurance, is going to play a far greater role in business risk management as AI tools further interact with the general public. In fact, there is possibly no greater question at present than whether AI should be covered under a Professional Indemnity Insurance plan.
In some cases, existing Professional Indemnity products may cover misinformation provided by an Artificial Intelligence chatbot, or may cover copyright claims caused by an AI tool under a media liability rider. However, there is no guarantee that this is the case under policy language that pre-dates the development and widespread implementation of artificial intelligence tools.
In a perfect world, all Professional Indemnity Insurance products in the market would cover advice risks related to artificial intelligence misinformation. In reality, this may not be the case for your extant errors and omissions coverage; if your company is using artificial intelligence tools, you may be exposed even with a professional indemnity policy already in place, if that policy does not specifically cover the risks associated with AI.
It is important to address this with your insurer, and if you are utilizing AI, to ensure you are not doing so in a way that could void your policy. For example, having a chatbot “insurance broker” in Hong Kong may run contrary to local laws, as the chatbot is not authorized to provide insurance advice. An argument could then be made that advice provided by the chatbot “broker” falls outside the coverage of an insurance broker professional indemnity policy.
Insurance, Business Risks, and Artificial Intelligence
AI is impacting businesses the world over, as well as offering myriad opportunities within the Insurance Industry itself. But the questions posed by Artificial Intelligence are not well understood, are dynamic, and are poised to have a fundamental impact on how we both do business and manage risk.
There are, currently, no targeted solutions to the questions posed by artificial intelligence. The questions themselves will evolve over time, as will the way in which the global business community handles digital risks created by AI tools. At this time, with regards to insurance, there are no bespoke products targeting the wide range of issues presented by artificial intelligence, and the best bet organizations may have for addressing these problems is with a patchwork of existing, and overlapping policies.
This isn’t helped by the almost ubiquitous introduction of tools that are prone to errors and mishaps into environments where there is the possibility of very real harm. Consequently, we are moving towards a future where novel forms of insurance will need to be created to address the many concerns of artificial intelligence, but we are not there yet. Ensuring you are as prepared as possible with existing products will therefore be of paramount importance.
CCW Global can help with this. We work with a wide range of Professional Insurance solutions, and operate on a no-cost, no-obligation consultation process. With the ability to quote from an extensive number of leading international and specialty insurance companies, our independent brokers can assist you in closing the gaps on the risks presented by Artificial Intelligence.
Contact Us to request a conversation about Artificial Intelligence Insurance today.
Ask CCW – We’re Simplifying Your Insurance