09 April 2026

The Hidden Risks of Buying Insurance from AI

Microsoft Copilot (photo via Unsplash)

We are currently in the middle of the Artificial Intelligence revolution.

Companies around the world are quickly inserting Artificial Intelligence tools and systems into all of their offerings. From children’s toys to fitness wearables, and even insurance, AI is quickly becoming ubiquitous in our everyday lives.

One of the biggest reasons for this is the promise of ease offered by AI tools: answer a few prompts, accept whatever recommendation is offered as the best one, hit the pay button, and move on with your day.

And while insurance is often marketed as something that can be purchased in seconds, and although adding AI to the purchasing or advice process can seem to simplify the entire landscape for both professionals and customers, it is important to understand that this does not come without serious risks. AI can be a useful tool in some instances. It can summarize policy wordings, answer routine questions, and even (to an extent) speed up policy administration and management.

But using AI as a substitute for regulated insurance activities is a very different proposition.

In many jurisdictions around the world, including Hong Kong, insurance is a regulated activity subject to specific (and strict) rules and requirements. Brokers in the HKSAR, for example, are bound by a code of conduct and overseen by the Insurance Authority, and must satisfy stringent reporting and financial requirements before receiving a license to operate as insurance advisors.

Artificial Intelligence, at present, is not licensed by regulators.

These are pieces of software: tools which are only as good as their programming, or the information being fed into the system. In Hong Kong, this distinction matters because regulatory responsibility falls to the licensed individual or legal person deploying or using a tool, not to the tool itself.

A picture of a ChatGPT screen

AI Is Not An Insurance Shortcut

Whether you are buying Business Insurance or Health Insurance, when you receive insurance advice from an underwriter or intermediary it is important to understand that this is not simply a transfer of policy information. 

Giving “advice” requires fact-finding, analyzing data, interpreting life events and future goals, exercising judgement, and challenging assumptions. A good advisor doesn’t simply tell you what a policy says; they are also able to explain how the policy will impact you directly, on a personal level, and why that matters to you. An insurance advisor should be able to directly relate the product to your life: asking what risks you are personally concerned about, figuring out which exclusions will matter down the line, and whether the policy is actually providing the protection you need.

A core truth about insurance is that the “right” policy is often entirely dependent on the details.

At this point in time, AI is largely designed to generate plausible-sounding outputs, based on readily accessible information, across a wide range of scenarios. Plausibility is not the same thing as suitability, or even accuracy. Relying on an AI for your insurance advice can be a mistake, and insurance buyers using AI for their coverage needs may discover (too late) that a plausible-sounding answer can still be unspecific, incomplete, or even just wrong.

a pair of glasses in front of a computer screen displaying code in windows

Insurance Regulations are for Humans

Insurance advisors and professionals in Hong Kong (and elsewhere around the world) are regulated by professional and governmental bodies. They have passed exams to prove their competence, they attend classes every year to ensure that they are up to date on current standards and practices, and they answer to licensing boards should any mistakes be made.

Artificial Intelligence tools have Terms of Use.

This difference is not meaningless, and when a mistake is made it has real consequences for how the situation is handled.

In Hong Kong, the Insurance Authority has oversight of all insurance activities as the primary regulator. Under its rules and local law, anyone “carrying on” insurance business or giving insurance advice must be properly licensed, and licensed intermediaries must comply with the statutory requirements of the Insurance Ordinance. These requirements and conduct standards include honesty, integrity, acting in the customer’s best interests (for brokers), exercising due care and diligence, and being competent to advise. When an insurance company or intermediary makes a mistake, complaints can be lodged with the Insurance Authority for investigation, which can result in penalties, including license revocation.

A chatbot, recommendation engine, or generative AI assistant is not, itself, a licensed broker, agent, or any other form of accountable insurance professional. However, in Hong Kong the Insurance Authority has explicitly discussed their usage in the local insurance ecosystem: where a licensed insurer or intermediary deploys a chatbot, the responsibility for compliance remains with the licensed entity using the system, not with the chatbot/AI itself. This means that AI systems found on the websites and platforms of insurance companies (of all types) fall under the regulations and code of conduct applicable to the person or company holding the license. Should such a tool make a mistake, it is the company operating the tool that is responsible for the error.

This does, unfortunately, mean that there is a significant gap for Hong Kong consumers and policyholders who use generic AI systems outside of the insurance ecosystem for assistance with their insurance queries, or to seek advice. These platforms are not licensed, and fall under the auspices of neither the Hong Kong Insurance Authority nor insurance regulators in many other jurisdictions around the world.

A computer rendering of an image that looks like a brain

Mistakes and Remedies with AI Insurance Advice

If an AI like ChatGPT, Claude, Gemini, or Copilot makes a mistake (recommends a product that is completely unsuitable, omits an exclusion, oversimplifies a claim scenario, or misunderstands a medical disclosure), there is likely no recourse when dealing with the outcome. The AI is not an insurance professional, nor is it operated by an insurance company, and it consequently has no meaningful professional duty of care towards its users.

In a situation involving a mistake made by a generic AI, where there is no human license to review, no regulator to impose sanctions, and no professional conduct requirements to enforce, there is far less accountability when something goes wrong. This leaves consumers extremely exposed.

When a human insurance professional makes a mistake or gives unsuitable advice there are myriad avenues for addressing the problem. There are established procedures with regards to conduct, suitability, disclosures, and complaint handling. The Hong Kong Insurance Authority, along with regulators providing similar oversight in a number of jurisdictions around the world, makes it clear that complaints about insurance professionals can be raised formally, and with actual legal consequences should wrongdoing be discovered.

In contrast, the users of an AI platform often get no more recourse or information than a simple service disclaimer. The output of a generative AI or LLM platform may be “for informational purposes only,” or framed as “non-binding,” but that isn’t going to prevent an unaware user from taking the information at face value, including any hallucinations conjured during the conversation. Even when an AI presents information and “advice” in a confident manner, the legal foundation of any recommendation is often very thin; the platform can sound definitive, the ultimate authority, while simultaneously disclaiming and disavowing any responsibility on a fine-print page that no casual user will ever actually visit.

This is, actually, a very dangerous situation.

This is especially true in insurance, because errors often compound. A mistaken assumption when purchasing a policy can lead to devastating consequences when a claim isn’t paid because the risk was out of scope. By the time the problem is discovered, the policyholder may have paid years of premiums for something that was never suitable in the first place, all on the basis that the tool sounded confident.

A man with an AI shirt in a group

AI Indifference is a Structural Insurance Problem

A simple truth is that AI doesn’t care what advice it gives. The goal of the tool is to satisfy you as the user.

This is not a moral reflection or an ethical statement; it is simply a design statement. When dealing with an AI or LLM tool there is no professional conscience: it is a machine running code. An AI is not going to lose sleep over a mistake it may have made; it generates responses based on probability, patterns, prompts, and parameters. In fact, an AI is doing no more than offering a “best guess” at the solution to any problem it is presented with.

The tool can be tuned, it can be monitored, it can even be restrained, but it does not possess any “duty” in the way a regulated insurance professional does. This is critical because insurance advice is not simply about “accuracy”; it also involves “responsibility.”

In Hong Kong, and elsewhere around the world, the conduct regime for licensed insurance intermediaries and insurance professionals is built around principles of honesty, integrity, fair treatment, competence, and (in respect of brokers) acting in the best interests of clients and potential clients. These are behavioral expectations directed at regulated persons, expectations that an AI simply cannot internalize; they are, by their very nature, human attributes. AI tools can mimic empathy, understanding, and language, but mimicry is not accountability or responsibility.

Providing a carefully worded recommendation based on language patterns and public-record probability is one thing, but it is important to understand that wording is not the same as judgement. An AI does a good job of passing itself off as human, with polished language and patterns, but the danger to insurance consumers is that the polish can make the tool feel far more reliable and trustworthy than it actually is.

Two people sharing a high five over paperwork and a computer

Privacy is not the same as trust

Trust is a critical component of purchasing insurance, as you will often have to disclose sensitive information to an underwriter. Medical history, travel plans, salary details, assets and liabilities, family information and history, corporate exposures, and even prior claims may all need to be disclosed, depending on the product you are considering.

Sharing this information with a generic AI or LLM tool is not the same as providing it to a regulated insurance advisor operating in their professional capacity.

To this end, the Hong Kong Privacy Commissioner for Personal Data (PCPD) has warned consumers to read the privacy policy and terms of use of AI and chatbot tools carefully. The PCPD has also encouraged individuals to opt out of chat history sharing where possible, and to avoid disclosing personally sensitive information on these systems. The PCPD has further highlighted that any information provided to an AI platform may, in turn, be used as training data, creating a leakage risk.

In Hong Kong, the Personal Data (Privacy) Ordinance places obligations on data users in relation to accuracy, retention, use limitation, security, transparency, and access. The ordinance specifically notes that contractual or other means should be used to ensure processors comply with retention and security requirements, which is also why the phrase “AI is not protected by confidentiality or contract” lands so heavily in the insurance context.

It should be noted that some AIs do operate under contractual terms. However, this is not the same thing as a bespoke advisory engagement, with duties shaped specifically around your insurance needs, your confidential disclosures, and your expectations of professionalism. This fact is not well understood by many casual AI users, who treat prompts and engagement casually, without realizing that they are feeding material facts, commercially sensitive information, or confidential files into an ecosystem where the governing rules are made (and enforced) by the platforms themselves, rather than by regulators or tailored professional mandates.

For insurance buyers this is a crucial concern. If you would hesitate to post your underwriting details and policy information in a South China Morning Post newspaper ad, then you should hesitate before providing them, without limitation, to a public AI platform.

A worker at a computer operating a keyboard and mouse

Where does AI fit in Insurance?

This is not to say that AI has no place in insurance, or that AI should not become a part of the insurance landscape. It does, and it will.

AI, with proper oversight, can assist with triage, document handling, internal first-draft comparisons, and administrative burdens, and can help to simplify some customer interactions. The Insurance Authority in Hong Kong has recognized that chatbots and AI are part of the evolving insurance market and the sector’s long-term development. However, the Insurance Authority has also emphasized the need for adequate disclosures, monitoring, risk mitigation, cyber security, and corporate governance and oversight when these tools are deployed.

The dividing line is this: AI can support insurance advice, but it should not be mistaken for a substitute for an accountable professional. This distinction, in Hong Kong, protects consumers. We are able to recognize that the technology is useful, but risk transfer is still a high-stakes process, often involving decisions with legal and long-lasting ramifications. The smart model is not replacing human insurance professionals with AI, or choosing an AI-first distribution channel. The smart model is using the best available technology, carefully, within a framework of human expertise, experience, and regulatory enforcement.

Before relying on an AI tool for insurance, ask a few important questions:

  • Is the person or firm behind the recommendation licensed to carry on regulated activity?
  • If something goes wrong, who is accountable?
  • Where can a complaint be made?
  • What happens to the information you type into the system?
  • Is the advice tailored to your specific circumstances, or merely generated from a generic prompt?
  • Has anyone with real market experience challenged the assumptions in the recommendation?

These questions can, very quickly, illustrate why buying insurance with AI can be a risky proposition. The tools can generate language, but they cannot replace the regulated duty that a human professional is required to hold. An AI tool can offer answers, but it won’t understand why the question was important; it can process information, but it very rarely offers real trust.

When your subject is, very literally, risk, these differences matter.

For buyers who want insurance to work when it matters, the safer path is rarely the most automated one. It is the one that combines careful analysis, professional accountability, privacy awareness, and human judgement before the policy is ever put in force.

For more information about insurance in Hong Kong from an expert human insurance broker, or to learn more about the way in which CCW Global offers advice, please Contact Us Today.

Ask CCW – where your insurance is always Swift, Simple, and Sorted.

About Author

Michael Lamb is an insurance industry professional with many years of experience within the Hong Kong insurance market. Focusing on APAC coverage issues, Michael is able to provide extensive analysis and insight into a range of pressing topics. Previously, Michael provided insurance broker Globalsurance.com with their most highly valued articles and was a key influence in the development of all the content on Pacificprime.com. Michael has a passion for insurance matched by few others in the region.
