17 October 2025

Vibe Coding, Vibe Working, and the Big Professional Liability Problem


Vibe Coding is apparently very in right now.

What is vibe coding? Well, if you haven’t been paying attention, Vibe Coding is the name being given to A.I.-assisted software development. Under Vibe Coding, a developer focuses on the overall “vibe” or goal of the project, leaving the Artificial Intelligence tool to produce the actual code.

Vibe coding is big business.

Salesforce is launching an enterprise Vibe Coding product, Grafana Labs has over US$400 million in annual “vibe coding” revenue, and Supabase has raised over US$100 million on a US$5 billion valuation. All signs point to this emerging software trend being here to stay, especially as the impact and influence of Artificial Intelligence tools in business continues to expand.

But coding is not the only discipline being touched, and changed, by A.I.

In recent weeks Microsoft has announced “vibe working”, bringing AI “vibes” to tasks including accounting, professional presentations, and even office administration.

While fans of Artificial Intelligence and AI companies proclaim the growth of these tools, and their continued intrusion into everyday life, as positive benefits, and companies rush to offer ever more AI-assisted services to consumers, there is a serious concern that the Artificial Intelligence ecosystem is moving too fast. It is in danger of breaking too many things, without any of the guardrails or protections that have historically been offered through traditional insurance coverage.

A picture of a ChatGPT screen

AI, Vibe Coding, and the New Frontier of Professional Risk

Over the last 24 months Artificial Intelligence has moved from a niche tool for technology enthusiasts to a fully realized part of life with daily influence over myriad systems and processes. A.I. tools and products are increasingly embedded in decision support systems, customer relationship management platforms, software solutions, accounting and auditing tools, and are even replacing humans as the primary way in which customers interact with businesses.

This is, potentially, a massive problem.

As AI tools step up to “assist” humans, the boundary between human and machine work begins to erode. While business productivity can improve with the assistance of Artificial Intelligence tools, new and novel liabilities have emerged in tandem with the development of these tools, largely because A.I. introduces new avenues for failure, as well as shifts in transparency and accountability. From a simple risk perspective, A.I. changes both the frequency and complexity of the risks faced by an organization utilizing these tools. Errors may become more frequent as companies grow more reliant on A.I. to automate at scale, and incidents will become more complicated to untangle when the proximate cause is investigated: was the fault in the A.I. model, the training data, human oversight (or the lack thereof), or even the prompt input?

If, and when, a client alleges that an AI-assisted tool has caused them a loss, the resolution of that claim becomes much harder than a similar case with no Artificial Intelligence involvement. Artificial Intelligence tools are “black boxes” that generate outputs that can be both unreliable and inscrutable; consequently, the defensibility of negligence claims that involve AI, under current systems of risk management, becomes much weaker.

A man holding a pair of glasses that say "Code" "Debug" on either lens.

The Limits of Traditional Professional Liability Insurance

Traditionally, Professional Liability Insurance (otherwise known as Professional Indemnity Insurance, or Errors and Omissions Insurance) has offered coverage to service providers against their risk of making a mistake in their work, and consequently causing a customer to experience a loss. In many jurisdictions, including Hong Kong, this Professional Liability coverage provides for the costs associated with legal defense costs, settlements, and even damages in the event that an error, or mistake was made in a company’s professional offerings.

However, Professional Liability Insurance emerged as a popular corporate risk management solution in the 1970s, long before the emergence of Artificial Intelligence tools. Even where a Professional Indemnity Insurance product is a relative newcomer to the insurance market, many of these policies were written without considering A.I., or its impact on a business. This means that many of the professional indemnity insurance products available today (both in Hong Kong and on the global market) are not in a position to cover mistakes made by an Artificial Intelligence.

In fact, many current professional liability insurance solutions only cover advice, reports, or deliverables that are entirely human produced. This means that errors, or claims of negligence stemming from “AI software errors,” “algorithmic malfunction,” or “modeling inconsistency” are likely to be excluded from most coverage.

Further to this, many Professional Indemnity Insurance solutions operate on a Claims Made basis and rely on precise definitions of when “work” occurred. With A.I. and Vibe Coding, where an AI may produce “work” well before a human touches the product, the simple delineation of ‘work’ versus ‘review’ becomes contentious. Without carefully structuring the coverage and tailoring it to the business activities being undertaken, a firm could be left unprotected against liability arising from AI-assisted projects.

The final challenge, and limit, with regards to traditional Professional Liability Insurance and emerging AI risks is the overlap with Cyber Risks. If an AI model accessed via an API misclassifies client data, leading to a privacy violation or regulatory fine, is this a professional error, or a cyber event? Some professional liability policies exclude “cyber acts,” expecting clients to purchase separate cyber insurance. Without a clear understanding of exactly what coverage is offered by each of a business’ insurance protections, claims may fall through the cracks, leaving an organization exposed, and alone.

a programmer working at his laptop computer

“Vibe Working” and the Professional Liability Risks of the Future

Vibe Coding, the practice of steering AI to generate software based on intent, style, or high-level goals rather than physically writing any code, is one possible future for the delivery of professional services. In that paradigm, the “deliverable” is less hand-coded logic and more specification, supervision, validation, or refinement of AI outputs. The human developer is elevated to curator, prompt engineer, validator, and integrator of AI reasoning.

This transformation is not limited to software; it is happening in every industry that uses technology. In accounting, for example, “Vibe Accounting” AI tools may propose journal entries, tax planning strategies, financial reports, and forecasts that the licensed accounting professional then reviews and edits. The human expert remains responsible for correctness, legality, and accountability, even though AI did much of the draft work.

But this confusion of roles, and the creation of a hybrid workflow, intensifies professional liability exposures.

A customer who claims that a company has caused them to experience a loss may allege that an AI tool made an error, but it is the professional (or their company) who is going to have to deal with the fallout (and defend any subsequent legal action). When looking at professional liability risks in relation to AI, the question then becomes: was the professional negligent in specifying, validating, or deploying the AI output? Did the professional exercise due care in reviewing or auditing the AI suggestion?

Can the insurer draw a clear line between pure AI output and human-assisted work?

The legal and insurance industries are only beginning to grapple with these nuances. Underwriting models must evolve to evaluate not just a human’s track record, but the governance, versioning, audit logs, oversight procedures, human fallback controls, model provenance, and “explainability” architectures for Artificial Intelligence being used in professional, customer-facing work. As such, firms using vibe coding right now must show strong AI governance and risk controls to secure favorable insurance terms on that risk in the future.

Multi-colored pencils

Adapting Professional Insurance for the A.I. Age

While there are a limited number of Professional Liability Insurance solutions that address the needs (and concerns) of A.I. and “Vibe Coding” in the workplace, it is clear that, in order to remain viable in a world that is investing heavily in Artificial Intelligence, these products must evolve.

The first place to start is by updating Professional Indemnity Insurance policy wording to address the gaps between traditional coverages and emerging AI risks. Clear definitions and exclusions should be provided for things like “algorithmic error,” “model negligence,” and “prompt engineering failure.” Moving forward, clarity is going to be essential in a landscape where the rules (and laws) are going to be created as the issues emerge. It is imperative that insurers, and businesses, know where they stand, with professional indemnity insurance products that draw clear lines between the AI risks that are covered, and those that are specifically excluded.

Secondly, the market is going to need tailored options specifically designed for AI and vibe workplace risks. One way to achieve this is by offering modular policy riders, similar to those available on Life Insurance products. AI-specific exposures can then be added on an “as-needed” basis to the standard policy offering for those organizations that do, in fact, need them. By allowing for granular control over the coverage of AI and vibe workforce risks, underwriters can impose thresholds, or controls, on AI-forward businesses, requiring things like audit logging, monitoring, and human supervision in order for the insurance to be valid.

By getting in front of the potential gaps in coverage, insurers have a massive part to play in the development of the artificial intelligence workplace, both in defining how A.I. is able to contribute to professional “work” and in identifying where the major failure points are. The legal system around Artificial Intelligence is nascent, but rapidly growing. Regulators around the world are looking at A.I. risks, and starting to craft rules, liability frameworks, and standards of care for organizations that are putting the product of vibe workplace projects in front of consumers. Consequently, in some jurisdictions professional standards may evolve to include AI governance best practices, making an absence of oversight a breach of duty.

That shift heightens the relevance of insurance as a risk buffer. Over time, insurers may demand compliance with emerging standards as a condition of coverage.

A man in a blue AI shirt talking to another group of men at a technical conference.

Managing Business Risks in a Vibe Coded Ecosystem

As AI, vibe coding, and hybrid human-machine professional services proliferate around the global business ecosystem, the insurance industry must catch up. Businesses that integrate AI deeply into their offerings must realize that their liability is evolving, and historical risk models no longer suffice. Professional liability insurance, once focused on human advice and deliverables, must expand to cover the nuances of algorithmic error, prompt failure, data leakage, model bias, and oversight lapses.

For firms and professionals exploring vibe coding or AI augmentation, the simplest starting point is risk awareness.

Map your AI processes, classify potential failure modes, and seek insurance coverage early, even if that coverage is limited. As the AI industry evolves, companies with clean architectures, strong controls, and responsive insurance policies will thrive, while others will struggle under the weight of unforeseen liability.

As AI continues to reshape the professional services landscape, insurance is not a backstop; it must become an integral component of your technology strategy.

Ask CCW – where your business insurance is always swift, simple, and sorted.

About Author

Michael Lamb is an insurance industry professional with many years of experience within the Hong Kong insurance market. Focusing on APAC coverage issues, Michael is able to provide extensive analysis and insight on a range of pressing topics. Previously, Michael provided insurance broker Globalsurance.com with their most highly valued articles and was a key influence in the development of all the content on Pacificprime.com. Michael has a passion for insurance matched by few others in the region.
