The AI Consent Gap: What US Health System Lawsuits Tell Every Client-Facing Business

US health systems are being sued for using AI to record patient visits without consent. Every client-facing business that handles personal data should take note.

Picture this: a patient sits down with their doctor for a follow-up consultation. They discuss test results and the diagnoses behind the patient’s current symptoms. What the patient doesn’t know is that an AI tool is transcribing every word they say. Months later, they find out, not from their clinic, but from a class-action lawsuit.

That is what happened to patients at Sutter Health and MemorialCare in California: two major health systems now facing legal action for allegedly deploying AI medical scribe software without obtaining patient consent.

A near-identical lawsuit against San Diego’s Sharp HealthCare remains open. In each case, the AI tool in question is Abridge, a widely used clinical documentation platform. Abridge’s own website provides clinicians with a ready-made consent script. The patients claim they never heard it.

The AI worked as designed. The governance around its use, however, fell through the cracks.

This Is Not Just a Healthcare Problem

Healthcare attracts attention because the stakes — sensitive health information, strict regulatory obligations — are immediately obvious.

But the underlying failure here has nothing to do with stethoscopes or clinical records. It has everything to do with a common pattern playing out across every industry that collects personal information as a normal part of doing business.

Think about the interactions that organisations routinely have with their customers and clients:

  • A financial adviser uses an AI meeting assistant to transcribe a client strategy session — capturing income details, family circumstances, and investment goals.
  • An insurance broker feeds a client’s claims history and personal details into an AI platform to generate a coverage recommendation.
  • A real estate agent uses AI to summarise client briefs and match them to properties — processing location preferences, financial capacity, and household details.
  • A recruitment firm runs candidate information through an AI screening tool without informing applicants that automated analysis is involved.
  • A mortgage broker uses an AI tool to draft a loan assessment summary based on uploaded bank statements and personal financial records.

In each of these cases, an organisation is collecting personal information (often sensitive) as part of a service it already provides.

The AI is not doing something fundamentally new. It is processing information that the client provided in the course of a trusted relationship. What is new is that the client may have no idea the AI is involved, or what happens to their data once it enters an external system.

That is the gap the US lawsuits are exposing. Deploying AI without governance, consent, disclosure, and documented policy creates liability that did not exist before the tool was switched on.

The Shadow AI Problem Sits Underneath All of This

The health system cases involve AI tools that were formally adopted and deployed by the organisations themselves. The governance failure was in operationalising the consent process: a serious lapse, but at least the organisations knew they had a tool in play.

In most organisations, the problem runs deeper. Staff are routinely using personal AI accounts, often unknown to their employer, to process client information. They are doing it with good intentions, because the tools are fast, useful, and cost nothing. They are doing it because no one told them not to, because no policy exists to define what is and is not acceptable, and because using the tools seems fairly benign.

This is shadow AI: the unmanaged layer of AI use that sits beneath any formal technology adoption.

It is present in nearly every organisation, and in industries where staff routinely handle personal, financial, or commercially sensitive client information, it represents a material and unquantified exposure.

The question for any organisation is not whether shadow AI use is happening; it almost certainly is. The question is whether anything is in place to govern it.

Four Lessons That Apply Across Every Sector

1. Vendor tools do not come with your governance built in

Abridge provided a consent script, but the health systems did not operationalise it.

The organisation that deploys the software is responsible for how that tool is embedded in practice. The vendor’s terms of service are no substitute for your organisation’s own policy on how tools may be used with client data.

The liability does not transfer with the licence agreement.

2. The absence of a policy is not a neutral position

When staff use AI tools to process client information and no policy exists to govern that use, the organisation is not in a neutral, risk-free position. It is in an unmanaged risk position. In Australia, Australian Privacy Principle 11 (APP 11) under the Privacy Act requires organisations to take reasonable steps to protect personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure.

In 2026, reasonable steps must include a clear, communicated policy on AI use.

The absence of one is increasingly difficult to defend before regulators and affected clients.

3. Consent and disclosure are not administrative niceties

The central allegation in these lawsuits is simple: clients were not told.

Whether or not they would have consented if asked is almost beside the point — the failure to ask is itself the breach. Across most client-facing industries, the expectation of transparency is embedded in both regulatory obligations and the trust relationship that underpins the service.

When AI becomes part of how you deliver that service, your clients have a reasonable expectation of knowing.

4. Good intentions do not create a defensible posture

The health systems were not acting maliciously. They were deploying technology to reduce clinician burnout — a genuinely worthy objective.

That intent is irrelevant in litigation, and it will be equally irrelevant before a regulator. What matters is whether documented policies existed, whether they were communicated, and whether there is a traceable record that the organisation took its obligations seriously.

Documentation creates a defensible posture. Intent does not.

What Governance Actually Requires

AI governance is not a policy document filed once and forgotten. It is an operational framework, used daily, that connects AI use across your organisation to your legal obligations, your client relationships, and your duty of care.

At a minimum, it requires honest answers to four questions:

  • What AI tools are in use in your organisation, and by whom? (Most organisations do not know the full answer.)
  • What client or personal information is being processed through those tools, and under what terms?
  • What obligations, whether legislative, sector-specific, or professional, apply to how AI is used in your specific context?
  • What could you produce if a regulator, a client, or a court asked you to demonstrate that AI use in your organisation is controlled and appropriate?

If any of those questions produces a blank, your organisation has a governance gap.
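By way of illustration only, here is a minimal sketch of what an AI tool register might capture to answer the first two questions. Every name, field, and entry below is hypothetical, not drawn from the cases above, and a real register would be shaped by your own regulatory and professional obligations:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an AI tool register: who uses what, with which data, under what terms."""
    tool_name: str               # product or service name
    vendor: str
    business_units: list[str]    # who is using it, and in what roles
    data_categories: list[str]   # e.g. personal, sensitive, financial
    processing_terms: str        # where data goes, retention, training-use clauses
    client_disclosure: bool      # are clients told this tool is part of the service?
    consent_mechanism: str       # how and where consent is captured, if required
    approved: bool               # formally sanctioned, or shadow use found in review

# A hypothetical entry, invented purely for illustration
register = [
    AIToolRecord(
        tool_name="MeetingScribe",           # hypothetical product
        vendor="ExampleVendor Pty Ltd",      # hypothetical vendor
        business_units=["Advisory", "Client Services"],
        data_categories=["personal", "financial"],
        processing_terms="Processed offshore; retained 90 days; not used for training",
        client_disclosure=False,             # a gap, flagged for remediation
        consent_mechanism="None in place",
        approved=False,                      # surfaced during a shadow AI review
    ),
]

# The False flags above are exactly the gaps question four would expose.
for record in register:
    if not (record.approved and record.client_disclosure):
        print(f"Review required: {record.tool_name} ({record.vendor})")
```

The format matters less than the discipline: a spreadsheet serves the same purpose, provided every tool in actual use appears in it and each entry records disclosure and consent status.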

The health system lawsuits are a preview of what happens when this governance gap is exposed: not in a planned audit, but by a client who felt they were not told the truth.

The Cost of Getting This Wrong Is Rising

Australia’s regulatory environment is tightening. Privacy Act amendments expanding obligations around automated decision-making take effect in December 2026. The OAIC has issued increasingly specific guidance on AI and privacy. Sector regulators across financial services, health, and beyond are signalling that AI governance is an area of active scrutiny.

The organisations that act now and build governance frameworks while in this window will be in a materially stronger position when that scrutiny intensifies. They will have:

  • documentation that demonstrates defensible practice,
  • client-facing disclosures that reinforce trust rather than erode it, and
  • a governance narrative built before an incident rather than constructed after one.

The US health system cases are an early signal. The dynamics they reveal — AI deployed into client-facing contexts without consent frameworks, without disclosure, without policy — are not unique to healthcare. They are present wherever AI touches personal information collected as part of a service relationship.

The only question is whether your organisation addresses the gap on your own terms, or on someone else’s.

Not sure where your AI governance gaps are?

Start with an assessment. We’ll benchmark your current practices and give you a clear, prioritised roadmap.
