When Trusted Healthcare Software Quietly Hands Private Health Information to Someone Else’s AI

You go to the clinic. You interact with clinic staff, nurses, and maybe even a doctor. You’ve grown accustomed to technology at the hospital—diagnostic tools, tablets, computers, note-taking. In the United States, you take comfort knowing that HIPAA keeps your health information safe. When clinic staff use software to maintain and store your information, there is generally a separate agreement between your provider and the software company requiring the company to keep your data just as safe as if it were the provider itself. This is exactly what we want.

But then one night, the software updates. There are new functions: note-taking, voice recording, communications automation, data analysis, to name a few. These new functions are powered by AI. The companies that provide the software have made your provider’s life easy by integrating that AI seamlessly into their product. The product looks and feels the same; it just has a little extra functionality.

But what is that AI doing with your information? Does your provider know? Was your provider even informed? Is there a downstream contract restricting the AI company’s use of the data, including any secondary use?

I call it the Subprocessor Blindspot. A signed Business Associate Agreement (“BAA”) with the vendor on the screen may say little about the model host, speech engine, support queue, or vector store now touching patient data. In healthcare AI, procurement, privacy, and governance teams have to be alert to AI back doors that can leak patient data as damagingly as any breach, and far more quietly. This article explores the privacy issues around that blindspot.

Introduction: Do We Really Know Who Has Access To Our Patient Data?

A health system approves an EHR, patient portal, or revenue cycle platform. Procurement clears the deal. Security reviews the vendor. Legal signs the BAA. A few years later, the same platform releases a built-in AI charting tool, a patient messaging assistant, or a predictive denial automation that can be switched on with a feature flag, an admin setting, or the next routine update. The clinicians still see the same logo. The contract file still carries the same vendor name. But the underlying data path may now route protected health information through a speech engine, a hosted model, a vector database, a trust and safety review queue, or a subprocessor chain that nobody in the room has actually mapped.[1] That is the subprocessor blindspot. 
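
To see how quietly that switch can flip, consider a deliberately simplified sketch in Python. Nothing below is any vendor’s real code; it is a hypothetical illustration of how a feature flag in a routine release can open a new downstream data path while the interface the clinician sees stays identical.

```python
# Hypothetical vendor-side feature flag -- illustrative only.
def send_to_model_host(text: str) -> None:
    print("-> routed to a hosted model (new subprocessor data path)")

def save_to_ehr(text: str) -> None:
    print("-> stored in the EHR, as before")

FEATURES = {"ai_charting": True}  # flipped on in last night's release

def handle_note(note_text: str) -> None:
    # Same entry point, same screen for the clinician. The only change
    # is the new branch that discloses the note downstream.
    if FEATURES["ai_charting"]:
        send_to_model_host(note_text)
    save_to_ehr(note_text)

handle_note("Progress note: 54-year-old patient presenting with ...")
```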

The legal and operational risk may sit behind the trusted front end, inside the AI supply chain that processes the prompt after it leaves the approved interface. A signed BAA with the primary vendor is often the starting point of the analysis. It is rarely the end of it. If the primary vendor sends readable patient content to a downstream AI service with different retention settings, different review practices, or different commercial incentives, the healthcare organization may have created a second disclosure problem while still telling itself it is operating inside the first contract.[2]

The False Comfort of the Signed BAA

HIPAA does not ignore subcontractors. The post-HITECH framework makes business associates directly liable for certain HIPAA obligations, and it requires business associates to obtain satisfactory assurances from subcontractors that create, receive, maintain, or transmit PHI on their behalf. On paper, that sounds like a complete chain of custody. In practice, it leaves a gap that sophisticated buyers should take seriously. The covered entity usually has no direct contract with the model host, the transcription provider, the annotation vendor, or the cloud layer that may actually process the data. If the primary vendor chose those providers, or enabled them through embedded tooling, the health system may be depending on a set of downstream promises it has never seen.[3]

That problem gets sharper in cloud and AI contexts. HHS has been clear that a cloud provider can be a business associate when it stores or processes ePHI on behalf of a covered entity or business associate. HHS has also made clear that this does not turn on whether the provider holds the decryption key. A platform can still be a business associate even if it cannot independently view the content at rest. AI deployments often create a false sense of comfort around encryption language. The fact that the infrastructure is encrypted does not answer who can access the data in application memory, in support workflows, or inside the model pipeline.[4] 

In practice, while primary software providers offer assurances of data privacy and encryption, otherwise protected data is often handed to third-party AI providers in plaintext. It is therefore critical that providers know up front which subprocessors have access to patient data and what secondary uses those subprocessors may be planning.

The Hidden Data Path: Who Can Read The Plaintext After The Prompt Leaves The Screen

In AI contracting, the most common technical misunderstanding is also the most consequential one. Encryption in transit and at rest does not resolve whether the downstream provider (subprocessor) can read the content. A language model cannot summarize a chart, draft a patient message, or score a denial letter without the service reading the prompt in plaintext at the application layer. Once that basic point is understood, the real questions come into view. Is the prompt stored in logs? Is there temporary caching? Is output stored as application state? Can support engineers inspect it? Can safety personnel review it? Does the workflow create embeddings or vector indexes that persist after the session ends? Those are critical data governance questions.[5]
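
A deliberately simplified sketch makes the point concrete. The endpoint, key, and payload below are hypothetical, but the mechanics are generic: TLS protects the request on the wire, and the provider necessarily decrypts and reads the prompt in order to act on it.

```python
import requests

# Hypothetical model endpoint and key -- illustrative only.
API_URL = "https://api.example-model-host.com/v1/summarize"
API_KEY = "sk-..."  # placeholder; never hard-code real credentials

# The clinical note travels inside the request body. TLS encrypts it
# in transit, but the service must read this exact plaintext at the
# application layer to do any work with it.
payload = {
    "prompt": "Summarize this encounter note: 54-year-old patient "
              "presenting with chest pain, history of ...",
    "max_tokens": 300,
}

resp = requests.post(
    API_URL,
    json=payload,  # serialized as readable JSON before TLS wraps it
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)

# Whatever the provider logs, caches, or routes to a review queue at
# this point is governed by its retention settings, not by TLS.
print(resp.json())
```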

The answers vary by provider and by feature. Some enterprise model services state that prompts, completions, and embeddings are not used to train models without permission. Some allow configurations that sharply limit retention. Others retain data for abuse monitoring or maintain application state for specified periods unless the customer changes the setting or secures special approval. OpenAI’s current API documentation, for example, distinguishes between default abuse monitoring logs, application state, and modified retention modes such as Zero Data Retention for approved uses, while also noting that some features remain ineligible for that treatment and that some stored objects persist until deleted. Microsoft’s current Azure AI documentation takes a different posture for certain direct model deployments and states that customer prompts and completions are not made available to OpenAI or other model providers and are not used to improve models or services without permission or instruction. Those differences determine whether PHI stays within a narrowly governed data pipeline or is at risk of exposure to a much broader technical system.[6]
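
For readers who want to see what a retention setting looks like in practice, here is a minimal sketch using OpenAI’s published Python SDK. The `store` parameter controls whether the interaction is kept as a stored completion; it does not by itself alter default abuse-monitoring retention, and Zero Data Retention is an account-level approval rather than a per-request flag. Treat this as an illustration of one provider’s current interface, which can change.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    store=False,  # do not retain this interaction as a stored completion
    messages=[
        {"role": "user", "content": "Draft a patient-friendly reply ..."},
    ],
)
print(response.choices[0].message.content)
```

The governance point is that this single argument, or its default value, is part of the data path, and most BAAs never mention it.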

Healthcare leaders should pay even closer attention when the software already marketed as a clinical product sits on top of its own subprocessor network. Microsoft’s current Dragon Copilot materials disclose both a subprocessor framework and a privacy architecture that includes streaming patient encounter audio to the cloud, return of generated documentation for clinician review, limited human review, and the transmission of select customer data into a research environment where it is anonymized within 90 days and then used to enhance the product’s core functionality and AI or machine learning models. Whether a particular organization is comfortable with that design is a separate question. The important point is that this risk lives in the service-specific documentation and operational model, not in the comforting abstraction that the health system has already “vetted Microsoft” or “signed the BAA.”[7]

But you should also ask: do my patients want Microsoft to use their data to improve its product? Did they consent?

Minimum Necessary and Operational Over-Disclosure

The minimum necessary rule applies to AI deployments, but it has to be applied with care. HIPAA does not apply the minimum necessary standard to disclosures to or requests by a health care provider for treatment. That exception is important, and counsel should not pretend otherwise. The trouble begins when organizations use the same broad intuition about context everywhere else. Revenue cycle analysis, denial management, utilization review support, administrative triage, quality workflows, and many forms of patient messaging support can involve healthcare operations rather than treatment. In those settings, piping broad swaths of PHI to a third-party AI layer because the model may function better with more context is a weak answer to a regulatory problem.[8]

This is where operational design is as important as legal doctrine. A patient portal assistant may only need the current message thread and a handful of discrete chart facts. An ambient tool may only need the encounter audio and a constrained slice of the record. A denial analyzer may only need the claim, the denial reason, and the supporting documentation at issue. Once the workflow starts ingesting entire charts, long free-text histories, or collateral sensitive material because the vendor wanted convenience rather than precision, the organization has moved from useful context toward over-disclosure. HHS made a similar point in its online tracking guidance, where it warned that a privacy policy or terms of use alone do not authorize disclosure of PHI to third-party technology vendors and that regulated entities must ensure an actual HIPAA-permitted pathway exists.[9]
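
The difference between useful context and over-disclosure is often a payload-construction decision. Here is a minimal sketch, with hypothetical field names, contrasting the two approaches for a portal-messaging assistant:

```python
# Hypothetical chart record -- field names are illustrative.
chart = {
    "name": "...", "mrn": "...", "full_history": "...",  # rich, sensitive
    "active_meds": ["lisinopril 10mg"],
    "allergies": ["penicillin"],
    "current_thread": "Patient asks whether to take the med with food.",
}

# Over-disclosure: convenient, but ships the entire record downstream.
prompt_broad = f"Context: {chart}\nAnswer the patient's question."

# Minimum-necessary framing: only the discrete facts the task needs.
prompt_minimal = (
    f"Active medications: {chart['active_meds']}\n"
    f"Allergies: {chart['allergies']}\n"
    f"Message thread: {chart['current_thread']}\n"
    "Draft a reply for clinician review."
)
```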

When Service Improvement Means Secondary Use 

The most dangerous words in an AI contract are often the least technical. “Service improvement,” “analytics,” “evaluation,” “safety,” and “model enhancement” can look harmless to a business team reading at speed. In a generative AI context, those phrases may carry the permission to retain prompts, route data into review queues, test outputs against internal benchmarks, or use customer interactions to improve a broader system. That is why silent changes to online terms and quiet expansions of permitted uses deserve executive attention. The FTC has already warned that rewriting privacy policies or terms of service to allow new product development uses can be unfair or deceptive.[10]

The legal review cannot stop with the BAA. Counsel and related executives (CIO, CTO, CPO, etc.) have to compare the BAA, the master services agreement, the product-specific AI terms, the online privacy notice, the subprocessor schedule, and the technical documentation for the actual feature being deployed. Those documents often describe different rights with different levels of precision. One provider may say that prompts and outputs are not used for model training without permission. Another may require opt-in or special approval for reduced retention. Another may reserve the ability to create anonymized research datasets and use them to improve core AI functionality. Another may permit limited human review for tickets, reliability, or annotation. Once that picture comes into focus, a board or general counsel can finally ask the right question: is the organization using AI inside a tightly governed clinical service, or is it handing readable patient content to a third party that can keep more of it than the buyer ever intended?[11]

De-Identification, Embeddings, and False Comfort

Vendors frequently offer a second reassurance when questions about secondary use appear. They say the data will be de-identified. That assurance is relevant, but is it technically and legally sound? HHS recognizes two lawful paths to HIPAA de-identification, Safe Harbor and Expert Determination. Safe Harbor depends on the removal of specified identifiers. Expert Determination requires a qualified expert to conclude that the risk of re-identification is very small. Both routes can work in the right setting. Neither should be accepted as magic language for unstructured clinical text without further analysis.[12]

A progress note may still contain a rare diagnosis, a distinctive injury, a unique timeline, or a combination of facts that allows linkage back to a person. Recent research has shown that large language models can re-identify a measurable share of clinical notes even after explicit identifiers are masked. The same caution applies to embeddings and vector stores. They may not look like ordinary text, but they can still embody sensitive information and support later retrieval or inference. When a vendor says a database is not PHI because humans cannot casually read the numbers, that statement deserves pressure testing. If the system can retrieve patient-specific context from the representation, or if the representation can be reconstructed or queried in ways that reveal sensitive attributes, the privacy problem is still unresolved.[13]
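
A toy example shows why “humans cannot read the vectors” is not the test. The bag-of-words embedding below is a crude stand-in for a real model, but the retrieval mechanics are identical: the index keeps the source text next to the numbers, and a short query pulls the patient-specific note back out.

```python
import numpy as np

VOCAB = ["rare", "diagnosis", "county", "fair", "injury",
         "hypertension", "stable", "follow-up"]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; real systems use a learned model."""
    t = text.lower()
    v = np.array([float(w in t) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# A vector store keeps the source text (or a pointer to it) alongside
# each vector: the numbers are unreadable, the system is not.
notes = [
    "Rare diagnosis X, 54yo, injury at the county fair on 6/1.",
    "Routine follow-up, hypertension stable.",
]
index = np.stack([embed(n) for n in notes])

# A short query retrieves the patient-specific note verbatim.
query = embed("county fair injury, rare diagnosis")
print(notes[int(np.argmax(index @ query))])
```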

The Enforcement And Litigation Environment: Hidden Downstream Sharing Can Become An Enforcement Event

Healthcare lawyers should stop assuming that AI subprocessor problems only mature into risk after a classic data breach. OCR and the FTC have both offered a different lesson. OCR’s online tracking guidance warns that covered entities may not use third-party technologies in ways that cause impermissible disclosures of PHI, and it rejects the idea that a privacy notice alone can bless those disclosures. The FTC’s enforcement actions against GoodRx and BetterHelp show the same theme in a non-HIPAA posture. Both cases turned on sensitive health data being shared with third parties in ways that conflicted with what users were told. The FTC’s updated Health Breach Notification Rule also confirms that unauthorized disclosure can itself qualify as a reportable event under that regime. A hidden downstream AI disclosure can therefore create an enforcement narrative even when no ransomware actor ever appears.[14]

State law adds another layer, but precision matters. Washington’s My Health My Data Act imposes separate consent, processor contract, deletion, and enforcement rules for “consumer health data,” and it treats violations as unfair or deceptive acts under the state consumer protection framework. At the same time, the statute expressly exempts PHI governed by HIPAA and certain other regulated categories. That means MHMD is not a universal second hit for every covered-entity workflow involving PHI. It becomes more relevant when a healthcare organization operates hybrid consumer-facing tools, apps, websites, intake flows, or mixed datasets that sit partly outside HIPAA’s protected perimeter. The risk can be real. The analysis has to start with the statutory exemptions rather than skip over them.[15] 

California presents a different set of problems. The California Invasion of Privacy Act bars recording a confidential communication without the consent of all parties, where the circumstances reasonably indicate an expectation that the communication will be confined to the participants. The Confidentiality of Medical Information Act separately limits disclosure of medical information by providers, plans, and contractors absent authorization or another applicable exception. Those statutes do not automatically prohibit all AI-enabled healthcare access. They do, however, create fertile ground for litigation when an organization records or transmits patient communications to a third-party AI tool without a well-designed consent and disclosure process.[16]

The recent ambient scribe suits against Sharp HealthCare and Heartland Dental show that plaintiffs’ lawyers are already testing these theories. The reported allegations focus on recording and transmitting patient interactions to third-party cloud documentation vendors without adequate consent. One early dental case saw wiretap claims dismissed without prejudice under an ordinary-course exception, while the Sharp matter remains pending. The larger lesson is not that plaintiffs have already won. It is that the theory is live, attractive, and likely to be refined.[17]

Discovery, Designated Record Sets, and the Recordkeeping Trap

Once AI enters a clinical workflow, health information management problems arrive right behind it. HIPAA’s access right extends to PHI in a designated record set, including records used, in whole or in part, to make decisions about individuals. HHS has also made clear that this obligation can reach records maintained by a business associate on the covered entity’s behalf. That does not mean every prompt, log entry, or intermediate model output automatically becomes part of the designated record set. It does mean the organization needs a defensible policy position before it is asked for raw transcripts, prompt histories, retrieval outputs, or draft artifacts that influenced care decisions.[18]

AHIMA’s long-running distinction between the legal health record and the designated record set makes the problem easier to describe and harder to ignore. The designated record set can be broader than the legal health record. AHIMA has also recognized that source data, including items like WAVE files and other underlying artifacts, may need to be reproducible and accessible depending on how the organization defines its record sets and relies on those materials. With AI systems, that means a hospital can create a retrieval problem for itself if critical artifacts are retained somewhere inside a vendor or subprocessor environment but cannot be readily searched, produced, or tied to a specific patient encounter. In litigation, that same setup can turn into a hold problem or a spoliation argument if the downstream service overwrites logs on its own schedule.[19]

The Contracting Problem 

Most AI contract failures are not dramatic. They are boring. The vendor does not attach a current subprocessor schedule. The BAA promises compliance in general terms but says nothing about the specific model host or annotation service. The customer gets no advance notice before a new AI subprocessor is added. The contract does not say whether prompts, outputs, embeddings, or application state are retained, for how long, or in which jurisdiction. The support terms say little about human review. The discovery clause is silent on litigation holds. The termination language promises deletion but not certified destruction across the subprocessor chain. HHS has already signaled that cloud and BAA terms should not prevent an entity from accessing its own ePHI. That principle should be translated into hard contract language before the feature goes live.[20]

The strongest AI riders do not speak in slogans. They name the subprocessors. They lock the permitted uses. They forbid model training and generalized product improvement with customer PHI unless a separately negotiated and legally supportable pathway exists. They describe retention by data type, including prompts, outputs, transcripts, caches, vector stores, and backup copies. They require advance notice and objection rights before a new AI subprocessor is introduced. They define who may see the plaintext for debugging, support, or annotation. They require cooperation for access requests, audits, discovery, and legal holds. They make the configuration itself part of the deal, because the difference between a manageable deployment and an ugly one may be a default setting buried two menus deep.[21]

The Insurance Problem 

Executives usually ask the insurance question late. They should ask it early. If an undisclosed AI subprocessor becomes the center of a privacy class action, an FTC investigation, a wiretap claim, or a dispute over medical record artifacts, the organization should not assume that cyber coverage, technology E&O, D&O coverage, or media liability will line up neatly behind the event. Coverage will turn on the policy language, the exclusions, the insured’s representations, the way the claim is pled, and whether the event looks like a privacy failure, an unfair practices case, a professional services dispute, or something else entirely. Market commentary over the last two years has reported increasingly broad AI exclusions, including “absolute” formulations and new ISO-based exclusions in some lines. That does not mean every AI claim is uninsured. It does mean no board should accept generic reassurance from a renewal slide deck.[22]

The blunt question for the broker and coverage counsel is simple: which policy actually responds if the subprocessor you did not know about becomes the villain in a health privacy class action? If nobody in the room can answer that cleanly, the organization is carrying more uninsured AI risk than it probably understands.

Governance And The Rise Of Shadow AI Inside Approved Software: The Most Dangerous AI May Already Be Inside The Tools You Approved

Classic shadow IT involved employees using tools the organization never approved. AI has added a second, quieter version of the same problem. The risky feature may already sit inside approved enterprise software and arrive through a routine update, a product release, or a change in service-specific terms. That is why AI governance cannot be treated as a one-time procurement event. The American Medical Association has urged organizations to develop policies that govern acceptable uses, protect privacy and security, define accountability, and restrict the use of public or unapproved AI tools with patient information. Those same principles belong inside vendor management and change management for approved platforms as well.[23]

For healthcare boards, CIOs, and general counsel, that means the governance agenda has to move one layer down. Vendor inventory needs to capture embedded AI features, not just vendor names. Security review needs to include architecture review when product capabilities change, not just when the master agreement renews. Privacy teams need visibility into prompts, logs, retrieval layers, and support workflows. Clinical leaders need boundaries on which use cases are appropriate for approved tools and which require separate review. Epic’s current product materials make clear that core clinical, patient-facing, and operational workflows are all being infused with built-in AI. The most important control in that environment is no longer the ability to block a rogue app. It is the ability to detect when approved software starts sending patient data somewhere new.[24]
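
That detection control can start simply. Below is a minimal sketch, assuming the organization can export outbound destinations from proxy or firewall logs and maintains a current contracted subprocessor schedule; every domain name is hypothetical.

```python
# Compare observed egress from proxy/firewall logs against the
# contracted subprocessor schedule. All domains are hypothetical.
APPROVED_SUBPROCESSORS = {
    "api.approved-model-host.com",
    "speech.approved-vendor.net",
}

observed_destinations = {
    "api.approved-model-host.com",
    "telemetry.new-ai-feature.io",  # appeared after a routine update
}

for host in sorted(observed_destinations - APPROVED_SUBPROCESSORS):
    print(f"ALERT: unreviewed patient-data path to {host}")
```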

Conclusion: Approved Software Is Not An Approved Data Path

Healthcare organizations do not need to reject embedded AI. They do need to start treating it as an additional data access point. A BAA with the primary vendor does not tell you who reads the prompt, who stores the output, who reviews the logs, who builds the embedding index, or who gains the benefit if the data is later used to improve a broader system. Those are the questions that determine whether the deployment is defensible.

That is why the subprocessor blindspot is both a legal problem and an operational one. It lives in service-specific terms, retention settings, feature flags, support workflows, model architectures, subprocessor schedules, and policy exclusions. It reaches privacy, discovery, recordkeeping, procurement, insurance, and board oversight all at once. The health system that sees only the interface will miss the exposure. The one that maps the full data path will know where to negotiate, where to restrict, and where to say no.

Every hidden subprocessor is a liability decision disguised as a product feature.

Endnotes

  1. Epic Systems Corp., Epic AI Charting Rolls Out Alongside an Expanding Set of Built-in AI Capabilities (Feb. 4, 2026) (describing Epic’s built-in AI charting and related clinical AI features), available at https://www.epic.com/epic/post/epic-ai-charting-rolls-out-alongside-an-expanding-set-of-built-in-ai-capabilities/. Epic Systems Corp., Artificial Intelligence (last visited Mar. 20, 2026) (describing patient-facing and clinician-facing AI capabilities embedded in Epic workflows), available at https://www.epic.com/software/ai/. Fed. Trade Comm’n, AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive (Feb. 13, 2024) (warning that retroactive or opaque term changes permitting new data uses, including AI training, may be unfair or deceptive), available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive.
  2. U.S. Dep’t of Health & Hum. Servs., Business Associates (May 24, 2019) (explaining that business associates and downstream subcontractors can be directly liable under HIPAA), available at https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/business-associates/index.html. 45 C.F.R. §§ 164.502(e)(1)(ii), 164.504(e)(5) (2025) (requiring business associates to obtain satisfactory assurances from subcontractors and to impose the same restrictions and conditions by contract), available at https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.502 and https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.504. U.S. Dep’t of Health & Hum. Servs., Business Associate Contracts (Jan. 25, 2013) (providing sample contract language that may be adapted for business-associate-to-subcontractor arrangements), available at https://www.hhs.gov/hipaa/for-professionals/covered-entities/sample-business-associate-agreement-provisions/index.html.
  3. U.S. Dep’t of Health & Hum. Servs., May a HIPAA Covered Entity or Business Associate Use a Cloud Service to Store or Process ePHI? (Oct. 5, 2016) (explaining that a cloud service provider storing or processing ePHI can be a business associate and that the covered entity or business associate must understand the cloud environment well enough to perform risk analysis and risk management), available at https://www.hhs.gov/hipaa/for-professionals/faq/2075/may-a-hipaa-covered-entity-or-business-associate-use-cloud-service-to-store-or-process-ephi/index.html.
  4. U.S. Dep’t of Health & Hum. Servs., If a CSP Stores Only Encrypted ePHI and Does Not Have a Decryption Key, Is It a HIPAA Business Associate? (Oct. 5, 2016) (explaining that a cloud service provider can still be a business associate even if it cannot view encrypted ePHI at rest), available at https://www.hhs.gov/hipaa/for-professionals/faq/2076/if-a-csp-stores-only-encrypted-ephi-and-does-not-have-a-decryption-key-is-it-a-hipaa-business-associate/index.html.
  5. OpenAI, Data Controls in the OpenAI Platform (last visited Mar. 20, 2026) (describing abuse monitoring logs, application state retention, Zero Data Retention, Modified Abuse Monitoring, and endpoint-specific retention behavior), available at https://developers.openai.com/api/docs/guides/your-data. Microsoft, Data, Privacy, and Security for Azure Direct Models in Microsoft Foundry (last visited Mar. 20, 2026) (stating that prompts, completions, embeddings, and training data are not available to OpenAI or other model providers and are not used to improve models or services without permission or instruction), available at https://learn.microsoft.com/en-us/azure/foundry/responsible-ai/openai/data-privacy.
  6. OpenAI, Data Controls in the OpenAI Platform (last visited Mar. 20, 2026) (explaining that some endpoints remain ineligible for Zero Data Retention, that certain application state may persist, and that extended prompt caching and some hosted features alter retention behavior), available at https://developers.openai.com/api/docs/guides/your-data. Microsoft, Data, Privacy, and Security for Azure Direct Models in Microsoft Foundry (last visited Mar. 20, 2026) (describing a contrasting enterprise posture for Azure Direct Models), available at https://learn.microsoft.com/en-us/azure/foundry/responsible-ai/openai/data-privacy.
  7. Microsoft, Privacy White Paper, Dragon Copilot (last updated Mar. 11, 2026) (describing cloud streaming of encounter audio, transcript generation, 90-day anonymization of select customer data, continued use of anonymized data to enhance core AI/ML models, and limited human review), available at https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/whitepapers/privacy. Microsoft, Sub-processors White Paper, Dragon Copilot (last updated Feb. 18, 2026) (identifying Dragon Copilot subprocessors, their roles, and geographic processing locations), available at https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/whitepapers/subprocessors.
  8. 45 C.F.R. §§ 164.502(b)(2), 164.514(d) (2025) (stating when the minimum necessary standard applies and exempting disclosures to or requests by a health care provider for treatment), available at https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.502 and https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.514.
  9. U.S. Dep’t of Health & Hum. Servs., Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates (June 20, 2024) (warning that privacy policies or website terms alone do not authorize impermissible disclosures of PHI to third-party technology vendors), available at https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/hipaa-online-tracking/index.html.
  10. Fed. Trade Comm’n, AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive (Feb. 13, 2024) (warning that retroactive or opaque term changes permitting new data uses can create unfairness or deception risk), available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive.
  11. Microsoft, Data, Privacy, and Security for Azure Direct Models in Microsoft Foundry (last visited Mar. 20, 2026) (stating that customer prompts, completions, embeddings, and training data are not used to improve models without permission), available at https://learn.microsoft.com/en-us/azure/foundry/responsible-ai/openai/data-privacy. OpenAI, Data Controls in the OpenAI Platform (last visited Mar. 20, 2026) (describing logging, application state, and retention settings that vary by endpoint and configuration), available at https://developers.openai.com/api/docs/guides/your-data. Microsoft, Privacy White Paper, Dragon Copilot (last updated Mar. 11, 2026) (describing anonymization, research-environment processing, and limited human review), available at https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/whitepapers/privacy.
  12. U.S. Dep’t of Health & Hum. Servs., Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the HIPAA Privacy Rule (Feb. 3, 2025) (describing Safe Harbor and Expert Determination as the two recognized de-identification methods and emphasizing that Expert Determination requires a very small risk of re-identification), available at https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html. 45 C.F.R. § 164.514(a)-(c) (2025) (codifying the de-identification standards and re-identification code restrictions), available at https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.514.
  13. Shuyue Sun et al., DIRI: Adversarial Patient Reidentification with Large Language Models for Evaluating Clinical Text Anonymization, Proc. AMIA Annu. Symp. (2025) (showing that large language models can re-identify a measurable portion of notes even after explicit identifiers are masked), available at https://pmc.ncbi.nlm.nih.gov/articles/PMC12150728/. OpenAI, Data Controls in the OpenAI Platform (last visited Mar. 20, 2026) (stating that vector store and related objects may persist until deleted, which makes lifecycle governance material in retrieval-based systems), available at https://developers.openai.com/api/docs/guides/your-data.
  14. U.S. Dep’t of Health & Hum. Servs., Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates (June 20, 2024) (warning that impermissible disclosures to third-party technologies can violate HIPAA even without a traditional breach scenario), available at https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/hipaa-online-tracking/index.html. Fed. Trade Comm’n, FTC Enforcement Action to Bar GoodRx from Sharing Consumers’ Sensitive Health Info for Advertising (Feb. 1, 2023) (announcing a civil penalty and order tied to unauthorized sharing of sensitive health information), available at https://www.ftc.gov/news-events/news/press-releases/2023/02/ftc-enforcement-action-bar-goodrx-sharing-consumers-sensitive-health-info-advertising. Fed. Trade Comm’n, FTC Gives Final Approval to Order Banning BetterHelp from Sharing Sensitive Health Data for Advertising, Requiring It to Pay $7.8 Million (July 14, 2023) (announcing final relief tied to sensitive health-data sharing that conflicted with privacy representations), available at https://www.ftc.gov/news-events/news/press-releases/2023/07/ftc-gives-final-approval-order-banning-betterhelp-sharing-sensitive-health-data-advertising. Fed. Trade Comm’n, FTC Finalizes Changes to the Health Breach Notification Rule (Apr. 26, 2024) (explaining that the rule more clearly covers health apps and unauthorized disclosures of health data), available at https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-finalizes-changes-health-breach-notification-rule.
  15. Wash. Rev. Code ch. 19.373 (2025) (defining consumer health data, requiring consent and processor contracts in covered circumstances, creating a private right of action through the Consumer Protection Act, and setting out statutory exclusions and exemptions), available at https://app.leg.wa.gov/RCW/default.aspx?cite=19.373&full=true.
  16. Cal. Penal Code § 632(a) (West 2025) (prohibiting recording of a confidential communication without the consent of all parties where confidentiality is reasonably expected), available at https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=PEN&sectionNum=632. Cal. Civ. Code § 56.10(a) (West 2025) (restricting disclosure of medical information by providers, plans, and contractors absent authorization or another permitted basis), available at https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=CIV&sectionNum=56.10.
  17. Reuters, Health Care Ambient Scribes Offer Promise but Create New Legal Frontiers (Jan. 23, 2026) (reporting on allegations in the Sharp HealthCare and Heartland Dental ambient-scribe suits, the dismissal without prejudice of the Heartland wiretap claims, and the continued pendency of the Sharp matter), available at https://www.reuters.com/legal/litigation/health-care-ambient-scribes-offer-promise-create-new-legal-frontiers–pracin-2026-01-23/.
  18. U.S. Dep’t of Health & Hum. Servs., Individuals’ Right Under HIPAA to Access Their Health Information (May 30, 2025) (explaining that individuals have a right to access PHI in a designated record set, including records used, in whole or in part, to make decisions about them, even if maintained by a business associate), available at https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/index.html. U.S. Dep’t of Health & Hum. Servs., What Personal Health Information Do Individuals Have a Right Under HIPAA to Access from Their Health Plans and Health Care Providers? (June 23, 2016) (explaining that designated record sets include medical records, billing records, and other records used, in whole or in part, to make decisions about individuals), available at https://www.hhs.gov/hipaa/for-professionals/faq/2042/what-personal-health-information-do-individuals/index.html.
  19. Am. Health Info. Mgmt. Ass’n, Fundamentals of the Legal Health Record and Designated Record Set (Nov. 26, 2024) (explaining that the designated record set can be broader than the legal health record and that retained artifacts must be reproducible and accessible according to organizational policy), available at https://journal.ahima.org/Portals/0/archives/AHIMA%20files/Fundamentals%20of%20the%20Legal%20Health%20Record%20and%20Designated%20Record%20Set.pdf. Am. Health Info. Mgmt. Ass’n, Legal Process and Electronic Health Records (Dec. 3, 2024) (explaining that discoverability can reach beyond the formal legal health record and that HIM and IT custodianship issues matter in litigation), available at https://journal.ahima.org/Portals/0/archives/AHIMA%20files/Legal%20Process%20and%20Electronic%20Health%20Records.pdf.
  20. U.S. Dep’t of Health & Hum. Servs., May a HIPAA Covered Entity or Business Associate Use a Cloud Service to Store or Process ePHI? (Oct. 5, 2016) (stating that BAA and related service terms should not prevent the entity from accessing its own ePHI), available at https://www.hhs.gov/hipaa/for-professionals/faq/2075/may-a-hipaa-covered-entity-or-business-associate-use-cloud-service-to-store-or-process-ephi/index.html. U.S. Dep’t of Health & Hum. Servs., Business Associate Contracts (Jan. 25, 2013) (supplying model provisions that can be adapted to downstream subcontractor arrangements), available at https://www.hhs.gov/hipaa/for-professionals/covered-entities/sample-business-associate-agreement-provisions/index.html.
  21. OpenAI, Data Controls in the OpenAI Platform (last visited Mar. 20, 2026) (showing that retention and logging can turn on endpoint choice, feature choice, and configuration), available at https://developers.openai.com/api/docs/guides/your-data. Microsoft, Privacy White Paper, Dragon Copilot (last updated Mar. 11, 2026) (showing that support, reliability, annotation, and anonymization workflows must be addressed explicitly rather than assumed away), available at https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/whitepapers/privacy. Microsoft, Sub-processors White Paper, Dragon Copilot (last updated Feb. 18, 2026) (showing why named subprocessor schedules and location disclosures matter), available at https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/whitepapers/subprocessors.
  22. Hunton Andrews Kurth LLP, The Continued Proliferation of AI Exclusions (May 28, 2025) (describing the emergence of broad and even “absolute” AI exclusions in certain liability forms), available at https://www.hunton.com/hunton-insurance-recovery-blog/the-continued-proliferation-of-ai-exclusions. Iowa State Bar Ass’n, AI Exclusions Are Creeping into Insurance: But Cyber Policies Aren’t the Issue (Yet) (Sept. 17, 2025) (observing that AI exclusions are appearing across some coverage lines and that policy language must be reviewed carefully), available at https://www.iowabar.org/?blAction=showEntry&blogEntry=131301&pg=IowaBarBlog. Hunton Andrews Kurth LLP, How Insurance Policies Are Adapting To AI Risk, Law360 (July 2, 2025) (explaining how emerging exclusions and affirmative AI coverage products are reshaping the insurance market), available at https://www.hunton.com/insights/publications/how-insurance-policies-are-adapting-to-ai-risk.
  23. Am. Med. Ass’n, How to Develop AI Policies That Work for Your Organization’s Needs (Aug. 18, 2025) (urging organizations to adopt governance structures, define permitted uses, and set clear organizational policies for AI use), available at https://www.ama-assn.org/practice-management/digital-health/how-develop-ai-policies-work-your-organization-s-needs. Am. Med. Ass’n, AMA Principles for Augmented Intelligence Development, Deployment, and Use (Nov. 12, 2024) (calling for transparency, oversight, privacy, security, accountability, and qualified human intervention in healthcare AI), available at https://www.ama-assn.org/system/files/ama-ai-principles.pdf.
  24. Epic Systems Corp., Epic AI Charting Rolls Out Alongside an Expanding Set of Built-in AI Capabilities (Feb. 4, 2026) (illustrating how AI is being embedded into clinical and operational workflows inside existing platforms), available at https://www.epic.com/epic/post/epic-ai-charting-rolls-out-alongside-an-expanding-set-of-built-in-ai-capabilities/. Epic Systems Corp., Artificial Intelligence (last visited Mar. 20, 2026) (illustrating the same trend in patient-facing and administrative workflows), available at https://www.epic.com/software/ai/. Fed. Trade Comm’n, AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive (Feb. 13, 2024) (showing why governance must account for evolving features and changing terms after go-live), available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive.
