As Healthcare AI Expands, States Are Passing Laws to Protect Consumers
*Originally published by The Governance Institute
In the last year, state activism in healthcare consumer protection has surged, with new laws that heighten oversight of for-profit investors’ engagement with healthcare marketplaces1 and scrutinize pharmaceutical pricing practices.2 As part of this activism, several state legislatures have enacted laws regulating use of artificial intelligence (AI) in healthcare delivery.
Overview
States are beginning to regulate the use of AI as a patient care support tool by both healthcare payers and providers. Several factors combine to make this state activism likely to be more prevalent in 2025, including:
- Technical progress and a favorable business climate for expanding AI in healthcare
- Uncertainty about federal policy efforts in this area
- Strong consumer interest in and concern about both application of AI in healthcare and perceived abuses by large health insurers
At the same time, private class action litigation and state attorneys general are challenging AI practices in the healthcare sector. Understanding this recent consumer protection activism3 is crucial for healthcare entities. In this climate, governing boards should ensure that the organization monitors these evolving state law developments and, as applicable, adjusts its operations.
Uncertainty About Federal Policy
In 2023, President Biden issued Executive Order 14110 to ensure responsible AI development, focusing on safety, competition, non-discrimination, consumer protection, and data privacy.4 However, President Trump revoked this order and issued Executive Order 14179, focusing on developing AI systems “free from ideological bias.”5 President Trump’s order tasked departments with reviewing and potentially rescinding Biden’s AI regulations, leaving federal AI guidance, particularly on non-discrimination in healthcare, uncertain. As a general matter, the current administration appears to be supportive of expanded use of AI across many sectors of the United States economy.
Recent Consumer Protection Litigation Challenging AI in Healthcare
Health Plans
Recent uses of AI tools by healthcare payers have prompted national class action lawsuits. In July 2023, plaintiffs filed a class action lawsuit against Cigna, alleging that the company wrongfully denied claims using an AI tool.6 They claim the tool allowed claims to be rejected automatically without proper physician review, breaching the contract, breaching the implied covenant of good faith and fair dealing, and violating California’s Unfair Competition Law. The plaintiffs argue that Cigna’s use of the tool led to over 300,000 payment denials with minimal review time, and that Cigna failed to disclose the AI’s role in decision making.
In November and December 2023, two more groups of plaintiffs brought class action lawsuits against UnitedHealth Group and Humana, respectively.7 Both lawsuits allege that the health plans improperly used an AI tool to deny patient services, resulting in breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, and insurance bad faith. Plaintiffs argue that the AI model improperly replaced doctors’ recommendations, failed to consider individual patient needs, and prematurely ceased care coverage.
UnitedHealth Group’s and Humana’s motions to dismiss were denied, and Cigna’s motion to dismiss is still pending; all three cases remain on the courts’ dockets.8
AI Technology Used in Hospitals
In September 2024, Texas Attorney General Ken Paxton reached a settlement with Pieces Technologies, an AI healthcare company, over allegations of false claims about the accuracy and safety of its products as used in Texas hospitals.9 The Texas Attorney General asserted that Pieces misrepresented its AI’s accuracy, potentially misleading hospitals and risking patient privacy and safety.10 As part of the settlement, Pieces must now disclose its products’ accuracy and ensure hospital staff understand the appropriate reliance on its AI tools.11
State Legislatures Act to Protect Consumers from Wrongful Use of Healthcare AI
In the absence of clear guidance at the federal level, and possibly taking a page from recent litigation efforts, several states have enacted laws to regulate AI use in healthcare. As a general proposition, these laws are designed to:
- Prevent use of discriminatory AI models that lead to selection bias.12
- Require healthcare entities that use AI to disclose such use to patients.
- Require healthcare payers and providers to maintain final say over medical determinations rather than ceding this authority to AI tools.
Of the states passing such laws, California, Colorado, and Utah are worthy of special note.
Health and Disability Insurers
In California, healthcare service plans and disability insurers must now adhere to strict procedures for AI utilization review, ensuring that a licensed physician or healthcare professional maintains ultimate responsibility for making personalized medical necessity decisions for each member of a healthcare service plan or health insurer.13 Health and disability insurers must maintain written policies for using AI that align with clinical decision-making guidelines, overseen by licensed medical directors.14
Meanwhile, Colorado lawmakers created a framework that requires health insurers to demonstrate that: 1) use of AI or algorithms to manage patient data is lawful under forthcoming insurance regulations and 2) AI programs or algorithms have been tested for unfair discrimination.15 Colorado’s Division of Insurance has proposed regulations that would require governing boards for health insurance companies to form cross-functional committees composed of representatives from their legal, compliance, risk management, and product development teams to ensure compliance with AI regulations (e.g., data reporting requirements that indicate what AI systems the insurers deploy and any external consumer data the insurance providers use).16 Additionally, under the pending regulations, these boards must create a risk management framework to ensure that AI, algorithms, and predictive models do not result in unfair discrimination in payees’ claims.17
Healthcare Providers
Both California and Utah now require certain physicians and healthcare providers to disclose the use of generative AI to patients.18 The Utah law applies to all Utah-licensed professionals, including physicians, and requires them to disclose AI use verbally at the start of oral interactions or electronically before written interactions.19 The California law applies to patient communications from health facilities, clinics, and physician offices; unless a human provider reviews AI-generated communications before they are sent, the communications must include a disclaimer indicating AI generation and contact information for a human provider.20
An existing California law continues to require laboratory directors or authorized designees to establish criteria for auto-verifying clinical laboratory results, ensuring that humans make final determinations about diagnostic test results when using AI-assisted diagnostics.21 Similarly, since 2024, Colorado healthcare providers who deploy AI to make “consequential decisions” about patient care or the cost of care have been required to: 1) implement risk management policies, 2) mitigate algorithmic discrimination, and 3) conduct impact assessments about the use of AI to deliver healthcare services.22
The California, Colorado, and Utah laws highlight a growing trend towards state regulation of AI in healthcare for the benefit of consumers, focusing on transparency, accountability, and the ethical use of technology.23
Recommendations
As hospitals and health systems increasingly integrate AI into their operations, compliance with state law is paramount. In California and Utah, certain healthcare providers must inform patients when generative AI is used in clinical communications; in California, AI disclosures must be accompanied by clear instructions on how patients can contact human providers. Additionally, AI should not improperly replace clinician involvement in medical decision making. To that end, California mandates that licensed professionals oversee AI-driven utilization reviews. For health systems with affiliated health plan operations, Colorado requires board-directed integrated compliance and risk management frameworks to prevent discriminatory outcomes.
In states that do not currently have laws directly governing the use of AI in healthcare delivery, boards may want to look to California, Colorado, and Utah, as these states provide valuable examples of comprehensive regulatory frameworks that other states may adopt in the future. In addition, boards should monitor national class action litigation and state attorney general actions challenging the use of AI in healthcare. These various efforts may directly or indirectly affect how hospitals, health systems, and health insurers deploy AI, for example by prompting them to develop disclosure policies when AI is integrated into patient care, to take steps to ensure that healthcare professionals demonstrably retain ultimate decision-making authority when using AI as a patient care support tool, and to implement integrated compliance and risk management frameworks for AI. Regular audits and assessments of AI systems can help identify and address potential biases and security risks.
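To make the audit recommendation concrete, the following is a minimal illustrative sketch of one check an AI audit program might run: comparing claim-denial rates across demographic groups, in the spirit of Colorado’s requirement that AI tools be tested for unfair discrimination. The data fields, group labels, and the 20 percent disparity threshold are all hypothetical assumptions for illustration, not requirements drawn from any statute or regulation discussed above.

```python
# Illustrative sketch only: a simple disparity check on claim-decision data.
# All field names, sample data, and the 20% threshold are hypothetical.
from collections import defaultdict

def denial_rates_by_group(decisions):
    """decisions: iterable of (group, denied) pairs; returns denial rate per group."""
    totals, denials = defaultdict(int), defaultdict(int)
    for group, denied in decisions:
        totals[group] += 1
        if denied:
            denials[group] += 1
    return {g: denials[g] / totals[g] for g in totals}

def disparity_flag(rates, max_gap=0.20):
    """Return the gap between highest and lowest group denial rates,
    and whether it exceeds the (hypothetical) audit threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap

# Hypothetical sample: (demographic group, claim denied?)
sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]
rates = denial_rates_by_group(sample)
gap, flagged = disparity_flag(rates)
print(rates, round(gap, 2), flagged)
```

A real audit would use validated statistical methods and legally defined protected classes; the point of the sketch is simply that disparity metrics can be computed routinely and reviewed by the cross-functional committees described above.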
Conclusion
As AI continues to reshape healthcare delivery, hospitals and health systems must take proactive steps to navigate the legal landscape. Increasingly, this will entail monitoring class action litigation and state-initiated litigation and legislative efforts to protect consumers against AI abuses in healthcare delivery. Boards play a key role in assuring this vigilance, and in prioritizing integrated implementation of AI-focused compliance and risk management.
Key Board Takeaways
Given emerging state activism to protect consumers in healthcare AI, boards should:
- Ensure the organization is closely monitoring, and as appropriate modifying operations in response to: a) state legislation regulating use of AI in the healthcare sector, implementing necessary consumer protections as applicable; and b) national class action litigation and state attorney general actions concerning AI in healthcare.
- Consider developing cross-functional governance committees composed of representatives from legal, compliance, risk management, and product development to ensure integrated AI oversight.
- Taking cues from key themes in state AI oversight efforts, consider:
- Establishing policies mandating regular audits of AI systems to maintain transparency and accountability
- Ensuring AI disclosure policies are in place for patients
- Confirming healthcare professionals’ ultimate decision-making authority when utilizing AI in clinical settings
*This article was originally published by The Governance Institute. Reprinted with permission from The Governance Institute, Lincoln, NE, (2025), (Anne Murphy, As Healthcare AI Expands, States Are Passing Laws to Protect Consumers). The author would also like to thank Aida Al-Akhdar, J.D., from ArentFox Schiff for her contributions to this article.
1 Anne Murphy, et al., “What Private Equity Investors and Real Estate Investment Trusts Need to Know About the Newly Enacted Massachusetts Health Oversight Law,” ArentFox Schiff, January 9, 2025.
2 Anne Murphy, Stephanie Trunk, and Aida Al-Akhdar, “Massachusetts Enacts Drug Pricing Legislation: Introducing PBM Licensure, Mandatory Cost Reporting, and Consumer Cost-Sharing Limits,” ArentFox Schiff, February 27, 2025.
3 Michelle M. Mello, et al., “President Biden’s Executive Order on Artificial Intelligence—Implications for Healthcare Organizations,” JAMA Network, November 30, 2023 (explaining that Executive Order 14110, issued on October 30, 2023, addresses “concerns about unfair applications of AI tools in healthcare delivery and insurance coverage [and that] healthcare facilities and insurers could find themselves in the bullseye”); Angel West, “The Legal Landscape for AI-Enabled Decisions for Healthcare Claims and Coverage Continues to Evolve: From Litigation to Emerging Legislation,” MaynardNexsen Newsroom, January 20, 2025 (explaining that President Biden issued Executive Order 14110 in “attempts to address AI standards and establish some guidance and guardrails in the healthcare industry”).
4 Executive Order No. 14,110, 88 Fed. Reg. 75,191 (October 30, 2023).
5 Executive Order No. 14,179, 90 Fed. Reg. 8,741 (January 23, 2025).
6 Kisting-Leung v. Cigna Corp., 2:23-cv-01477-DAD-CSK (E.D. Cal. July 24, 2023).
7 Estate of Lokken v. UnitedHealth Group, Inc., 23-cv-03514-JRT-DTS (D. Minn. November 14, 2023); Barrows v. Humana, Inc., 3:23-cv-654-CHB (W.D. Ky. December 12, 2023).
8 West, January 20, 2025.
9 Petition for Approval and Entry of Assurance of Voluntary Compliance, Texas v. Pieces Techs., Inc., DC-24-13476 (September 21, 2024) (establishing a settlement agreement under the Texas Deceptive Trade Practices—Consumer Protection Act).
10 Ibid.
11 Ibid.
12 Malwina Anna Wójcik, “Algorithmic Discrimination in Healthcare: An EU Law Perspective,” Health and Human Rights Journal, June 24, 2022 (“When big data on which the algorithm is trained are not representative of the target patient population, selection bias occurs. In this case, AI can produce unintended results, such as interpreting the lack of data as the lack of disease.”).
13 California Health & Safety Code § 1367.01(k)(1) (effective January 1, 2025); California Insurance Code § 10123.135(j)(1) (effective January 1, 2025).
14 California Insurance Code § 10123.135(j)(1).
15 Colorado Revised Statutes Ann. § 10-3-1104.9 (most recent provisions going into effect on July 1, 2025).
16 Ibid.
17 3 Colorado Code Regulations 702-10:10-1-1, Draft Proposed Amended Regulation 10-1-1 Governance and Risk Management Frameworks.
18 California Health & Safety Code § 1339.75 (effective January 1, 2025); Utah Code § 13-72-1 et seq. (effective May 1, 2024).
19 Utah Code § 13-72-1 et seq. (effective May 1, 2024).
20 California Health & Safety Code § 1339.75 (effective January 1, 2025).
21 California Business & Professions Code § 1209.1 (effective January 1, 2007).
22 Colorado Revised Statutes Ann. § 6-1-1701 et seq. (effective May 17, 2024).
23 Additionally, Kentucky and Rhode Island have laws that regulate use of AI devices to perform eye exams. Kentucky Revised Statutes § 367.6802 (effective July 14, 2018); 23 Rhode Island General Laws § 23-97-1-7 (effective June 29, 2022). In Oklahoma, use of medical algorithms to generate treatment protocols is only appropriate if a physician reviews the protocol before the protocol is implemented. Oklahoma Statutes Title 63, § 1-290 et seq. (effective May 1, 2012). Lastly, Virginia law requires assisted living facilities and skilled nursing facilities to establish policies for the use of intelligent personal assistants provided by patients, ensuring compliance with HIPAA and protecting health information. Virginia Code § 32.1-127 (effective July 1, 2025).