AI-Related Class Action Exposure
December 15, 2025
By Seth A. Goldberg and Dominique Kilmartin
An AI-powered teddy bear called Kumma, designed to interact with children using OpenAI models, reportedly relayed sexually explicit and other inappropriate responses to product testers. Daniel Wu, A teddy bear powered by AI told safety testers about knives, pills and sex, MSN (Nov. 21, 2025). Tesla’s “Actually Smart Summon” parking system caused at least 16 reported accidents, prompting an investigation covering an estimated 2.6 million vehicles. AI Incident Database, Incident 889: Tesla's 'Actually Smart Summon' Feature Reportedly Linked to Multiple Parking Lot Collisions. A lawsuit filed against OpenAI by the parents of a 23-year-old alleged that ChatGPT sent their son humanlike messages encouraging his suicide. Rob Kuznia, Allison Gordon and Ed Lavandera, ‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself, CNN (Nov. 20, 2025).
Scenarios like these are becoming increasingly common as AI has become a fixture in our personal and professional lives. But with increased use comes increased risk. AI hallucinations can alter outcomes, hijacked AI can result in identity or property theft, and AI trained on biased data can perpetuate those biases. Unsurprisingly, the use of AI and AI systems themselves are resulting in litigation—including class actions—with increasing frequency.
This article provides an overview of types of AI and the risks associated with them. It also provides examples of how those risks have become reality, resulting in the increased filing of AI-related class actions with potentially detrimental consequences to the financial and reputational well-being of the companies defending against them.
Types of AI
The types of AI are categorized by their capabilities and functionalities. Generative AI uses machine learning and large language models (LLMs) to generate text, images, audio, video, and other content. It can draft documents, make music, tell stories, and produce realistic images. Agentic AI acts more autonomously in that it can take action based on continuous learning to achieve its objectives. It is used in self-driving cars, virtual assistants, and industrial robotics.
Generative and Agentic AI systems are “narrow” or “weak” in that they depend on information supplied by humans and can perform only predetermined functions to achieve certain objectives. General AI and Super AI, on the other hand, remain theoretical; if developed, those systems would be able to make their own decisions and train themselves to become more powerful, ultimately mimicking human decision-making and emotion.
Risks of AI
While there are macro risks of using AI, such as eliminating jobs and reducing the workforce, class action litigation relating to AI involves micro- or system-specific risks, like privacy, fairness and bias, and safety. According to Stanford University’s 2025 AI Index Report, 233 incidents of ethical misuse of AI in 2024 were reported to the AI Incident Database, a 56.4% increase over the number reported in 2023. Stanford University, AI Index Report 2025. Reported incidents include robotics mishaps causing physical injury, deepfakes causing financial and emotional injury, and biased algorithms causing discriminatory outcomes. Through the first three quarters of 2025, 270 incidents of ethical misuse were reported. AI Incident Database.
Responsible AI refers to the development and implementation of AI systems in ways that address such ethical misuses. According to the Global State of Responsible AI survey, which analyzed responses from 1,500 organizations in 19 industries across 20 countries, businesses grew markedly more concerned between 2024 and 2025 about AI risks that are already being litigated in class actions, including: privacy and data-related risks (51% in 2024 versus 65% in 2025), like data leaks; reliability risks (45% versus 59%), like hallucinations; compliance risks (29% versus 56%), like copyright violations; human interaction risks (35% versus 40%), like misuse and misinformation; and diversity and discrimination risks (29% versus 35%). See AI Index Report at 181-82.
AI-Related Class Actions
Businesses are growing increasingly concerned about the risks associated with AI for good reason: litigation, including class actions like those described below, has been increasing against both companies that develop AI and companies that deploy it.
Copyright Infringement
There has been a steady rise in AI-related copyright lawsuits where owners of copyrighted materials allege that AI companies improperly used their copyrighted works to train AI models.
One of the most closely watched AI copyright lawsuits, Bartz v. Anthropic PBC, 24-cv-05417 (N.D. Cal.), was filed by three esteemed authors against Anthropic, a leading developer of LLMs. In August 2024, the authors filed a putative class action in the Northern District of California alleging that Anthropic directly infringed their copyrights when it relied on pirated versions of their e-books to train “Claude,” Anthropic’s flagship next-generation AI assistant.
In March 2025, Anthropic moved for summary judgment, arguing that its use of copyrighted books to train Claude was “fair use”—a narrow exception to copyright infringement that permits the limited use of copyrighted material for certain “transformative” purposes, such as teaching, scholarship, and research. In June 2025, the court granted in part and denied in part Anthropic’s motion, holding that although Anthropic’s use of lawfully acquired books for AI training was “quintessentially transformative,” its creation of a “central library” comprised in part of pirated books was not.
By mid-July, the court had certified a class of U.S. copyright holders whose works Anthropic had acquired, a class that could encompass millions of works. Because statutory copyright damages can reach up to $150,000 per infringed work, Anthropic’s potential exposure was massive: at that rate, even one million infringed works would imply $150 billion in theoretical exposure. Just weeks later, the parties announced a proposed settlement, which has since been granted preliminary approval by the court. Anthropic has reportedly agreed to pay at least $1.5 billion to settle the suit, which, according to motion papers, is the largest publicly reported copyright recovery in history.
Algorithmic Bias/Discrimination
Anthropic demonstrates the risks associated with using copyrighted materials to train AI. But what if AI isn’t properly trained in the first place? AI has been at the center of class actions alleging “algorithmic bias”—that is, when AI algorithms produce discriminatory or unfair outcomes.
Among the most closely watched algorithmic bias lawsuits is Mobley v. Workday, 23-cv-00770 (N.D. Cal.), a class action pending in the Northern District of California. There, the plaintiffs allege that Workday—a provider of HR management software that connects job applicants to potential employers—used an AI-based applicant filtering algorithm that disproportionately disqualified applicants based on certain protected characteristics (i.e., their race, age, or disability) in violation of federal antidiscrimination laws. Workday’s motion to dismiss the plaintiffs’ amended complaint was denied in July 2024. And this May, the court granted conditional certification of a nationwide collective on the plaintiffs’ age discrimination claims under the Age Discrimination in Employment Act. The pool of potential plaintiffs (and Workday’s potential liability) is astronomical: Workday has represented in public filings that approximately 1.1 billion applications were rejected using the algorithm at issue.
Workday is not the only company facing significant liability for alleged algorithmic bias in the workplace. This summer, aggrieved job applicants filed a class action against Sirius XM in the Eastern District of Michigan, alleging race-based discrimination through Sirius XM’s use of its AI hiring tool. Harper v. Sirius XM Radio, 25-cv-12403 (E.D. Mich.). Moreover, liability is not limited to workplace discrimination. Insurance giant State Farm has been defending against a class action in the Northern District of Illinois, where the plaintiffs allege that State Farm’s use of algorithms in processing claims disproportionately impacts Black policyholders in violation of the Fair Housing Act. Huskey v. State Farm Fire and Casualty, 22-cv-7014 (N.D. Ill.). Accordingly, no industry is immune from claims of algorithmic bias.
Misuse
Even if a company’s AI works fairly and as intended, the company still may be at risk if its users are unaware that AI is even being used. Although tech giants like Google and OpenAI are no strangers to lawsuits alleging that they deployed AI without their users’ knowledge or consent, the insurance company Humana has also come under fire for failing to disclose its use of AI. In a class action lawsuit pending in the Western District of Kentucky, Medicare Advantage beneficiaries allege that Humana used AI—as opposed to the determinations of real-life clinicians—to deny post-acute care, and failed to disclose that fact to its insureds in violation of their insurance agreements. Barrows v. Humana, 23-cv-654 (W.D. Ky.).
Notably, the court recently determined that the plaintiffs’ common law claims for breach of contract, fraud, and the like were sufficiently pled to withstand Humana’s motion to dismiss. Accordingly, ensuring that AI is working as intended is only half the battle—companies must also ensure that their contracts and other communications accurately convey to consumers whether and for what purpose AI is being used.
Takeaways
Given the increase in AI-related incidents and the uptick in AI-related class actions, it is not surprising that businesses are growing increasingly concerned about the risks associated with the use of AI. While AI has the potential to bring enormous value to any business, robust compliance protocols and risk management strategies are essential to avoid class action exposure.
Seth A. Goldberg is a partner in Pashman Stein Walder Hayden's litigation, class actions and privacy and information governance practices. Dominique Kilmartin is an associate in the firm’s litigation, class actions and appellate advocacy practices.
Reprinted with permission from the December 15, 2025 edition of the “New Jersey Law Journal” © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.