The Product Liability & Mass Tort Monitor is a monthly newsletter delivering critical updates, data insights and actionable strategies for navigating the complexities of product liability and mass tort litigation. This month’s issue takes a look at the brave new world of product liability emerging in tech.
For generations, product liability doctrine developed against a backdrop of tangible goods — the defective automobile, the mislabeled pharmaceutical, the poorly engineered industrial machine. That framework is now under significant pressure as two converging litigation tracks force courts to determine whether software-driven digital experiences — social media platforms engineered to maximize engagement and AI systems designed to simulate human connection — can and should be treated as defective products under traditional tort principles. These cases stand to reshape the liability landscape for the technology industry and ultimately extend to other industries that employ these mechanisms.
The first and more developed of these tracks is MDL No. 3047, In Re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, pending in the U.S. District Court for the Northern District of California. The MDL, which comprised over 2,200 active cases as of February 2026, consolidates claims alleging that major social media companies knowingly designed platforms with engagement-maximizing algorithms that exploit the neurological vulnerabilities of adolescent users. These design choices, plaintiffs allege, caused severe mental health injuries including depression, eating disorders and suicide. The companies deny these allegations.
In a pivotal March 2025 ruling, the court allowed negligent design claims to proceed by applying a functionality-based rather than tangibility-based test for product status. The court also limited Section 230’s protective reach to third-party content claims, distinguishing them from claims targeting a platform’s own design architecture. Although several defendants settled, a handful of the most prominent social media companies are currently defending against the first of 11 bellwether cases. These cases will provide the first real-world test of how juries will respond to plaintiffs’ framing of algorithmic design as a corporate choice rather than a technical inevitability.
A parallel wave of litigation against AI developers pushes into more uncharted territory. Wrongful death and personal injury claims against major AI companies allege that chatbot platforms contributed to adolescent suicide and severe psychological harm. The plaintiffs’ theories center on design defect and failure to warn, alleging that emotionally immersive conversational design, the absence of robust guardrails, and inadequate age‑verification or parental controls created an unreasonable risk of self‑harm for vulnerable users — particularly adolescents — and that safer, feasible alternative designs were available but not implemented. Unlike social media platforms, which curate and amplify user-generated content, AI chatbots are alleged to be defective in how they generate new content and reinforce concepts through continuous feedback loops. This raises the novel threshold question of whether a large language model’s dynamically generated outputs constitute a “product” subject to strict liability or a “service” requiring plaintiffs to satisfy a higher causation burden.
In one of the more prominent cases, Garcia v. Character Technologies, the U.S. District Court for the Middle District of Florida addressed whether Character A.I.’s chatbot, which allegedly contributed to a teen’s suicide, functioned as a product or a service. See 785 F. Supp. 3d 1157, 1180 (M.D. Fla. 2025). The court found that Character A.I. was a “product for the purposes of Plaintiff’s claims [arising] from defects in the Character A.I. app rather than ideas or expressions within the app.” Id. In January 2026, Character.AI agreed to settle several suits, including Garcia, though the broader doctrinal questions those cases raised remain largely unresolved. Additional litigation continues to be filed.
Both litigation tracks share a set of doctrinal pressure points that will define this area for years. Applying the Restatement (Third) of Torts’ “reasonable alternative design” test to adaptive, emergent software systems raises questions that do not arise with conventional manufactured goods — namely, whether a “safer” algorithmic design simply means a less effective one. Failure-to-warn theories present their own challenges, as the learned intermediary doctrine offers no refuge for direct-to-consumer platforms marketed to and used by minors. In addition, the continuous-update nature of software products creates plausible grounds for courts to impose an ongoing post-sale duty to warn and implement safety features as evidence of harm accumulates.
The doctrinal trajectory in the social media MDL and the AI chatbot proceedings strongly suggests that product liability law is extending its reach into the digital world. For companies deploying consumer-facing AI tools and digital platforms — particularly those accessible to minors — this litigation wave warrants careful attention to product design choices, on-platform warning systems and age-verification mechanisms. The technology industry’s long-standing assumption of broad immunity from tort exposure is being tested with increasing force, and the first bellwether jury verdicts will establish critical precedents for the cases that follow.
McGuireWoods’ Product Liability and Mass Tort Practice Group supports clients in assessing and mitigating risks, developing strategic responses to evolving laws and regulations, and defending litigation as it arises. The team is experienced in leading national defenses across a wide range of industries, including automotive, food and beverage, electronics, medical devices, chemicals, tobacco, pharmaceuticals, aircraft, trains, power tools, and building products. It also has deep experience handling toxic substance exposure and regulatory matters. The group’s lawyers work closely with clients to navigate complex legal landscapes and protect their interests in high-stakes cases.