

When AI chatbots cause real-world harm

Some of the most devastating consequences of unchecked AI have nothing to do with data or dollars. They happen in private conversations, late at night, when a vulnerable person reaches out to a machine that responds as though it cares. AI chatbots have encouraged people to end their lives or harm others. They have deepened delusions, fueled dangerous obsessions, and left users isolated from the humans who might have helped them. Children are particularly at risk. These are not edge cases or hypotheticals. They are documented, preventable harms. Clarkson Law Firm is taking legal action. If you or someone you love has been affected, we want to hear from you.

Key Insights

   Chatbots are failing users at their most vulnerable moments.

   AI is being deployed in a regulatory vacuum, and children are paying the price.

   Whether AI causes delusions or amplifies them, the legal stakes are enormous.

Together On AI - AI Harm Law class action
Artificial Harm Lawsuits Class action

The stakes are too high to wait.

As Clarkson Managing Partner Ryan Clarkson has said of corporate harm across industries, companies are engaged in a pattern of privatizing profits and socializing harms. In the context of AI, those socialized harms include families who have lost children to chatbot-encouraged suicide, individuals whose grip on reality was destabilized by AI companions that claimed to be sentient and emotionally present, and teenagers who turned to a machine for support and received validation of their worst impulses instead.

"What you're talking about is this issue of corporations privatizing profits and socializing the harms … We saw it in tobacco and we're seeing it in AI."

Ryan Clarkson - Managing Partner, Clarkson Law Firm

Our AI Harm Legal Team

PRESS & MEDIA

Have you or a loved one been harmed by AI?

FREQUENTLY ASKED QUESTIONS

AI Harm refers to documented, real-world damage caused by the negligent or reckless design of AI chatbot products. This includes cases where chatbots encouraged users to take their own lives, reinforced dangerous delusions, facilitated violent ideation, or fostered pathological emotional dependence.

Victims span a wide range of people, but children and teenagers are disproportionately at risk.

Anyone who has suffered documented harm as a result of interacting with an AI chatbot may have a viable claim. This includes individuals who experienced psychotic episodes, suicidal ideation that was reinforced by a chatbot, dangerous emotional dependence, or other serious psychological harm. Family members of individuals who were harmed or killed following chatbot interactions may also have a claim. You do not need to be certain you have a case to contact us.

A member of the Clarkson intake team will review your submission confidentially and follow up to learn more about your situation. There is no obligation to proceed.

No. Any information you share with Clarkson Law Firm is protected by attorney-client privilege from the moment you make contact. Be aware, however, that continuing to use the product or service after being harmed can create the impression that the company's conduct did not matter to you. The key takeaway is to be honest about your experience.

A successful outcome can mean financial compensation for victims and their families, changes to the design and deployment of AI products, and a legal record that forces the industry to take safety seriously. Class actions are one of the most powerful tools available for driving systemic change. When enough people come forward, the economics of corporate negligence shift.

Complete the form on this page. The process takes only a few minutes. Describe your experience as fully as you can and our team will take it from there. You are not alone in this, and what happened to you or your family matters.

OTHER AI LEGAL FOCUS AREAS

AI Harm

In some of the most devastating cases, AI chatbots have encouraged vulnerable users to take their own lives, exposing the deadly consequences of deploying AI without adequate safeguards.

AI and Intellectual Property Theft

AI companies have scraped the creative work of writers, artists, and developers without consent or compensation to train their products.

AI Washing

Companies are deceiving consumers and investors by exaggerating or fabricating the AI capabilities of their products and services.

AI in Healthcare

Insurance companies are deploying AI systems to wrongfully deny patient claims, overriding physician judgment and putting profits above care.

AI Taskers and Trainers

The human workers who label data and train AI models are being misclassified as independent contractors, denying them wages, benefits, and legal protections they are owed.

Join our mailing list to stay current with Clarkson’s AI-related cases.


By submitting this form, you agree to the Terms of Service and Privacy Policy.