AI Taskers and Trainers

Hidden workers, real rights

Every AI chatbot that answers a question, writes an email, or generates an image got there through human labor. Thousands of workers known as data annotators, taskers, and AI trainers spend hours reviewing AI outputs, writing better responses, and flagging harmful content so that companies like OpenAI, Meta, Google, and Anthropic can improve their models. These workers are the backbone of the generative AI economy. Most have been classified as independent contractors. Clarkson Law Firm believes that classification is wrong, that it is being used to avoid paying lawful wages, and that the workers harmed deserve accountability and restitution.

Key Insights

   Human labor is what makes AI work — and the people doing it are largely unprotected.

   Worker misclassification is not an oversight — it is a business model.

   The industry's scale makes this one of the largest labor violations tied to the AI boom.

AI Taskers and Trainers Class Action Lawsuit

The AI boom has a hidden workforce. We represent it.

The AI industry likes to talk about innovation. What it does not talk about is the army of workers who make that innovation possible. Every large language model that exists today was shaped by human beings who wrote prompts, evaluated outputs, and corrected errors. Those people were paid to do that work. Many were not paid fairly, and most were denied basic protections that California law has required employers to provide for decades.

Worker misclassification is one of the most litigated areas of California labor law. The legal precedent is settled. What was new was identifying that these violations were occurring at massive scale inside the AI industry, affecting a workforce that had no voice and no visibility.

These cases came to us the same way most of our cases do: through ordinary people who reached out to tell us what was happening to them. They described unpaid training periods, tasks that eliminated their pay entirely if not completed within unrealistic time limits, and no ability to challenge or appeal the decisions of an automated system that controlled their work and their income. In some cases, they described being exposed to deeply disturbing content with no support or recourse.

No corporation changes its behavior because you ask nicely. The only thing that moves large companies is accountability. Our job is to deliver it. If you have worked as a data annotator, tasker, or AI trainer and believe your employer treated you as a contractor when you should have been an employee, we want to hear from you.

"We must hold these big tech companies … accountable or workers will continue to be exploited to train this unregulated technology for profit."

Glenn Danas - Partner, Clarkson Law Firm, quoted in TechCrunch

Our AI Taskers and Trainers Legal Team

PRESS & MEDIA

Have you worked as an AI tasker or trainer?

FREQUENTLY ASKED QUESTIONS

What do data annotators and AI taskers do?

Data annotators and AI taskers are workers who help train artificial intelligence systems. The work typically involves rating AI-generated responses, writing better answers, comparing different outputs, testing AI systems by trying to elicit incorrect or harmful responses, and flagging content that violates guidelines. This work is performed on platforms run by companies that contract with major technology companies to improve their AI models.

Why does it matter whether I was classified as an employee or an independent contractor?

The classification determines which legal protections you are entitled to. California labor law requires that employees be paid minimum wage and overtime, receive meal and rest breaks, be paid their wages on time, and be reimbursed for business expenses, and that employers maintain accurate payroll records. Independent contractors have none of those protections. Companies that misclassify employees as contractors deprive them of those rights and save substantial money in the process.

What lawsuits has Clarkson filed?

Clarkson filed a class action against Scale AI and its subsidiaries, including Outlier AI and Remotasks, in California Superior Court in San Francisco in December 2024. In May 2025, Clarkson filed a second class action against Surge AI, also known as Surge Labs, in the same court. Both cases allege intentional misclassification of workers as independent contractors in violation of California labor law, including failure to pay minimum wage, failure to pay overtime, failure to compensate for training time, and failure to reimburse business expenses.

How does California determine whether a worker is an employee?

California uses the ABC test, codified at Labor Code Section 2775. Under this test, a worker is presumed to be an employee unless the hiring company can prove all three of the following: that the worker is free from the company's control in performing the work, that the work is outside the usual course of the company's business, and that the worker is customarily engaged in an independently established trade or occupation.
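The all-or-nothing structure of the ABC test can be illustrated with a short sketch. This is a simplified illustration of the test's logic only, not legal advice; the field names are hypothetical labels chosen for readability, and real determinations turn on the facts of each case:

```python
from dataclasses import dataclass

@dataclass
class ABCTestFacts:
    """Facts the hiring company must prove under Labor Code Section 2775."""
    free_from_control: bool        # Prong A: worker is free from the company's control
    outside_usual_business: bool   # Prong B: work is outside the company's usual business
    independent_trade: bool        # Prong C: worker runs an independently established trade

def is_presumed_employee(facts: ABCTestFacts) -> bool:
    """The worker is presumed an employee unless ALL three prongs are proven."""
    return not (facts.free_from_control
                and facts.outside_usual_business
                and facts.independent_trade)

# Hypothetical tasker whose work is central to the platform's business:
tasker = ABCTestFacts(free_from_control=False,
                      outside_usual_business=False,
                      independent_trade=False)
print(is_presumed_employee(tasker))  # True: the employee presumption stands
```

Because the prongs are conjunctive, failing even one of the three leaves the employee presumption intact.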

What do the lawsuits seek?

The lawsuits seek recovery of unpaid wages, overtime compensation, statutory penalties, interest, injunctive relief, and other equitable remedies available under California law. The goal is both financial restitution for the workers who were harmed and a change in the business practices that caused the harm.

Do I have a claim if I worked outside California?

California labor law applies to work performed in California and to California-based employees. If you performed work in California or were employed by a company headquartered in California, you may have viable claims under state law. If you are based in another state, contact us to discuss whether your circumstances support a claim under applicable law.

Is there a deadline to bring a claim?

Statutes of limitations apply to labor claims, so time matters. California law generally allows workers to bring statutory wage claims within three years of the violation and unfair business practice claims within four years.
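The deadline arithmetic described above can be sketched as follows. This is a rough illustration of the limitation periods stated in the text, not legal advice; actual deadlines can be affected by tolling rules and other case-specific facts:

```python
from datetime import date

# Limitation periods stated above (in years):
STATUTORY_WAGE_CLAIM_YEARS = 3     # statutory wage claims
UNFAIR_BUSINESS_PRACTICE_YEARS = 4 # unfair business practice claims

def filing_deadline(violation_date: date, years: int) -> date:
    """Rough deadline: the same calendar date `years` later.
    (Note: date.replace raises ValueError for Feb 29 in non-leap years.)"""
    return violation_date.replace(year=violation_date.year + years)

last_violation = date(2024, 12, 1)  # hypothetical date of the last violation
print(filing_deadline(last_violation, STATUTORY_WAGE_CLAIM_YEARS))      # 2027-12-01
print(filing_deadline(last_violation, UNFAIR_BUSINESS_PRACTICE_YEARS))  # 2028-12-01
```

The point of the sketch is simply that each day of delay can push the earliest recoverable violations outside the limitations window.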

How do I get started?

Contact us here. The review is free and confidential. Clarkson's intake team will follow up to learn more about your experience.

OTHER AI LEGAL FOCUS AREAS

AI Harm

In some of the most devastating cases, AI chatbots have encouraged vulnerable users to take their own lives, exposing the deadly consequences of deploying AI without adequate safeguards.

AI and Intellectual Property Theft

AI companies have scraped the creative work of writers, artists, and developers without consent or compensation to train their products.

AI Washing

Companies are deceiving consumers and investors by exaggerating or fabricating the AI capabilities of their products and services.

AI in Healthcare

Insurance companies are deploying AI systems to wrongfully deny patient claims, overriding physician judgment and putting profits above care.

AI Taskers and Trainers

The human workers who label data and train AI models are being misclassified as independent contractors, denying them wages, benefits, and legal protections they are owed.

Join our mailing list to stay current with Clarkson’s AI-related cases.
