A federal judge in California allowed a class action to proceed against Workday last year. The ruling was unusual in one respect: the defendant was Workday, not an employer. Workday makes the AI screening tools the employers were using. That decision pushed AI hiring discrimination from a legal gray area into active litigation.

Since then, the EEOC has issued formal guidance. The FTC has warned AI vendors. Illinois, New York City, and Maryland have passed disclosure laws. AI hiring discrimination, once a niche concern among civil rights attorneys, has become one of the faster-moving areas of employment law.

How AI Screening Became the Default

The Tools Employers Now Run Before Any Human Sees a Resume

AI-powered hiring tools have run in corporate recruiting pipelines for years, mostly without public notice. Resume parsers check submitted applications for keywords, credentials, and career history before ranking candidates against an employer-defined profile. Video interview platforms score recorded responses on word choice, vocal tone, and facial expressions. Predictive scoring models rank new applicants against profiles of employees who performed well.

LinkedIn, Workday, HireVue, and dozens of smaller vendors now run first-pass screening at firms ranging from small employers to Fortune 500 companies. According to SHRM, roughly 85 percent of large employers used AI tools in hiring decisions as of 2024.

The problem emerged as researchers and plaintiffs’ attorneys began examining what these systems had learned. A tool trained on hiring data from a workforce that skewed young, white, and male does not produce neutral outputs. It encodes those patterns as quality signals, and those signals end up deciding who gets a first interview call. At that point, the question of employment discrimination is already in play.

Why AI Hiring Discrimination Is Now in Federal Court

How the Workday Case Put AI Hiring Discrimination Before a Judge

Derek Mobley applied for more than 100 jobs through Workday’s platform between 2018 and 2022. He is Black, over 40, and has a disability. He alleges Workday’s AI system rejected him across different employers and different roles in ways that showed bias against his protected background. His lawsuit, filed in a California federal court in 2023, named Workday as the defendant. It did not name the individual companies that used the platform.

The court allowed the case to proceed, putting a question directly to federal law: can the vendor itself face liability, not just the employers using its tools? The court let claims go forward on the theory that Workday acted as an agent of the employers that used its platform, exposing it to liability under Title VII, the Age Discrimination in Employment Act, and the ADA. The case is still open, but its early rulings have already changed how employment attorneys approach these claims.

Who the Research Says Is Most Affected

Age, Race, and Disability: The Documented Patterns

Three groups show up in the research on AI hiring discrimination.

Older workers lose out when models treat employment gaps, long tenures, or non-linear work histories as negative signals. A system trained on hiring data from a younger workforce encodes age as a disadvantage. Federal law covers this under the ADEA, and age bias claims do not require proof of intent.

Racial bias in AI hiring shows up in peer-reviewed research from MIT and the National Bureau of Economic Research. Their findings show that systems absorb patterns tied to names, zip codes, and schools in ways that produce racially uneven outcomes. Workers facing repeated automated rejections have now sought counsel on race bias claims to assess whether a pattern-based argument holds.

The EEOC’s 2023 guidance called out video interview AI as an ADA risk. Platforms that score candidates on facial expression, vocal fluency, and eye contact put applicants with autism spectrum disorder, stuttering, or conditions affecting cadence at a consistent disadvantage. The agency said employers face liability under the ADA whether or not they understand how the scoring works.

What the EEOC and FTC Said About AI Hiring Discrimination

Federal Guidance Before Federal Legislation

Two federal agencies moved into this space before Congress acted. Each used existing authority to address AI hiring discrimination.

In May 2023, the EEOC issued guidance showing that employers cannot pass liability to a vendor. If an AI tool produces uneven outcomes by race, age, sex, or disability status, the employer faces civil rights liability. The legal theory is disparate impact. It requires no proof of intent, only evidence that a neutral-looking practice hit a protected group harder than others.
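Disparate impact is usually assessed numerically. A common screen, the four-fifths rule from the EEOC's Uniform Guidelines (a long-standing rule of thumb, not specific to the 2023 guidance), compares selection rates across groups. A minimal sketch, using made-up applicant counts:

```python
# Hypothetical example: checking a screening tool's pass-through rates
# for adverse impact using the "four-fifths" rule of thumb.
# All counts below are invented for illustration.

def selection_rate(passed: int, applied: int) -> float:
    """Fraction of applicants a screening step advanced."""
    return passed / applied

# Hypothetical applicant counts by age group
under_40 = selection_rate(passed=300, applied=1000)  # 0.30
over_40 = selection_rate(passed=120, applied=1000)   # 0.12

# Impact ratio: the lower group's rate divided by the highest rate.
impact_ratio = over_40 / under_40

print(f"Impact ratio: {impact_ratio:.2f}")
# A ratio below 0.80 is a conventional red flag for disparate impact.
if impact_ratio < 0.80:
    print("Below the four-fifths threshold: possible adverse impact")
```

The ratio here is 0.40, well under the 0.80 threshold. In practice, attorneys and auditors pair this screen with statistical significance tests, since small samples can produce low ratios by chance.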

The FTC issued a warning the same year, directed at vendors making false claims about their tools. Both agencies acted without any statute targeting AI hiring discrimination, so the guidance rests on laws written in the 1960s and 1970s.

States added a third layer. Illinois’s Artificial Intelligence Video Interview Act and Maryland’s interview facial recognition consent law took effect in 2020, and New York City’s Local Law 144, which requires bias audits and candidate notice for automated hiring tools, has been enforced since mid-2023. Further disclosure and audit measures moved through state legislatures in 2025 and 2026.

What Workers and Attorneys Are Doing About It

The Emerging Playbook for AI Hiring Discrimination Claims

Employment attorneys who handle AI hiring discrimination cases say the first challenge is building a paper trail. A single rejection from a single company does not make a case. But a pattern focused in one industry, or across multiple companies using the same vendor, supports a disparate impact argument.

Getting those records is not easy. Workers rarely know which platform screened them. In states with disclosure rules, including Illinois and New York City, applicants can ask directly.

The EEOC now has staff trained in AI bias claims. Federal filing deadlines run 180 to 300 days from the discriminatory act, depending on the state. Employment lawyers in New Jersey and across the New York metro area have reported a sharp rise in calls from workers with records of repeated rejections they believe an algorithm drove.

AI Hiring Discrimination: Where the Law Goes From Here

State Legislation Is Moving Faster Than Federal

Several states and the EU have passed AI hiring laws. The EU AI Act classifies hiring tools as high-risk systems, requiring transparency, documentation, and human oversight before deployment. The UK, Canada, and Australia are still working through their own frameworks.

Federal law in the United States has not moved. Bills in both chambers have stalled in committee. So courts apply Title VII, the ADEA, and the ADA to software those laws were never written to cover.

If courts hold that AI vendors face direct liability for biased outcomes at scale, the cost structure of the AI hiring market shifts. Vendors would then face pressure to audit tools they have no legal duty to test today.

Frequently Asked Questions

Is it currently legal for employers to use AI to screen job applicants?
It is legal in every U.S. state, though jurisdictions with disclosure or audit rules, such as Illinois and New York City, attach conditions to its use. If an AI screening tool filters out candidates from a protected class at an uneven rate, the employer can face a disparate impact claim. Existing civil rights law covers it whether or not anyone intended the outcome.

How can a job applicant tell if AI screening was used?
In states with disclosure rules, applicants can request that data directly. Outside those states, the signs are consistent: automated rejections within minutes, no role-specific feedback, and matching outcomes across companies using the same platforms. A documented pattern carries more legal weight than any single rejection. The Workday legal process may produce vendor data that helps other claimants build that record.

Can an AI vendor be sued for hiring discrimination, not just the employer?
The Workday case suggests the answer may be yes. The court allowed the lawsuit to proceed on the theory that an AI screening vendor can act as an agent of the employers that use it, opening the vendor itself to direct liability under Title VII, the ADEA, and the ADA. Those early rulings have already prompted employment attorneys to name vendors alongside employers in new filings.

Before Filing a Claim: What the Current Framework Covers

Several years of guidance, state laws, and at least one federal lawsuit are now in play around AI hiring discrimination. The statutes involved were written decades before this software existed. Whether they hold under the weight of the Workday case is what the next few years will show.

For workers who suspect an algorithm filtered them out, the records matter: rejection emails, dates, company names, and any details about which platform was used. An employment attorney needs that to assess whether a viable claim exists.
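A structured log is easier for an attorney to scan for patterns than a folder of emails. A minimal sketch of one way to keep such a record; the field names and the sample entry are illustrative, not a legal standard:

```python
# Minimal sketch of a rejection log an applicant might keep.
# Field names and the sample entry are hypothetical.
from dataclasses import dataclass, asdict, fields
from datetime import date
import csv

@dataclass
class Rejection:
    company: str
    role: str
    applied: date   # date the application was submitted
    rejected: date  # date the rejection arrived
    platform: str   # screening vendor, if known (e.g. from the portal URL)
    notes: str = "" # e.g. "rejected minutes after submitting"

log = [
    Rejection("Acme Corp", "Analyst", date(2024, 3, 1), date(2024, 3, 1),
              "workday", "automated rejection within minutes"),
]

# Export to CSV so patterns across companies and vendors are visible.
with open("rejections.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(Rejection)])
    writer.writeheader()
    for r in log:
        writer.writerow(asdict(r))
```

A spreadsheet works just as well; what matters is capturing dates, company names, and the platform consistently, since a same-vendor pattern across employers is what supports a disparate impact argument.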

U.S. Equal Employment Opportunity Commission, Technical Guidance on Artificial Intelligence and Civil Rights (May 2023)

Federal Trade Commission, Facing the Facts: Misinformation About AI (2023)

National Bureau of Economic Research, Algorithmic Hiring and Discrimination Research

Illinois General Assembly, Artificial Intelligence Video Interview Act (820 ILCS 42)
