
Eightfold AI Lawsuit: What Every HR Team Must Audit Now

March 28, 2026 · 8 min read

A class action filed January 20, 2026 alleges that Eightfold AI secretly scored over a billion job applicants on a 0-5 scale, then filtered them out before any human recruiter ever saw their resumes.

If you’re using ANY AI screening tool right now, this lawsuit isn’t just about Eightfold. It’s about whether your vendor’s data practices are quietly putting your company at legal risk.

The core issue: AI screening tools may qualify as consumer reporting agencies under the FCRA — triggering disclosure and authorization requirements most employers never knew applied. Here’s what happened, why it matters for you, and a 5-question checklist to audit your own AI hiring stack.

What the Eightfold Lawsuit Actually Claims

Kistler et al. v. Eightfold AI Inc. was filed January 20, 2026 in California’s Contra Costa County Superior Court. The named plaintiffs, Erin Kistler and Sruti Bhaumik, are both STEM professionals who applied for jobs through employer portals powered by Eightfold — and never heard back.

Their complaint lays out a specific pattern:

  • Eightfold scraped personal data on over 1 billion workers from LinkedIn, GitHub, Stack Overflow, and other third-party sources — far beyond what candidates submitted on their applications
  • That data was fed through a proprietary AI model to generate a 0-5 “match score” for each candidate
  • Candidates ranked below a threshold were filtered out before any human recruiter ever reviewed their file
  • Applicants were never told their data was being compiled, never given a copy of the report, and never offered a chance to dispute errors

The lawsuit’s core legal theory: this process constitutes a “consumer report” under the Fair Credit Reporting Act (FCRA). Consumer reports trigger specific legal requirements — disclosure to the candidate, authorization, and a chance to dispute errors. Eightfold allegedly provided none of these.
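To make the alleged pattern concrete, here is a minimal hypothetical sketch of the kind of score-and-filter pipeline the complaint describes. Every name, threshold, and scoring rule below is invented for illustration; Eightfold's actual model and inputs are not public.

```python
# Hypothetical illustration of the screening pattern alleged in the complaint.
# All names, thresholds, and scoring logic are invented for this sketch; the
# real Eightfold system is proprietary and not publicly documented.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    submitted_resume: dict   # what the applicant actually provided
    scraped_profile: dict    # third-party data the applicant never submitted

def match_score(candidate: Candidate) -> float:
    """Stand-in for a proprietary 0-5 'match score' model (invented logic)."""
    signals = len(candidate.submitted_resume) + len(candidate.scraped_profile)
    return min(5.0, signals * 0.5)

THRESHOLD = 2.5  # invented cutoff

def screen(candidates: list[Candidate]) -> list[Candidate]:
    """Candidates scoring below the threshold never reach a human recruiter."""
    return [c for c in candidates if match_score(c) >= THRESHOLD]
```

The legal point sits in that last line: candidates below the cutoff are rejected by the score alone, with no disclosure, no copy of the "report," and no dispute mechanism. That, the plaintiffs argue, is what makes the output a consumer report under FCRA.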

The companies named as Eightfold clients in the complaint include Microsoft, PayPal, Morgan Stanley, Starbucks, Chevron, and Bayer. Eightfold counts over 100 enterprise clients.

Why This Goes Beyond Eightfold’s Customers

Here’s the part that should make every HR manager uncomfortable, regardless of whether your company uses Eightfold.

In a parallel case — Mobley v. Workday — a federal judge ruled that Workday acted as an “agent” of the employers using its automated screening tools. The judge’s logic: Workday wasn’t just providing software. It was performing a function traditionally handled by human employees.

That ruling matters because liability flows in both directions: the vendor can be sued directly, and employers can be held responsible for their vendor’s conduct. And according to Jones Walker LLP’s analysis of the Eightfold case:

“An employer’s platform scrapes data from sources the employer doesn’t know about, scores candidates using logic the employer can’t examine, and filters applicants before any human reviews their file. The employer is legally responsible for outcomes it cannot control.”

The numbers on vendor contracts make this worse. Research cited across multiple legal analyses of these cases shows that 88% of AI vendors cap their own liability — typically to the cost of a monthly subscription fee — while only 17% explicitly warrant regulatory compliance. So the vendor sells you on compliance benefits (“reduce bias,” “faster screening,” “consistent evaluation”) while their contract disclaims responsibility if it goes wrong.

That gap lands directly on the employer.

The FCRA Question

The Fair Credit Reporting Act has governed background checks for decades. It defines a “consumer report” as any communication of information by a consumer reporting agency bearing on a person’s creditworthiness, character, general reputation, or personal characteristics, when used for employment decisions.

Eightfold’s AI scores candidates on their “likelihood of success” based on scraped data from public sources. That sounds a lot like a consumer report. But until this case, nobody had tested that theory in court.

The ambiguity runs deeper. In 2025, the EEOC removed several AI hiring guidance documents from its website. That didn’t reduce compliance risk — it created a vacuum where vendors can argue that AI screening isn’t explicitly regulated while plaintiffs’ lawyers argue it falls under existing frameworks like FCRA.

The practical result: you can’t rely on your AI vendor to have figured this out. Most haven’t.

5 Questions to Ask Your AI Screening Vendor Right Now

Pull up your vendor contract before you finish reading this. Then run through these five questions. If your vendor can’t answer 1, 2, and 4 clearly — that’s your answer.

1. What external data sources does your tool use beyond what candidates submit? If the answer is “only what the candidate provides,” get that in writing. If the tool touches LinkedIn, GitHub, or any third-party source, you need to understand what data is being pulled and why.

2. Does your tool generate a score, ranking, or inference about candidates? If yes, you have a potential FCRA exposure point. Any scoring system that influences a hiring decision may qualify as a consumer report. You need to know this before a plaintiff’s lawyer does.

3. Does your contract cap your vendor’s liability for regulatory non-compliance? Read the indemnification and limitation of liability clauses carefully. If the cap is “fees paid in the prior 30 days” and the class action exposure is millions, you know where the risk landed.

4. Can we get an audit log of which candidates were filtered and why? This is your documentation defense. If your vendor can’t produce a log showing human review occurred before any rejection, you have no evidence that your process complied with adverse action requirements.

5. Have you assessed whether your tool qualifies as a consumer reporting agency under FCRA? Not “are you FCRA compliant” — that’s too vague. Ask specifically: have you done a legal assessment of whether your candidate scoring qualifies as a consumer report? And can you share it?

Our Take: The Compliance Gap Vendors Don’t Advertise

Here’s what the Eightfold and Workday lawsuits are really showing: AI hiring vendors have been actively designing their products and contracts to maximize their own protection while minimizing talent acquisition teams’ ability to audit what’s actually happening.

They sell you on bias reduction. Their contracts cap liability. They promise consistency. Their algorithms are black boxes. They claim FCRA doesn’t apply to their tool. Their legal team hasn’t tested it.

The Workday ruling — where the court held that an AI vendor was acting as an employer’s agent — should have been a turning point. It wasn’t. Most HR tech vendors didn’t update their contracts or their compliance posture. They issued a blog post and moved on.

The Eightfold case is different because of scale. Over a billion workers. Data scraped from sources applicants never agreed to share. Scores generated by a model no employer could audit.

If you’re using an AI tool that scores candidates, generates match ratings, or filters applicants before a human sees them — the question isn’t whether this affects you. It’s whether you’ve documented that your process was compliant when the lawyer asks.

We’ve watched two expensive ATS implementations go sideways. The pattern is always the same: the vendor’s sales deck promises efficiency and compliance, the legal section of the contract promises nothing. The HR team is the one left explaining it to the board.

Check your HireVue alternatives if you’re evaluating AI interview tools — the same due diligence framework applies. And if you’re comparing ATS platforms with transparent data practices, see our Greenhouse vs Ashby vs Lever comparison for a breakdown of which platforms are clearer about what data they use.

What to Do This Week

This isn’t a “monitor the situation” moment. The Workday ruling is already on the books. The Eightfold case is in court. The EEOC’s guidance vacuum isn’t protecting you.

Map your AI hiring touchpoints. List every stage where AI scores, filters, or ranks candidates. Resume screening, job description matching, interview scheduling, predictive assessments — all of it.

Request a data audit from each vendor. What data does it use? Where does that data come from? What does it generate? Get answers in writing.

Add FCRA compliance warranty language to your contracts on renewal. If the vendor won’t warrant compliance, that’s material information for your legal team.

Document human review points. For every candidate rejection, you need evidence that a human was in the loop. This is your adverse action defense.

Loop in legal before your next AI tool renewal. The cost of a compliance review is a fraction of the cost of a class action.

The full Workday AI decision and its implications for the candidate side are covered in our Workday AI lawsuit analysis — worth reading alongside this piece for the full picture of where the legal risk sits.

Frequently Asked Questions

Does the Eightfold lawsuit affect employers who used the platform?

Employers aren’t currently named as defendants in the Eightfold case — the suit targets Eightfold itself. But the Mobley v. Workday ruling established that AI vendors can act as employers’ “agents” under federal law, which means liability can flow back to the employer. If you used Eightfold’s platform, consult legal counsel now and document your human review process.

Does FCRA apply to AI resume screening tools?

The Eightfold case is the first major test of this theory. Under FCRA, a “consumer report” is a communication of information bearing on a person’s character, reputation, or personal characteristics that is used for employment decisions. Eightfold’s 0-5 scoring system, built on scraped third-party data, fits that definition, according to the plaintiffs. The court hasn’t ruled yet, but don’t assume FCRA doesn’t apply to your tool.

What is Eightfold AI used for?

Eightfold AI is an enterprise talent intelligence platform used by over 100 major employers — including Microsoft, PayPal, Morgan Stanley, Starbucks, Chevron, and Bayer — for AI-powered candidate screening, workforce planning, and internal talent mobility. It uses a large language model trained on over 1.5 billion global data points to rank candidates.

What should HR teams do if they use Eightfold AI?

Contact your legal team immediately. Request a written audit from Eightfold detailing what data their system used on your candidates, how scores were generated, and whether FCRA disclosures were made. Review your contract’s liability and indemnification clauses. And document every human review point in your hiring process going back as far as your records allow.

The Lawsuit Is Against Eightfold. The Risk Is Yours.

The Eightfold case is a signal that plaintiffs and their lawyers are actively looking at AI hiring tools as the next major employment law battleground. The Workday case already proved employers can be held responsible for their vendors’ conduct.

Pull up your AI vendor contract this week. Look for liability caps, compliance disclaimers, and data source language. Run through the 5-question checklist above. Then loop in legal before your next renewal.

The lawsuit is against Eightfold. The risk is yours.
