What Happened In California This October Will Forever Change The Way We Work With AI

On one hand, it makes sense for companies to use AI to scan through hundreds or thousands of resumes to find candidates (though it is ironic given that Anthropic advises potential candidates not to use AI for job applications). On the other hand, the use of these tools can be demoralizing for job seekers. How is the AI filtering out candidates? Did it take into account your full history and experience? Were you denied simply for using the wrong word or phrase? Was there something more sinister at play? Answers are tough to come by, but changes may be coming to how these systems are utilized. As of October 1, 2025, the state of California, via the California Civil Rights Council, has placed restrictions on the use of AI for employment decisions.

The new rules state that artificial intelligence (AI) and automated decision systems (ADS) may violate California law if they discriminate against employees or applicants on the basis of protected characteristics, such as race, age, religious creed, gender, national origin, and disability. They arrive as an amendment to the California Fair Employment and Housing Act (FEHA) and apply to any employer that uses automated decision-making (ADM) tools, such as AI, machine learning, and algorithmic processes, to assist humans.

If these systems produce biased results (AI can reflect stereotypes present in the data it was trained on), the companies using them are legally responsible. The regulation also establishes a record-keeping framework for companies in the state: for example, they must retain records of automated decisions, including inputs and selection results, for four years. In addition, external agents hired for recruiting are considered "employers" for legal purposes, so if a recruiter uses these systems on behalf of a company, the company is liable. That could be a big deal for the top hiring platforms, many of which leverage AI to speed up and enhance recruitment.

What does this new AI regulation mean?

Notably, nothing in California's regulation prohibits the use of these automated decision-making solutions. It merely sets standards for how they should be used responsibly. One way for companies to sidestep the rules, especially with remote work, is simply not to hire in the state; in fact, many companies already mark certain states ineligible, citing difficult labor laws. The real implication of this regulation is that these technologies are being taken more seriously on the regulatory front. If anything, it acknowledges that watchdogs do need to put protections in place to ensure these tools and solutions are used in legally responsible ways and without violating our rights.

Ultimately, it remains to be seen how this regulation will be interpreted and enforced, and whether it will influence hiring practices beyond the state. It doesn't answer bigger questions, like whether AI will actually start taking jobs, or address how the technology is permeating every industry. It will also be interesting to see whether other states take a similar approach. As for ensuring the rules are followed, the regulation directs employers to show they conducted "anti-bias testing or similar proactive efforts to avoid unlawful discrimination" before and after adopting these solutions. In other words, it's up to employers to ensure these automated decision-making tools are deployed correctly and aren't amplifying biases built into the foundation of the system.
