Sat. Nov 16th, 2024

In summary

State regulators propose rules on evaluating workers and job applicants with AI.

California regulators are moving to restrict how employers can use artificial intelligence to screen workers and job applicants — warning that using AI to measure tone of voice, facial expressions and reaction times may run afoul of the law.

The draft regulations say that if companies use automated systems to limit or prioritize applicants based on pregnancy, national origin, religion or criminal history, that’s discrimination.

Members of the public have until July 18 to comment on the proposed rules. After that, regulators at the California Civil Rights Department may amend the rules before eventually approving them, subject to final review by an administrative law judge, capping off a process that began three years ago.

The rules govern so-called “automated decision systems” — artificial intelligence and other computerized processes, including quizzes, games, resume screening, and even advertising placement. The regulations say using such systems to analyze physical characteristics or reaction times may constitute illegal discrimination. The systems may not be used at all, the new rules say, if they have an “adverse impact” on candidates based on certain protected characteristics.

The draft rules also require companies that sell predictive services to employers to keep records for four years in order to respond to discrimination claims.

A crackdown is necessary in part because while businesses want to automate parts of the hiring process, “this new technology can obscure responsibility and make it harder to discern who’s responsible when a person is subjected to discriminatory decision-making,” said Ken Wang, a policy associate with the California Employment Lawyers Association.

The draft regulations make clear that third-party service providers are agents of the employer, and they hold employers responsible for those vendors' actions.

The California Civil Rights Department started exploring how algorithms, a type of automated decision system, can impact job opportunities and automate discrimination in the workplace in April 2021. Back then, Autistic People of Color Fund founder Lydia X. Z. Brown warned the agency about the harm that hiring algorithms can inflict on people with disabilities. Brown told CalMatters that whether the new draft rules will offer meaningful protection depends on how they’re put in place and enforced.

Researchers, advocates and journalists have amassed a body of evidence that AI models can automate discrimination, including in the workplace. Last month, the American Civil Liberties Union filed a complaint with the Federal Trade Commission alleging that resume screening software made by the company Aon discriminates against people based on race and disability despite the company’s claim that its AI is “bias free.” An evaluation of leading artificial intelligence firm OpenAI’s GPT-3.5 technology found that the large language model can exhibit racial bias when used to automatically sift through the resumes of job applicants. Though the company uses filters to prevent the language model from producing toxic language, internal tests of GPT-3 also surfaced race, gender, and religious bias.


Protecting people from automated bias understandably attracts a lot of attention, but sometimes hiring software that’s marketed as smart makes dumb decisions. Wearing glasses or a headscarf or having a bookshelf in the background of a video job interview can skew personality predictions, according to an investigative report by German public broadcast station Bayerischer Rundfunk. So can the font a job applicant chooses when submitting a resume, according to researchers at New York University.

California’s proposed regulations are the latest in a series of initiatives aimed at protecting workers against businesses using harmful forms of AI.

In 2021, New York City lawmakers passed a law to protect job applicants from algorithmic discrimination in hiring, although researchers from Cornell University and Consumer Reports recently concluded that the law has been ineffective. And in 2022, the Equal Employment Opportunity Commission and the U.S. Justice Department clarified that employers must comply with the Americans with Disabilities Act when using automation during hiring.

The California Privacy Protection Agency, meanwhile, is considering draft rules that, among other things, define what information employers can collect on contractors, job applicants, and workers, and that would allow those individuals to see what data employers collect, opt out of such collection, or request human review.

Pending legislation would further empower the California Civil Rights Department, the source of the draft rules. Assembly Bill 2930 would allow the department to demand impact assessments from businesses and state agencies that use AI, in order to protect against automated discrimination.

Outside of government, union leaders increasingly argue that rank-and-file workers should be able to weigh in on the effectiveness and harms of AI in order to protect the public. Labor representatives have had conversations with California officials about specific projects as the state experiments with how to use AI.
