U.S. civil rights enforcers warn employers against biased AI

Artificial intelligence technology used to screen new job applicants or monitor worker productivity could unfairly discriminate against people with disabilities, the federal government said Thursday, warning employers that commonly used recruiting tools could violate civil rights laws.

The U.S. Department of Justice and the Equal Employment Opportunity Commission jointly issued guidance urging employers to take care before using popular algorithmic tools that are designed to simplify the work of evaluating employees and job applicants but could also violate the Americans with Disabilities Act.

“We are raising the alarm about the dangers of blind reliance on artificial intelligence and other technologies that we are seeing employers increasingly use,” Kristen Clarke, assistant attorney general for the department’s civil rights division, told reporters on Thursday. “The use of artificial intelligence exacerbates the chronic discrimination faced by job seekers with disabilities.”

Examples of popular job-related AI tools include resume scanners, employee monitoring software that ranks employees based on keystrokes, and video interview software that measures a person’s voice patterns or facial expressions. The technology could potentially screen out people with language barriers or a range of other disabilities.

The move reflects the Biden administration’s broader push to promote beneficial advances in AI technology while reining in opaque and potentially harmful AI tools that are used to make important decisions about people’s livelihoods.

“We fully recognize the enormous potential for simplification,” said Charlotte Burrows, chair of the EEOC, which enforces laws against workplace discrimination. “But we cannot allow these tools to become a high-tech pathway to discrimination.”

Holding employers accountable for the tools they use is a “great first step,” said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of its AI Decision-Making Research Program, who studies bias in AI recruiting tools. But she added that more needs to be done to regulate the vendors who make those tools, which is likely the work of other agencies, such as the Federal Trade Commission.

“It’s now recognized that these tools, which are often marketed as anti-bias interventions, may actually introduce more bias while also obscuring it,” Ajunwa said.

Copyright © 2022 The Washington Times, LLC.
