The movement to hold artificial intelligence accountable gains momentum

An upcoming report from the Algorithmic Justice League (AJL), a private nonprofit, recommends requiring disclosure when an AI model is used and creating a public repository to record incidents in which AI causes harm. Such a repository could help auditors discover potential problems with algorithms and help regulators investigate or fine repeat offenders. AJL co-founder Joy Buolamwini co-authored an influential 2018 audit that found facial recognition algorithms worked best on white men and worst on dark-skinned women.

According to the report, the independence of auditors and public review of their results are essential. Without these safeguards, “there is no accountability mechanism at all,” said Sasha Costanza-Chock, head of research at AJL. “If they want, they can bury it; if they find a problem, there is no guarantee that it will be solved. It has no teeth, it is secret, and auditors have no influence.”

Deb Raji, an AJL researcher who assesses audits and who participated in the 2018 facial recognition audit, cautioned that large technology companies appear to be taking a more confrontational approach to external auditors, sometimes threatening lawsuits on privacy or anti-hacking grounds. In August, Facebook blocked New York University scholars from monitoring political ad spending and frustrated the efforts of German researchers to investigate Instagram's algorithms.

Raji called for the establishment of an audit oversight board within a federal agency to enforce standards and mediate disputes between auditors and companies. Such a board could be modeled on the Financial Accounting Standards Board or on the Food and Drug Administration's standards for evaluating medical devices.

Standards for audits and auditors matter, because growing demand for AI oversight has led to a number of audit startups. Some are critics of artificial intelligence, while others may be more favorable to the companies they audit. In 2019, a coalition of AI researchers from 30 organizations recommended external auditing and regulation, and the creation of a market for auditors, as part of building artificial intelligence with verifiable results that people can trust.

Cathy O’Neil founded a company, O’Neil Risk Consulting & Algorithmic Auditing (Orcaa), in part to evaluate AI that is invisible or inaccessible to the public. For example, Orcaa works with the attorneys general of four U.S. states to evaluate financial or consumer product algorithms. But O’Neil said she has lost potential customers because companies want to maintain plausible deniability and don’t want to know whether or how their AI harms people.

Earlier this year, Orcaa audited an algorithm used by HireVue to analyze faces in job interviews. A press release from the company stated that the audit found no accuracy or bias issues, but the audit did not attempt to evaluate the system’s code, its training data, or its performance across different demographic groups. Critics called HireVue’s description of the audit misleading and dishonest. Shortly before the audit was released, HireVue said it would stop using AI in video job interviews.

O’Neil believes that auditing can be useful, but she said it is too early to adopt AJL’s approach in some areas, partly because there are no audit standards and we do not yet fully understand the ways in which AI harms people. Instead, O’Neil favors another method: algorithmic impact assessments.

While an audit might evaluate the output of an AI model, for example to see whether it treats men differently from women, an impact assessment may focus more on how the algorithm was designed, who could be harmed, and who is responsible if something goes wrong. In Canada, companies must assess the risk that deploying an algorithm poses to individuals and communities; in the United States, assessments are being developed to determine when AI poses low or high risk and to quantify people’s trust in artificial intelligence.
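The kind of output check described above, comparing how a model treats men and women, can be sketched in a few lines of Python. This is a minimal illustration with made-up data, not any auditor's actual methodology: it compares the rate of favorable outcomes across groups and computes a disparate-impact ratio (the "four-fifths rule" commonly flags ratios below 0.8 as a potential concern).

```python
# Hypothetical sketch of a group-fairness output check, with toy data.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive predictions (1 = favorable outcome) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: 1 = favorable outcome (e.g., advanced to interview)
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]

rates = selection_rates(preds, groups)
# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(rates)            # selection rate per group
print(round(ratio, 2))  # below 0.8 suggests possible disparate impact
```

A real audit would go far beyond this single metric, but the sketch shows why auditors need access to outputs broken down by group: without that data, even a check this simple is impossible.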

The idea of measuring impact and potential harm dates to the National Environmental Policy Act of 1970, which led to the creation of environmental impact statements. Those reports consider factors ranging from pollution to the potential discovery of ancient artifacts; algorithmic impact assessments would similarly weigh a broad range of factors.
