We Need To Change Discrimination Laws To Stop AI Racism

Policymakers should amend the Title VII restrictions that are encouraging AI tools to discriminate.

Firms are increasingly adopting machine learning and data mining technologies, broadly referred to as artificial intelligence (AI) hiring tools, to automate recruitment and talent acquisition. According to a 2019 Oracle report, human resource professionals expect AI to play an expanded and increasingly important role in the hiring process, particularly in candidate sourcing, assessments, interviews, and selection.

Leading vendors such as HireVue and Pymetrics tout the ability of their AI-powered products to reduce unconscious bias and promote fairness, consistent with the aims of anti-discrimination laws. Title VII of the Civil Rights Act of 1964 and other anti-discrimination laws (such as Title II of the Genetic Information Nondiscrimination Act, Title I of the Americans with Disabilities Act, the Pregnancy Discrimination Act, and the Age Discrimination in Employment Act) govern employment practices and notably preclude businesses from considering “protected characteristics” during the hiring process. But there is evidence that protected-characteristic prohibitions have failed to effectively regulate AI and that, as a result, AI hiring tools have the capacity to discriminate. To make AI-assisted hiring accountable, policymakers should amend the Title VII restrictions that are encouraging AI tools to discriminate.

A Microsoft research study demonstrated the mechanics underlying algorithmic bias and showed how seemingly unprejudiced systems can discriminate. The researchers fed data into a neural network similar to the predictive models used in employment contexts. A natural language processing program was trained on Google News articles and then asked to complete analogies. Using a popular technique known as word embedding, the model assigned numerical vectors to words and predicted analogies from word clusters. But it also absorbed biases and displayed gender stereotypes to “a disturbing extent.” For example, researchers observed outputs such as “man is to woman as computer programmer is to homemaker.”
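The mechanism is simple vector arithmetic. Here is a minimal sketch, assuming a tiny set of made-up embedding vectors; real embeddings are learned from large text corpora, and the numbers and vocabulary below are purely illustrative rather than the study's actual model:

```python
# Toy illustration of word-embedding analogies: the answer to "a is to b as c is to ?"
# is the word whose vector lies closest to (b - a + c). The vectors below are invented.
import numpy as np

embeddings = {
    "man":        np.array([ 1.0, 0.2, 0.5]),
    "woman":      np.array([-1.0, 0.2, 0.5]),
    "programmer": np.array([ 0.9, 0.8, 0.1]),
    "homemaker":  np.array([-0.9, 0.8, 0.1]),
    "engineer":   np.array([ 0.8, 0.9, 0.2]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Return the word completing 'a is to b as c is to ?'."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = {w: cosine(target, vec) for w, vec in embeddings.items() if w not in {a, b, c}}
    return max(candidates, key=candidates.get)

# If the training text pairs men with "programmer" and women with "homemaker",
# the nearest vector reproduces that stereotype.
print(analogy("man", "woman", "programmer"))  # -> "homemaker"
```

Because the vectors encode whatever associations appear in the training text, stereotyped associations survive the arithmetic intact.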

In 2018, Amazon infamously halted its AI recruiting tool because it similarly produced outcomes biased against women. The tool was designed to review resumes and rank candidates. Because it was trained on resumes submitted over a 10-year period, the AI learned to replicate the gender imbalance of the male-dominated technology industry and downgraded submissions in which the applicant was identifiably female. Amazon discontinued the tool in part because researchers could not ensure that the program would stop sorting resumes in a discriminatory manner.

Although algorithmic decision-making may theoretically be less biased than humans, the pithy expression “garbage in, garbage out” accurately describes the results observed in both the Microsoft and Amazon examples. Disparities between groups are reinforced when the AI learns from historical data in which those groups were underrepresented. This is particularly troublesome in talent acquisition and recruitment, where appropriate data to “train” the algorithm with contemporary hiring goals in mind may not exist.

Even if AI tools can be trained to avoid hiring discrimination as defined in law, their algorithms will discover the statistical correlates of protected categories. Non-sensitive identifiers that systematically vary with group membership can reproduce biased patterns and act as highly representative proxies for the omitted data. Moreover, AI tools may treat neutral identifiers as inordinately predictive precisely because other data points are missing (“omitted variable bias”). For example, a zip code is a facially neutral variable, but it can be indicative of socio-economic status and race.
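To see how a proxy can recreate a disparity even when the protected attribute is withheld, consider a minimal simulation sketch; every variable name and number below is invented for illustration and does not describe any real hiring system:

```python
# Hypothetical simulation of proxy discrimination: the protected attribute is never
# given to the model, but zip code correlates with it, so a model trained on biased
# historical decisions still yields disparate predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                               # protected attribute (withheld from the model)
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)  # zip code matches group 80% of the time
skill = rng.normal(0.0, 1.0, n)                             # legitimate, job-relevant signal

# Historical hiring decisions were biased in favor of group 1, independent of skill.
hired = (skill + 1.0 * group + rng.normal(0.0, 1.0, n)) > 0.5

X = np.column_stack([zip_code, skill])                      # protected attribute excluded
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print("Predicted hire rate, group 0:", pred[group == 0].mean())
print("Predicted hire rate, group 1:", pred[group == 1].mean())
# The gap persists: zip code stands in for the omitted protected attribute.
```

Because the model never sees the protected attribute, it cannot be told to correct for it, which is the core difficulty with blindness-based rules.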

The legal interpretation of Title VII, and the regulations that stem from it, ought to distinguish between human and algorithmic decision-making. Legal scholars writing in the Journal of Information Policy, the Iowa Law Review, and the William and Mary Law Review have correctly rejected the notion of “fairness through blindness” underlying Title VII's statutory language and found that protected-characteristic prohibitions worsen AI tools. Currently, algorithms cannot take protected categories into account even if doing so would improve the dataset and reduce bias. Recognizing such data would give rise to a prima facie instance of disparate treatment and a violation of section 703(a)(1) of Title VII. According to the Supreme Court's decision in Ricci v. DeStefano, discriminatory intent can be established whenever protected categories are considered, even when the aims are nondiscriminatory. The application of the disparate treatment doctrine hinders employers' ability to reduce bias in their AI tools and counterintuitively encourages them to avoid liability by not improving those tools. Given these legal constraints, employers can remedy only the discriminatory correlations they can identify, even though it is impossible to locate every proxy variable before it harms applicants.

Lifting Title VII protected-characteristic prohibitions for AI hiring tools would enable employers to assess when biases occur and proactively attenuate discriminatory processes. Such measures would require assurance that protected characteristics are handled responsibly while preserving privacy, particularly when information flows through third-party entities. Disclosure, auditing, and privacy safeguards are a few of the practices that would be necessary to verify that the change in policy is actively reducing discrimination and bias.

Title VII's notion of fairness through blindness does not restrict all areas of employment policy. Amending Title VII as it applies to AI tools would have much the same effect as the demographic pre-employment inquiries that are already permitted. The Equal Employment Opportunity Commission forbids employers from asking pre-employment questions about race unless there is a “legitimate business need,” such as affirmative action and applicant tracking. To that end, employers with over 100 employees are required to track applicant demographic information, although participation is voluntary for applicants. Human resource experts assert that demographic data is paramount to assessing the validity of selection procedures, an argument that holds equally for AI hiring tools. Analogously, including protected categories in AI datasets can improve algorithmic decision-making and increase the efficacy of AI-assisted selection.

Employment law is ill-prepared for the widespread implementation of AI hiring tools. If the goal is to prevent discrimination in recruitment and talent acquisition, then the prohibitions in Title VII and similar laws must be amended, or interpreted differently, to regulate AI effectively.

This article by Rachel Chiu first appeared at the Cato Institute on August 19, 2020.
