The EEOC wants to make AI hiring fairer for people with disabilities

May 26, 2022

Earlier this month, in a critical step toward fighting algorithmic harms, the Equal Employment Opportunity Commission (EEOC) released technical guidance on how algorithmic hiring tools can discriminate against people with disabilities. The guidance is well reasoned and attuned to its underlying goal of meaningfully improving the market for artificial intelligence hiring software for people with disabilities, and other federal agencies should take note.
That hiring algorithms can disadvantage people with disabilities is not exactly new information. In 2019, in my first piece for the Brookings Institution, I wrote about how automated interview software is definitionally discriminatory against people with disabilities. In a broader 2018 review of hiring algorithms, the technology advocacy nonprofit Upturn concluded that “without active measures to mitigate them, bias will arise in predictive hiring tools by default,” noting that this is especially true for people with disabilities. In its own report on the topic, the Center for Democracy and Technology found that these algorithms have “risk of discrimination written invisibly into their codes” and that for “people with disabilities, those risks can be profound.” In short, there has long been broad consensus among experts that algorithmic hiring technologies often harm people with disabilities, and given that as many as 80% of businesses now use these tools, the problem warrants government intervention.
After holding a public hearing on employment algorithms in 2016, the EEOC noted that algorithms might harm people with disabilities and “challenge the spirit” of the Americans with Disabilities Act (ADA). While there appears to have been little progress on this issue during the Trump administration, Trump-appointed EEOC Commissioner Keith Sonderling has been a vocal proponent of enforcing civil rights laws on algorithmic software since his appointment in 2020. Seven months into the Biden administration, with Obama appointee Charlotte Burrows taking over as Chair, the EEOC launched a new AI and Algorithmic Fairness initiative, of which the new disability guidance is the first product.
The EEOC guidance is a practical and tangible step forward for the governance of algorithmic harms and has so far been applauded by advocacy groups such as the American Association of People with Disabilities. It is intended to guide all private employers, as well as the federal government, toward the responsible and legal use of algorithmic hiring tools under the requirements of the ADA. An accompanying announcement from the Department of Justice’s (DOJ) Civil Rights Division declared that the guidance also applies to state and local governments, which fall under DOJ jurisdiction.
How the EEOC sees AI hiring under the ADA
The EEOC’s concerns largely focus on two problematic outcomes: (1) algorithmic hiring tools inappropriately penalize people with disabilities; and (2) people with disabilities are dissuaded from completing an application process by inaccessible digital assessments.
Illegally “screening out” people with disabilities
First, the guidance clarifies what constitutes illegally “screening out” a person with a disability from the hiring process. The new EEOC guidance treats any disadvantaging effect of an algorithmic decision on a person with a disability as a violation of the ADA, provided the person can perform the job with legally required reasonable accommodations. Under this interpretation, the EEOC is saying it is not enough to hire candidates with disabilities in the same proportion as candidates without disabilities. This differs from the EEOC’s criteria for race, religion, sex, and national origin, under which illegal discrimination arises when candidates from a protected group are selected at a significantly lower rate than their peers (say, when less than 80% as many women are hired as men).
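To make the contrast concrete, the group-level test the EEOC uses for other protected classes, often called the “four-fifths rule,” is a simple selection-rate comparison. The sketch below, written in Python with entirely invented applicant counts, shows how a tool could pass that group-level check while still violating the ADA’s individual standard as described in the new guidance.

```python
# A minimal, hypothetical sketch of the "four-fifths" (80%) rule that the
# EEOC applies to race, religion, sex, and national origin -- and of why
# passing it is not sufficient under the ADA. All numbers are invented.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

# Hypothetical applicant pools.
rate_men = selection_rate(hired=50, applicants=100)    # 0.50
rate_women = selection_rate(hired=42, applicants=100)  # 0.42

# Four-fifths rule: the lower selection rate should be at least
# 80% of the higher one.
impact_ratio = rate_women / rate_men  # 0.84
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Evidence of adverse impact under the four-fifths rule.")
else:
    print("Passes the group-level four-fifths check.")

# Under the ADA's individual standard, however, this same tool could still
# be unlawful: if it screens out even one qualified candidate with a
# disability who could do the job with a reasonable accommodation, the
# group-level statistics above do not make it legal.
```

The point of the contrast: the four-fifths rule is a statistical screen applied to groups, while the ADA standard in the new guidance operates at the level of each individual candidate.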
The EEOC offers a range of realistic examples of what might constitute illegal screening, all of which appear inspired by current business practices. For example, the guidance cites a language model that could disadvantage a candidate over a gap in their employment history when the candidate was undergoing treatment for a disability during that time. The guidance also notes that audio analysis is likely to discriminate against individuals with speech impediments, a problem that still pervades automated interview software. As one more example, the EEOC cites a personality test that asks how optimistic a candidate is, which could inadvertently screen out qualified candidates with depression. Through these specific examples, and by framing the guidance through the lens of “screening out” candidates with disabilities, the EEOC makes clear that even if group hiring statistics can make an algorithm seem fair, discrimination against any individual person with a disability is a violation of the ADA.
This more stringent standard may act as a wake-up call to employers using algorithmic hiring tools and to the vendors they buy from. It may also incentivize shifts in how algorithmic hiring tools are built, encouraging more sensitivity to how this type of software can discriminate. Further, it may encourage more direct measures of candidate skills for essential job functions, rather than indirect proxies (such as the ‘optimism’ question above) that may run afoul of the ADA.
Offering accommodations and preventing dropout
The EEOC guidance also clarifies that employers must offer reasonable accommodations for the use of algorithmic hiring tools, and that the employer is responsible for this process even if the tools are procured from outside vendors. For example, the guidance cites a software-based knowledge test that requires manual dexterity (such as using a mouse and keyboard), which might penalize individuals who have the requisite knowledge but limited dexterity. Asking whether a candidate wants an accommodation is legal, although employers may not inquire about the person’s health or disability status. The guidance explicitly encourages employers to clearly inform applicants about the steps of the hiring process and to ask whether they need reasonable accommodations for any of those steps.
One of the core concerns of disability advocates is that people with disabilities will be discouraged by digital assessments and drop out of the application process. Using one of the EEOC’s examples, a job candidate might be dissuaded from completing a digital assessment intended to test their memory, not because of their memory, but because a visual impairment makes the assessment difficult to engage with. When job candidates are offered a clear sense of the application process in advance, they are better equipped to appropriately request an accommodation and proceed with the process, leading to a fairer chance at employment. The EEOC recommends that employers train staff to quickly recognize and respond to accommodation requests with alternative methods of candidate evaluation, and notes that outsourcing parts of the hiring process to vendors does not automatically relieve the employer of its responsibilities.
How will the EEOC guidance change AI hiring?
The technical guidance on its own will help employers make fairer choices, but the EEOC does not appear to be counting purely on the good graces of employers to execute the changes it deems necessary. At the end of the guidance document, the EEOC provides recommendations for job applicants who are being assessed by algorithmic tools, encouraging candidates who feel they were discriminated against by an algorithmic hiring process to file a formal charge of discrimination with the EEOC.
A charge of discrimination by a job candidate is the first step toward any litigation: the charge triggers an investigation by the EEOC, after which the EEOC first tries to negotiate an agreement and, failing that, may file a lawsuit against the employer. At that point, the EEOC would attempt to prove the algorithmic hiring process was discriminatory and win financial relief for the job candidates. That the EEOC is explicitly welcoming these complaints signals its willingness to file such lawsuits, which may encourage disability advocacy groups to make their constituents aware of the option. In general (that is, without AI), this type of complaint is not rare: disability discrimination is the second most common complaint filed with the EEOC.
It is also worth considering how the EEOC guidance may affect the vendors of algorithmic hiring software, who make many of the key decisions that drive this market. The guidance is primarily focused on employers, who are ultimately responsible for ADA compliance. That said, the EEOC seems well aware of the practices and claims of vendors. The guidance makes clear that a vendor calling an algorithmic tool “validated” does not absolve the employer of liability for discrimination. Further, the guidance notes that vendor claims of “bias-free” tools often refer to the selection rates of different groups (e.g., women vs. men, people with disabilities vs. people without disabilities), and reiterates that this is not sufficient under the ADA, as discussed above.
Beyond this direct discussion of vendor claims, the EEOC also suggests that employers should ask hard questions of algorithmic hiring vendors. The document devotes a section to how employers can interrogate vendors, such as by asking how the software was made accessible, what alternative assessment formats are available, and how the vendor evaluated its software for potentially discriminatory impacts. This is a clear indication that the EEOC understands the importance of vendors, despite its direct enforcement being limited to employers. By helping employers push vendors for answers to these questions, the EEOC hopes to change the market incentives for vendors, who will then appropriately invest in fair and accessible software.
The EEOC is leading in the early days of AI oversight
Among federal agencies, the EEOC stands out for its active engagement and tangible outputs on AI bias, although it is not entirely alone. The Federal Trade Commission (FTC) has issued its own informal guidance on how its enforcement covers AI, including that the FTC might, depending on the circumstances, consider a claim of “100% unbiased hiring decisions” to be fraudulent. The National Institute of Standards and Technology also warrants mention for producing an interim document on bias in AI systems, which the EEOC guidance cites. Still, it remains far easier to list the agencies that have initiated such policies than those that have not.
There are clear steps that all federal agencies can take. First and foremost, agencies should review their mandates as they relate to the proliferation of algorithms, especially algorithmic bias. In fact, all federal agencies were supposed to do exactly this in response to a 2019 executive order and ensuing guidance from the Office of Management and Budget. That this guidance was released in the last months of the Trump administration may explain the relatively lethargic response, as many agencies responded with nearly blank pages (the Department of Energy took the time to write “None” five times).
Still, the response from the Office of the Chief AI Officer (OCAIO) at the Department of Health and Human Services (HHS) demonstrates the importance of this task, identifying eleven pertinent statutes that could feasibly govern algorithms. These include the Rehabilitation Act of 1973, which “prohibits discrimination against people with disabilities in programs that receive federal financial assistance,” potentially giving HHS a regulatory lever to fight algorithmic discrimination in healthcare.
Going forward, the White House should directly call on federal agencies to follow in the footsteps of the EEOC and HHS, as this is a necessary step toward fulfilling the White House’s promise of a Bill of Rights for an AI-Powered World. As for the EEOC, much work remains to extend and enforce algorithmic protections for other groups, such as racial minorities, women, religious groups, and people of different gender identities. For now, the agency deserves plaudits for some of the most concerted efforts to protect vulnerable Americans from algorithmic harms.