On May 18, 2023, the U.S. Equal Employment Opportunity Commission ("EEOC") issued a non-binding "technical assistance" document that offers employers guidance on the applicability of Title VII to the use of artificial intelligence ("AI") in employment selection procedures such as hiring, promoting and firing. The guidance comes as the EEOC continues to prioritize scrutiny of potentially discriminatory policies and practices that incorporate AI, as we previously discussed in a February 2023 blog post.
The document, entitled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964," defines "artificial intelligence" as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments."
In the employment selection context, such increasingly common AI tools may include resume screening software, employee monitoring software, virtual assistants and video interviewing software that evaluates a candidate's facial expressions and speech patterns. Notably, this particular EEOC guidance focuses solely on the potential disparate or adverse impact on Title VII-protected categories resulting from the use of facially neutral AI tools; it does not address intentional discrimination (i.e., disparate treatment) through the use of AI.
To assist an employer in deciding whether its AI tests and selection procedures have an adverse impact on a protected category, the document relies on the Uniform Guidelines on Employee Selection Procedures (the "Guidelines"), a framework adopted in 1978 to measure adverse impact, and confirms that the Guidelines apply equally to AI-based selection tools. Although the scope of the EEOC's guidance is limited, it does include the following key points for employers:
- If the use of a selection tool results in a selection rate for individuals within a protected category that is less than four-fifths (80%) of the rate for the most-selected group (the "Four-Fifths Rule of Thumb"), a preliminary finding of adverse impact is likely, and the employer must examine the AI tool to determine whether it in fact has an adverse impact; a worked example of this comparison appears after this list. If it does, the employer must show either that the use of the AI tool is job-related and consistent with business necessity pursuant to Title VII, or that the "Four-Fifths" assessment was in error.
- Where an AI selection tool results in disparate impact, an employer may be liable even if the test was developed or administered by an outside vendor. The EEOC recommends that the employer consider asking the vendor what steps it has taken to evaluate the tool for potential adverse impact.
- Employers should self-audit AI selection tools on an ongoing basis to determine whether they have an adverse impact on protected categories and, where they do, consider altering the tools to minimize such impact.
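To make the "Four-Fifths Rule of Thumb" concrete, the short Python sketch below applies the comparison to hypothetical applicant counts. The figures and the function name are illustrative assumptions for this post, not numbers drawn from the EEOC document, and a ratio below 80% is only a preliminary screen, not a legal conclusion.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advance past the tool."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool (illustrative only).
group_a_rate = selection_rate(selected=48, applicants=80)  # 0.60
group_b_rate = selection_rate(selected=12, applicants=40)  # 0.30

# Compare the lower group's rate to the most-selected group's rate.
impact_ratio = group_b_rate / group_a_rate  # 0.50

# A ratio below 4/5 (80%) suggests a preliminary indication of
# adverse impact that warrants a closer look at the tool.
if impact_ratio < 4 / 5:
    print(f"Impact ratio {impact_ratio:.0%} is below 80%: examine the tool")
else:
    print(f"Impact ratio {impact_ratio:.0%} meets the 80% rule of thumb")
```

In this hypothetical, the second group's 30% selection rate is only 50% of the first group's 60% rate, well under the four-fifths benchmark, so the employer would need to investigate whether the tool in fact has an adverse impact.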
Employers using or considering the use of AI-based tools in selecting candidates and employees are urged to keep a close eye on developments in this ever-changing area. The Labor and Employment attorneys at Murtha Cullina will continue reporting on these developments as well.