AI and hiring — how to avoid discrimination pitfalls
At the end of May, the Equal Employment Opportunity Commission (EEOC) released a technical assistance document addressing employers' use of automated decision-making tools, such as artificial intelligence (AI), in employment decisions, including hiring.
Employers increasingly utilize these tools to:
- Save time and effort,
- Increase objectivity,
- Optimize employee performance, or
- Decrease bias.
Using AI helps employers with a wide range of employment matters, such as:
- Selecting new employees,
- Monitoring performance, and
- Determining pay or promotions.
Without proper safeguards, however, these tools risk violating existing civil rights laws.
“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice, and equality,” said EEOC Chair Charlotte A. Burrows. “I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination.”
AI in hiring
AI uses algorithms to make computers function more like humans. In the hiring process, AI can:
- Scan a resume and compare it to a job description or a database of current employee qualifications (a simplified matching sketch appears after this list). Some programs can also pull information from job applicants’ social media profiles and other publicly available data.
- Assist early in the talent acquisition process. For instance, chatbots can “talk” with people visiting the jobs page of a company’s website, asking relevant questions and gathering valuable information from people who haven’t yet applied for a job. This may help HR discover passive candidates, or weed out people who are considering applying but aren’t qualified, saving them from wasting time on a job they have no chance of getting.
- Create job ads based on data with wording tailored to attract the right candidates.
- Evaluate a candidate’s personality in a video application, using algorithms that analyze facial expressions. AI can also detect whether an applicant in a video is reading from a script or being coached by another person off camera.
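To make the resume-screening idea above concrete, here is a minimal, hypothetical sketch of how a screening tool might score resumes against a job description using simple keyword overlap. All names, data, and the scoring method are invented for illustration; commercial products rely on far more sophisticated language models.

```python
# Hypothetical illustration of resume-to-job-description matching.
# All data and names are invented; real screening tools are far more complex.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a document and return its set of word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap_score(resume: str, job_description: str) -> float:
    """Fraction of job-description terms that also appear in the resume."""
    jd_terms = tokenize(job_description)
    resume_terms = tokenize(resume)
    return len(jd_terms & resume_terms) / len(jd_terms) if jd_terms else 0.0

job_description = "Sales representative with CRM experience and strong negotiation skills"
resumes = {
    "Candidate A": "Five years of CRM-driven sales and negotiation experience",
    "Candidate B": "Recent graduate seeking a marketing internship",
}

# Rank candidates by how many job-description terms their resumes cover.
for name, resume in sorted(resumes.items(),
                           key=lambda item: overlap_score(item[1], job_description),
                           reverse=True):
    print(f"{name}: {overlap_score(resume, job_description):.2f}")
```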
Risks of using AI
Because AI uses patterns of past behavior to learn, it can learn bias. For example, suppose the data used by recruitment software to evaluate applicants for a sales position includes the company’s highest-performing salespeople over the past 20 years, and those salespeople happen to be overwhelmingly young, white, and male. Applicants with those same characteristics may then be identified as the “best” candidates, possibly ruling out a highly qualified middle-aged African American woman. AI tools can, however, be programmed to unlearn or override certain information to avoid making biased decisions.
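As a hypothetical illustration of how that kind of bias creeps in, the sketch below scores applicants by similarity to invented historical “top performer” records that skew toward one age group. The features, records, and scoring are made up solely to show the mechanism: because the demographic feature is included at all, it carries as much weight as actual skill.

```python
# Hypothetical, simplified illustration of how bias is "learned" from
# skewed historical data. All features and records are invented.
# A naive tool scores applicants by similarity to past top performers.

past_top_performers = [
    # (age_band, sales_skill) -- 20 years of overwhelmingly young hires
    ("under_35", 8), ("under_35", 9), ("under_35", 7), ("under_35", 8),
]

def naive_score(applicant):
    """Average similarity to historical top performers.
    Because age_band is included as a feature, it counts as much as
    actual skill -- this is where the bias creeps in."""
    age_band, skill = applicant
    total = 0.0
    for past_age, past_skill in past_top_performers:
        age_match = 1.0 if age_band == past_age else 0.0
        skill_match = 1.0 - abs(skill - past_skill) / 10.0
        total += (age_match + skill_match) / 2.0
    return total / len(past_top_performers)

younger_average_candidate = ("under_35", 6)
older_strong_candidate = ("over_45", 9)

print(naive_score(younger_average_candidate))  # scores higher despite weaker skill
print(naive_score(older_strong_candidate))     # scores lower despite stronger skill
```

Dropping or masking the age_band feature removes this particular shortcut, which is the kind of override described above.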
Users of AI in recruitment, therefore, should be on guard to make sure biases aren’t “baked in” with the data. AI can help with recruitment, but humans must dictate the parameters of a search.
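One practical form of the ongoing self-analysis the EEOC Chair encourages is comparing the selection rates an AI tool produces for different groups. A long-standing rule of thumb, the “four-fifths rule,” flags any group whose selection rate falls below 80 percent of the highest group’s rate. The sketch below assumes invented group labels and counts, and it is an illustration, not legal advice.

```python
# Minimal sketch of an adverse-impact check on an AI screening tool's output.
# Group labels and counts are invented for illustration only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to selected / applied."""
    return {group: selected / applied
            for group, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.
    A ratio below 0.80 is a common red flag warranting closer review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# (selected, applied) per group, e.g. applicants the tool advanced to interviews
outcomes = {
    "group_a": (48, 80),   # 60% selection rate
    "group_b": (27, 75),   # 36% selection rate
}

for group, ratio in four_fifths_check(outcomes).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```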
Another risk is that algorithms generally aren’t subject to regulatory oversight and are often treated as proprietary information protected by trade-secret laws. This means an employer using AI to recruit could be held responsible for decisions driven by information the software developer doesn’t legally have to divulge. Work with reputable technology vendors and discuss potential legal issues when evaluating products that use AI.
Key to remember: Employers may use AI to streamline employment decisions, like hiring, but the EEOC reminds them to make sure the technology doesn’t run afoul of discrimination laws.