By Kofi Arhin ’23, doctoral candidate in management

Some years ago, Kalisha White applied for a job at Target. She never heard back from the hiring committee, so she ran a small experiment: she changed the name on her CV from Kalisha to Sarah and altered some details to make Sarah look less qualified for the role than Kalisha. Then Sarah applied for the same role at Target and, you guessed it, got a call back for an interview. According to The New York Times, Target paid over half a million dollars to settle the resulting discrimination lawsuit.

Discrimination in hiring is a longstanding problem, and companies are now adopting artificial intelligence (AI) to support their hiring processes. However, there is a fear that these AI systems may replicate, or even worsen, existing human biases.

To address this concern, my research shows the exciting potential of AI to make hiring equitable, even in the presence of human discrimination. In my study, I used real-world hiring data from a major U.S. airline. In this data, job candidates recorded video responses to interview questions, and hiring managers watched these videos and decided whom to recommend for hire.

With this data, I conducted an experiment with two scenarios. In the first scenario, I trained AI models to predict who gets hired based on historical hiring manager decisions and the recorded video responses. In the second scenario, which is my intervention, I trained AI models to predict who gets hired based on the recorded video responses and the similarity of those responses to exemplary answers from platforms such as Indeed and Glassdoor. Using example answers makes the AI models less reliant on hiring manager decisions when identifying the best candidates, as the sketch below shows.
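
To make the two scenarios concrete, here is a minimal sketch, assuming the video responses have been transcribed to text. The toy transcripts, the TF-IDF features, and the logistic regression model are illustrative stand-ins, not the actual data or models from my study, and the intervention is shown here simply as ranking answers by their similarity to one exemplary answer.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy transcripts of candidates' video responses (purely illustrative).
transcripts = [
    "I listen to each side, find common ground, and agree on next steps",
    "I would escalate the issue to my manager immediately",
    "I prioritize tasks by deadline and by customer impact",
]
manager_decision = np.array([1, 0, 1])  # historical hire / no-hire labels

# One exemplary answer, e.g. sourced from Indeed or Glassdoor.
exemplar = ["Listen to both parties, find common ground, and agree on next steps"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(transcripts + exemplar)
responses, exemplar_vec = X[:-1], X[-1]

# Scenario 1: learn directly from historical manager decisions.
standard_model = LogisticRegression().fit(responses, manager_decision)
print("hire probabilities:", standard_model.predict_proba(responses)[:, 1].round(2))

# Scenario 2 (the intervention): score candidates by how closely their
# answers match the exemplary answer, reducing reliance on manager labels.
similarity = cosine_similarity(responses, exemplar_vec).ravel()
print("similarity to exemplar:", similarity.round(2))
```

The key design choice is where the training signal comes from: past manager decisions in the first scenario, and similarity to exemplary answers in the second, which is what decouples the model from any bias baked into those past decisions.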

I assessed the equity of the AI models using the adverse impact ratio, a measure rooted in U.S. civil rights enforcement guidelines. It is the selection rate for underrepresented group members divided by the selection rate for majority group members. Under the four-fifths rule, a ratio at or above 0.8 indicates equitable hiring practices, while a ratio below 0.8 signals adverse impact.
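
In code, this four-fifths check is a few lines of arithmetic. The applicant and hire counts below are made up purely to illustrate the calculation.

```python
def adverse_impact_ratio(minority_hired, minority_applicants,
                         majority_hired, majority_applicants):
    """Selection rate of the underrepresented group divided by
    the selection rate of the majority group."""
    minority_rate = minority_hired / minority_applicants
    majority_rate = majority_hired / majority_applicants
    return minority_rate / majority_rate

# Example: 12 of 60 underrepresented applicants hired vs. 30 of 100 majority.
air = adverse_impact_ratio(12, 60, 30, 100)
print(f"AIR = {air:.2f}")  # 0.67, below 0.8, so this would flag adverse impact
```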

With my intervention, the adverse impact ratio for Black and Hispanic applicants rose above 0.8, while it remained below 0.8 for both the hiring manager decisions and the standard AI predictions, showing that the intervention works. This illustrates the power of AI to make hiring fairer for everyone, and to potentially protect companies like Target from future lawsuits.


The content of this post was originally part of RPI’s Three Minute Thesis Competition, which challenges doctoral students to effectively explain their research to a general audience in three minutes.