To sift through thousands of job applications more quickly, companies and job search sites like LinkedIn have begun using advanced candidate-screening programs, or AI hiring programs. These programs combine algorithms and AI to select the “top candidates” for consideration. Although AI systems have been in wide use for years, companies should be cautious about relying on them in hiring. Staff at companies that use these programs, as well as users of the sites themselves, have noticed biases in which candidates get selected. As online applications become the norm, it is concerning how this will affect minority applicants who may be unfairly rejected by AI.
Obscuring Decision-Making With Tech
A few years ago, LinkedIn’s AI job-matching programs were found to have gender bias. Rather than ranking top applicants by qualifications and skills, the biased program gave heavy weight to search behavior.
The algorithm interpreted male candidates as searching for jobs more “aggressively” than female candidates, so the AI recommended more men than women for roles. LinkedIn claims to have fixed the issue, but it is unclear how. This lack of transparency creates uncertainty for company and individual users who want to prevent discrimination. In another case, a job seeker uncovered age bias in a hiring program by submitting two otherwise identical resumes: the one listing their actual age was rejected, while the one listing a younger age was selected by the AI.

Credit: Large majority opposes using AI to make final call on hiring, but views are more mixed on having AI review applications, Pew Research Center, Washington, D.C. (April 10, 2023)
Biased Data In, Biased Data Out
To understand how these biases appear, it helps to know how the algorithms and AI work together. The algorithms draw on data from applicants’ resumes, cover letters, and search histories, as well as companies’ past hiring records, to identify the most desired attributes. When that input data is biased, such as containing far more resumes from one identity group, it produces a biased output. The AI then learns to prioritize data that reflects the biased output, creating a snowball effect in which the bias compounds at each step of the process.
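The “biased data in, biased data out” loop can be shown with a toy sketch. This is a deliberately simplified, hypothetical model (the group labels and numbers are invented, and real screening systems are far more complex): a naive scorer fit on skewed historical hires simply reproduces the skew for new applicants.

```python
# Toy illustration of "biased data in, biased data out".
# All groups and figures here are hypothetical, not real hiring data.
from collections import Counter

# Historical records: (group, was_hired). Group A was hired far more
# often than group B, even though qualifications were comparable.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

totals = Counter(group for group, _ in history)
hires = Counter(group for group, hired in history if hired)

# The "model" learns each group's past hire rate as its score.
hire_rate = {group: hires[group] / totals[group] for group in totals}

def score(applicant_group):
    """Score a new applicant purely from the biased historical pattern."""
    return hire_rate[applicant_group]

print(score("A"))  # 0.8 -- the historical skew becomes the model's output
print(score("B"))  # 0.2 -- group B is penalized before qualifications matter
```

Each round of hiring based on these scores adds more group-A hires to the history, which is the snowball effect described above.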
As awareness grows of how companies use AI programs, many governments and organizations are taking steps to protect users. The European Union’s “AI Act” prohibits companies from deploying AI systems in ways that manipulate or deceive users. Additionally, Virginia’s “High-Risk Artificial Intelligence Developer and Deployer Act” outlines standards for companies using AI systems. In academia, Sandra Wachter, a professor at the University of Oxford, co-created the Conditional Demographic Disparity test, a public tool that companies and individuals can use to test their AI algorithms for bias.
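The core idea behind such tests can be sketched in a few lines. This is a simplified illustration of plain demographic disparity, not the authors’ Conditional Demographic Disparity implementation (the conditional version additionally stratifies by a legitimate factor such as qualifications), and the applicant data below is invented:

```python
# Simplified demographic disparity (DD) sketch, with hypothetical data.
# DD = (share of a group among rejected) - (share of that group among accepted).
# A positive DD means the group is over-represented in rejections.

def demographic_disparity(outcomes, group):
    """outcomes: list of (group_label, accepted_bool) pairs."""
    rejected = [g for g, accepted in outcomes if not accepted]
    accepted = [g for g, accepted in outcomes if accepted]
    share_of_rejected = rejected.count(group) / len(rejected)
    share_of_accepted = accepted.count(group) / len(accepted)
    return share_of_rejected - share_of_accepted

# Invented screening results: group B is mostly rejected, group A mostly accepted.
outcomes = ([("A", True)] * 40 + [("A", False)] * 10 +
            [("B", True)] * 10 + [("B", False)] * 40)

print(round(demographic_disparity(outcomes, "B"), 2))  # 0.6
```

A value near zero would suggest rejections are spread proportionally across groups; here, group B makes up 80% of rejections but only 20% of acceptances, flagging the system for closer review.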
Is This Really the Future of Employment?
As a person who has applied to many jobs online, I can’t help but wonder whether potentially biased AI programs are the reason I never heard back from some opportunities. Leaving the decision to an algorithm makes me uneasy, because the AI review process is opaque. Creating more specific regulations is a good step, but it is impossible to know whether they are being followed when job search sites do not disclose how their programs are designed. In my work experience, the jobs I have landed resulted from persistent communication with real humans. Humans have biases too, but AI with coded biases reaches and affects more people, at a faster rate, than any biased human can. In the future, I would like to see more companies publicly posting the details of programs that continue to affect users in unseen ways.

This article was written by a guest contributor, S. Jarvis.

If this article sparked your interest in workplace diversity, our program may be of use to you. Find more details here!





