SquarePeg is dedicated to ethically implementing AI to remove the unconscious bias and physical constraints that hinder organizations from conducting fair applicant evaluations.
Unconscious bias in recruitment refers to the subtle preferences and preconceived opinions that recruiters and hiring managers hold about certain applicants, typically shaped by their own background, upbringing, and cultural experiences.
Even the most well-intentioned and experienced recruiters and hiring managers are subject to unconscious bias. In recruiting, it often expresses itself as “culture fit”: an affinity for one name over another, a shared hometown, or a hiring manager’s sense that they could see themselves hanging out with a candidate after work.
When an organization makes recruiting decisions rooted in unconscious bias, the result is a workplace lacking diversity. Multiple studies have found that diverse workplaces are more profitable, productive, and innovative than less diverse ones.
A 2015 McKinsey report found that companies in the top quartile for ethnic and racial diversity in management were 35% more likely to have financial returns above their industry mean, and those in the top quartile for gender diversity were 15% more likely to have returns above the industry mean.
A Harvard Business School report found that gender diversity leads to more productive companies, as measured by market value and revenue, in countries where gender diversity is “normatively” accepted.
Apart from negatively impacting a business’s productivity and bottom line, not recognizing or preventing bias in the hiring process is against the law in the United States and many other countries.
In addition to unconscious bias, the sheer volume of applications often makes it physically impossible to review every applicant thoroughly and accurately, preventing organizations from properly evaluating candidates.
Google receives, on average, 3.3 million applications per year. While that’s an extreme example, it’s not uncommon for a single role to receive hundreds of applications per day.
The average recruiter takes 7.4 seconds to review a resume for “general fit” and another 30 seconds to two minutes to review the entire resume if the applicant is deemed a fit. Most would agree that 7.4 seconds is insufficient to understand a candidate’s career history thoroughly. However, this practice was born out of the necessity of ensuring a good candidate experience.
Recruiting best practices suggest employers follow up with applicants with a screening decision within days of receiving their applications, which means these decisions must be made for every applicant regardless of volume. Hence the average review time of 7.4 seconds.
Like most industries, human resources and recruitment have seen a massive increase in AI-powered technology. Most AI-enabled recruiting products can solve the physical limitation problem by helping employers screen many applicants quickly. However, quality, transparency, and fairness vary widely, and it’s important to understand why.
A recent Bloomberg study found that ChatGPT discriminates against certain groups based on their names. This is unsurprising to anyone with a general understanding of how large language models work. Large language models like ChatGPT are trained on vast quantities of articles, books, online comments, and social media posts, all written by humans and carrying the same biases mentioned above.
While this study exposed an important fact, most would never suggest relying solely on ChatGPT to make screening decisions. Unfortunately, under the hood, this is exactly how many low-quality AI-enabled recruiting products function.
While the previously mentioned risks are real, organizations leveraging well-built AI-enabled recruiting products realize significant opportunities and efficiencies.
For obvious reasons, most products don’t publish details regarding exactly how they are built and function. However, there are still ways to gauge if a product leverages AI responsibly.
If a product publicly discusses both the risks and benefits of AI, the company likely understands the potential risks of leveraging AI and is committed to its ethical implementation.
Products that position their AI features as “assistive,” as opposed to “automated” or “powered,” likely recognize that recruitment has always been, and always will be, a human-centric industry, and that human oversight is essential when working with AI.
SquarePeg leverages AI models to write job descriptions; to parse, enrich, and structure candidate resumes; and to create ontologies around job requirements such as titles, skills, and industries, to name a few examples.
We are constantly evaluating and improving our AI implementations and inventing new ways to leverage AI to enhance efficiency, reduce costs, and improve accuracy in recruitment.
Publish a job for free and experience for yourself how SquarePeg simplifies the process of recruitment and increases hiring efficiency.