
Navigating AI in Recruiting: Addressing Compliance Risks

Written by Alex Vial | Aug 9, 2023 3:00:00 PM

Maybe you haven’t heard, but Artificial Intelligence (AI) is revolutionizing the recruitment process in Human Resources. Promising greater efficiency and effectiveness in candidate selection, AI-powered tools are now widely utilized to streamline candidate sourcing, screening, and assessment. However, despite these potential benefits, your organization needs to be aware of the significant compliance risks associated with the use of AI in recruiting. Let’s explore these risks and their potential consequences, so you can make sure your organization isn’t on the hook when the EEOC comes knocking.

One of the most concerning HR compliance risks when implementing AI in recruiting is the potential for unconscious bias and discrimination. AI algorithms are trained on historical data, including resumes, interview outcomes, and hiring decisions. If this pool of data contains biases or reflects past discriminatory practices, the AI system may inadvertently perpetuate these biases in your hiring process.

A 2018 study conducted by Dastin et al. revealed that some AI algorithms used in recruitment exhibited gender and racial biases, creating the potential for unfair and discriminatory hiring practices. You may be thinking, “We aren’t to blame when this was clearly not our intention!” and you may be right. However, that is not how the government will see it.

The EEOC released guidance in May 2023 warning that employer use of AI in talent acquisition could violate Title VII through unintentional discriminatory hiring practices, also known as “disparate impact.” Disparate impact occurs when certain groups of candidates are systematically disadvantaged by policies, practices, or procedures the organization believed to be neutral.
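
To make “disparate impact” more concrete, a common first-pass check drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures is the “four-fifths rule,” which compares each group’s selection rate to the highest group’s rate. Below is a minimal Python sketch of that comparison; the group labels and counts are invented for illustration and do not come from any real tool or dataset.

```python
# Hypothetical four-fifths (80%) rule check.
# The applicant and selection counts below are invented for illustration.

applicants = {"Group A": 200, "Group B": 180}   # candidates screened by the tool
selected   = {"Group A": 60,  "Group B": 27}    # candidates the tool advanced

selection_rates = {g: selected[g] / applicants[g] for g in applicants}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    status = "potential adverse impact" if impact_ratio < 0.8 else "within 4/5 threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```

An impact ratio below 0.8 is generally treated as a red flag that calls for closer review, not as automatic proof of discrimination.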

Ensuring fairness in the hiring process is a fundamental HR compliance requirement. A comprehensive study by Angwin et al. (2020) found that an AI-driven recruiting tool used by a prominent technology company favored candidates from specific academic institutions, indirectly perpetuating a class-based bias. But how does this happen exactly? As we mentioned earlier, AI algorithms rely on historical data for training, so any pre-existing biases in that data can be amplified and perpetuated throughout the recruitment process. For instance, if historical hiring decisions favored candidates from a particular demographic group, the AI system may learn and reinforce those preferences, resulting in a biased candidate selection process. Such practices expose organizations to discrimination claims, costly lawsuits, and lasting damage to their reputation.
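
To illustrate the mechanism, here is a toy sketch using entirely synthetic data: a model trained on historical decisions that favored one group learns to reproduce that preference, even though the underlying “skill” is distributed identically across groups. This is an assumption-laden illustration, not a description of any vendor’s actual model.

```python
# Toy illustration: a model trained on biased historical decisions
# learns to reproduce them. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)              # distributed identically across groups
group = rng.integers(0, 2, size=n)      # 0 = historically favored, 1 = not

# Historical hiring decisions favored group 0 regardless of skill.
hired = ((skill + 1.5 * (group == 0)) > 1.0).astype(int)

# Training on group membership (or on proxies for it) bakes the bias in.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted selection rate for group {g}: {rate:.0%}")
```

Note that simply deleting the group column does not solve the problem: features such as alma mater or zip code can act as proxies that carry the same signal.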

Another significant compliance risk related to AI in recruiting is the lack of transparency and accountability in decision-making. Many AI systems operate as "black boxes," meaning they do not provide clear explanations for their decisions. This opacity can make it challenging to determine whether the AI algorithm adheres to relevant employment laws and regulations.

According to a 2021 report by the American Civil Liberties Union (ACLU), the lack of transparency in AI recruiting tools makes it difficult for candidates to understand why they were rejected, leading to a sense of unfair treatment and frustration. As a result, companies may face legal challenges or damage to their employer brand.

It is also worth mentioning the data security aspect of AI in recruiting. Studies have found that some AI recruiting tools lack proper data encryption, leaving candidate information susceptible to hacking and unauthorized access. A data breach could not only lead to legal penalties but also erode trust between candidates and the organization.
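
On the safeguard side, one baseline control is encrypting candidate records at rest. The short Python sketch below uses the widely available cryptography package’s Fernet recipe; the record contents and storage details are hypothetical, and a real deployment would also need proper key management (for example, a secrets manager) and access controls.

```python
# Hypothetical sketch: encrypting candidate PII at rest with symmetric encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store and rotate this key in a secrets manager
cipher = Fernet(key)

candidate_record = b'{"name": "Jane Doe", "email": "jane@example.com"}'
encrypted = cipher.encrypt(candidate_record)   # safe to persist to disk or a database
decrypted = cipher.decrypt(encrypted)          # recoverable only with the key

assert decrypted == candidate_record
```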

While AI offers immense potential for revolutionizing your recruiting process, HR compliance risks must be thoroughly considered and addressed before adopting it. The use of AI in recruitment can inadvertently perpetuate bias, undermine the transparency of your recruitment process, and result in severe legal and reputational consequences. As with anything new on the market, it may be best to bide your time until HR professionals and technology developers can work together to refine AI systems to be more transparent and compliant with relevant regulations. A thoughtful approach to implementing AI in recruiting can lead to better decision-making and a stronger, more compliant organization in the long run.


Alex Vial
HR Advisor, HR Services

Alex obtained his Bachelor of Science in Business Administration, with a focus in Human Resource Management, from The University of New Orleans. He has worked in a variety of industries, including not-for-profit organizations, telecommunications/IT, and solar/renewables. In his career, he has focused on HR and legal compliance for companies operating in multiple states, professional development and employee training, employee engagement, onboarding, offboarding, and conflict resolution. Alex believes the best part of HR is helping companies create pro-employee cultures, increasing retention and reducing recruiting costs. Alex loves tackling new challenges on behalf of his customers at empact and Crescent.

His personal philosophy is “The obstacle in the path becomes the path. Within every obstacle is an opportunity to improve our condition.” – Ryan Holiday