How to Responsibly Use AI-Powered HR Tools

By Indeed Editorial Team
Artificial intelligence (AI) is already playing multiple roles within HR departments at many companies. A recent survey of more than 250 HR leaders found that 73% are using AI in recruitment and hiring processes.

Today, AI tools can be used to help with reviewing resumes and scoring job candidates, sourcing talent for open roles, writing job descriptions, identifying opportunities to promote employees, and even sending automated messages to applicants. 

“You name it, and there’s an AI tool being built today to work on it,” says Trey Causey, head of Responsible AI and senior director of data science at Indeed. 

However, the sophistication of these tools varies, and so does their developers’ attention to risk. Organizations should understand the spectrum of risk involved in using AI and develop strategies for using it responsibly.

AI has the potential to reduce human bias, particularly in hiring, creating better opportunities for workers while streamlining rote tasks so HR professionals can focus on the more human aspects of their roles. But AI can also perpetuate and even amplify inherent biases — and waste both money and time. 

Here are four steps organizations can take to identify risks and make sure their use of AI is fair, ethical and effective.

1. Evaluate the risks and the rewards for your organization

First, ask whether AI tools are a good fit for your company’s HR function. AI systems can scale up processes, such as identifying and scoring many more job candidates than could be processed manually. 

However, “You can also scale up mistakes and errors, as no system is perfect,” says Jey Kumarasamy, an associate at Luminos.Law, a law firm focused on AI. “Even if you have 90% accuracy, which is being generous, if you are processing thousands of applications, there’s going to be a sizable number of applications that were assessed incorrectly.” 
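At that rate, for instance, screening 10,000 applications would mean roughly 1,000 incorrect assessments (10,000 × 10% error rate), each one a candidate judged wrongly.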

The starting point for evaluating AI-powered HR tools should be an understanding that the tools are imperfect. “Biases are inevitable, so companies will need to figure out how they plan to address them, or accept that they are at risk,” Causey says. 

While some companies accept the risk because of the productivity boost, others may feel that the potential margin of error compromises their values or creates too much complexity in the face of increased regulatory pressures. 

If you move forward with AI, choose your tools wisely. AI that provides transcripts of interview conversations, for example, is typically a relatively low-risk application (although it may not perform well when used with speech from non-fluent speakers). In contrast, AI that assesses and scores candidates based on their performance in video interviews “is probably the most problematic area because there are a lot of risks and ways it can go wrong,” Kumarasamy says.

Ultimately, AI should augment and improve human processes, not replace them. Before adopting AI tools, make sure your HR team is sufficiently staffed so that humans can review every step of any process that AI automates. Leave critical HR matters, such as final hiring decisions, promotions and employee support, to people. Luckily, if AI tackles the mundane tasks, HR professionals will have much more time and flexibility for those duties.

2. Screen third-party vendors that provide AI-powered tools

Once you’ve decided what kind of AI tools are best for your organization’s needs, you might approach prospective vendors with specific questions, such as: 

  • How do they audit their system? When was the last time it was tested, and what metrics were used?
  • Was the testing done internally or by an external group?
  • How is bias mitigated? If they claim their system has minimal bias, what does that mean, and how is that bias measured? 
  • Are there testing metrics available for you to review as a prospective client?
  • If the model’s performance degrades, do they provide post-deployment services to help train your employees in configuring and maintaining the system?
  • Are they compliant with current and emerging regulations? “I spoke with a vendor last year and asked if they were compliant with a specific regulation, and they hadn’t heard of it before,” Causey says. Not only was that a red flag, but “it clearly, directly impacted their product.” 
  • Will they comply with any AI audits you conduct? “When you do an AI audit, chances are you need a vendor to help — and that’s usually not the best time to find out that your vendor doesn’t want to cooperate with you or provide you with documentation or results,” Kumarasamy says.

3. Identify and monitor bias

AI algorithms are only as unbiased as the data used to train them. While employers can’t modify how algorithms are developed, there are ways to test out the tools before implementing them. In fact, New York City legislation now requires employers to conduct third-party bias auditing and publish audit summaries before launching AI for hiring.

Organizations can also use a process known as “counterfactual analysis” to see how an AI model reacts to different inputs. For example, if AI evaluates resumes for job candidates, try changing the candidate’s name or the school they attended — does the algorithm rank the candidate differently? 

“This has been done since the ’50s, with sociologists sending resumes to employers but changing just one thing on their resume to see how the callback rates differ,” Causey says. “We can do that with AI, too, and pull in a lot of existing social scientific knowledge about how we can evaluate bias in AI models.”
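To make that concrete, here is a minimal sketch of a counterfactual test in Python. The `score_resume` function is a hypothetical stand-in for whatever scoring model or vendor API you are evaluating; the point is only the pattern of changing one field at a time and comparing the resulting scores.

```python
from copy import deepcopy

def score_resume(resume: dict) -> float:
    """Hypothetical stand-in for the vendor's scoring model.
    Replace with a call to the actual system under test."""
    return 50.0 + 5.0 * resume["years_experience"]  # dummy score so the sketch runs

def counterfactual_scores(resume: dict, field: str, variants: list[str]) -> dict[str, float]:
    """Score the same resume repeatedly, swapping only one field each time."""
    results = {}
    for value in variants:
        candidate = deepcopy(resume)
        candidate[field] = value  # everything else is held fixed
        results[value] = score_resume(candidate)
    return results

base = {"name": "Emily Walsh", "school": "State University", "years_experience": 5}
by_name = counterfactual_scores(base, "name", ["Emily Walsh", "Lakisha Washington", "Jamal Jones"])

# A meaningful spread across name variants is a red flag worth escalating.
spread = max(by_name.values()) - min(by_name.values())
print(by_name, f"spread={spread:.2f}")
```

The same pattern works for any single attribute, such as the school, an employment gap or a graduation year, and can be run in bulk across a sample of real or synthetic resumes.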

As you implement AI systems, continuously monitor them to identify and correct any discriminatory patterns when they emerge, and stay apprised of developing research on data science and AI. “When you have humans making decisions, it’s difficult to know if they’re biased,” Causey says. “You can’t get into someone’s brain to ask about why they said yes to this candidate but no to that candidate; whereas with a model, we can do that.”
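One concrete statistic to monitor is each group’s selection rate relative to the highest group’s rate, the “impact ratio” that also underlies the four-fifths rule of thumb from U.S. employment guidance. Here is a minimal monitoring sketch, assuming you log each screened candidate’s demographic group and the tool’s pass/fail outcome:

```python
from collections import defaultdict

def impact_ratios(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, selected) pairs logged from the AI screening step.
    Returns each group's selection rate divided by the highest group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

log = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
for group, ratio in impact_ratios(log).items():
    if ratio < 0.8:  # four-fifths rule of thumb: a prompt to investigate, not a verdict
        print(f"Group {group}: impact ratio {ratio:.2f}, investigate")
```

A low ratio is not proof of discrimination on its own, but tracking it over time makes drift visible and gives auditors a concrete starting point.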

There’s no standard suite of tests to evaluate HR tools for bias. At a minimum, employers should clearly understand how AI is being used within the organization, which could include keeping an inventory of all AI models in use. Organizations should document which tools were provided by which vendor, along with the use cases for each tool. 
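There is no mandated format for such an inventory, but even a simple structured record per tool goes a long way. Here is a sketch of one possible entry; the field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One illustrative inventory entry per AI tool in use."""
    name: str                # e.g., "Resume screener"
    vendor: str              # who supplies and maintains the model
    use_case: str            # where the tool sits in the HR workflow
    last_bias_audit: str     # date of the most recent audit
    audit_performed_by: str  # internal team or named external auditor
    known_limitations: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="Resume screener",
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        use_case="Initial screening of inbound applications",
        last_bias_audit="2023-06-01",
        audit_performed_by="External auditor",
        known_limitations=["Lower accuracy on non-traditional career paths"],
    )
]
```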

In the best-case scenario, audits will bring together different departments, including in-house legal teams as well as data scientists, alongside external counsel or third-party auditors. There are also publicly available resources to help organizations audit their own AI tools — for example, the National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework built around four functions: govern, map, measure and manage. 

4. Stay ahead of evolving legislation

The potential risks of automated HR tools are not just reputational and financial — they’re legal too. Legislation is quickly emerging in response to the proliferation of AI in the workplace. 

In the European Union, the proposed AI Act aims to assign risk levels to AI applications based on their potential to be unsafe or discriminatory, and then regulate them based on their ranking. For example, the current proposal considers AI applications that scan resumes and CVs to be “high-risk” applications that would be subject to strict compliance requirements.

In the U.S., more than 30 pieces of state legislation are pending regarding the use of AI in the private sector, including AI in employment decisions. In July 2023, New York City’s Automated Employment Decision Tool (AEDT) law came into effect, mandating that employers disclose when they’re using AI in the hiring process and perform annual audits showing that their systems aren’t biased on the basis of sex, race or ethnicity. Job seekers applying for roles at New York City-based companies may also request information about how AI analyzes their information.

Additionally, many existing laws, including anti-discrimination laws such as Title VII of the Civil Rights Act of 1964, apply to employment decisions made by AI. “There is a misconception that if a law doesn’t directly address AI systems, it doesn’t affect an AI system,” Kumarasamy says. “That’s not true, especially when we’re talking about employment.” Whether an employment decision is made by a human being or an AI system, the organization is liable for any bias.

While audits are a good starting point, the best way to prepare for emerging regulatory requirements and ensure that your AI is operating effectively and equitably is to build out a larger AI governance program. 

Governance systems document the organization’s principles with respect to AI, and create processes for continually assessing tools, detecting issues and rectifying any problems. For example, Indeed has developed and publicly published its own principles for the ethical and beneficial use of AI at the company. Indeed has also created a cross-functional AI Ethics team that builds tools, systems and processes to help ensure that technology is used responsibly. 

Even with safeguards, the new generation of AI tools is complex and fallible. 

However, putting in the effort to use them responsibly opens the door to building better processes. AI can help humans be more efficient and less biased, but only if humans provide the necessary oversight. For example, there are opportunities to think critically about the parameters that an AI algorithm should consider for job qualification, radically improving the way candidates are evaluated. 

“How do we really get to the core of what it means to be successful in a job?” Causey asks. Skills-based hiring can be less biased than relying on school or company names, something AI can be tuned to prioritize in a way humans might not be. “There’s a real potential for leveling the playing field with AI for job seekers,” Causey says. 
