Underrepresented groups are more likely to predict AI will positively impact DEIB goals than the average respondent — they are also more likely to have concerns

Key Takeaways 

  • Indeed research finds that HR/TA leaders and job seekers believe AI will have more of a positive impact on diversity, equity, inclusion and belonging (DEIB) goals than a negative one.
  • HR/TA leaders and job seekers from underrepresented groups are more likely to anticipate both positive and negative impacts of AI.
  • Experts say AI should be a companion for HR/TA leaders, not a decision maker.

In a recent Indeed-commissioned study to determine what HR/TA leaders and job seekers think about AI and how it’s impacting hiring, a surprising finding emerged. The Indeed Global AI Survey found that respondents from some underrepresented groups are, in many cases, more likely to predict that AI will positively impact diversity, equity, inclusion and belonging (DEIB) goals than the average respondent. 

For example, globally, job seekers with disabilities are more likely to say that AI will have a positive impact on people with disabilities rather than a negative impact (35% vs. 20%), and they’re more likely to anticipate that positive impact than job seekers overall (35% vs. 27%). The survey found the same dynamic among HR/TA leaders with disabilities. 

A bar chart titled "Will AI positively or negatively affect DEIB goals related to people with disabilities?". According to the Indeed Global AI Survey, 36% of HR/TA leaders with disabilities believe AI will have a positive impact while 21% believe it will have a negative impact. 31% of HR/TA leaders overall believe AI will have a positive impact and 15% believe it will have a negative impact.

The same trend also played out among HR/TA leaders and job seekers who identify as gay, lesbian or bisexual, as well as job seekers who are Black or African American: they are more likely to anticipate that AI will help rather than harm DEIB goals, and more likely than respondents overall to see AI as a positive force for DEIB in the workplace.

A bar graph titled "Will AI positively or negatively affect DEIB goals related to LGBTQ+ issues?" based on data collected in the Indeed Global AI Survey. 39% of HR/TA leaders who identify as gay or lesbian believe AI will have a positive impact while 17% believe it will be negative. 34% of HR/TA leaders who are bisexual believe AI will have a positive impact while 22% believe it will be negative. 22% of HR/TA leaders overall believe AI will have a positive impact while 12% believe it will be negative. 23% of job seekers who identify as gay or lesbian believe AI will have a positive impact on LGBTQ+ issues while 22% believe it will be negative. 27% of job seekers who are bisexual believe AI will have a positive impact while 16% believe it will be negative. Finally, 17% of job seekers overall believe AI will have a positive impact while 12% believe it will have a negative impact on LGBTQ+ issues.
A bar chart titled "Will AI positively or negatively affect DEIB goals related to race?" According to the Indeed Global AI Survey, 28% of job seekers who are Black or African American believe AI will have a positive impact on DEIB goals related to race while 17% believe it will have a negative impact. 23% of job seekers overall believe AI will have a positive impact on DEIB goals related to race while 14% believe it will have a negative impact.

And yet, the opposite is also true. Even though job seekers who identify as gay or lesbian are more likely to anticipate that AI will have a positive rather than a negative impact on LGBTQ+ issues at work, they’re also nearly twice as likely as the average respondent to worry that the impact of AI on LGBTQ+ issues will be negative (22% vs. 12%).

At a glance, these findings from the survey, which included more than 7,000 HR/TA leaders and job seekers from seven countries around the world, may appear to be in conflict. But according to Jessica Hardeman, Indeed’s Global Director, Employee Lifecycle, it makes sense: historically marginalized groups are more attuned to both the potential and the risks of AI in the workplace.

Headshot of Jessica Hardeman in front of a peachy blue background.
Jessica Hardeman, Indeed’s Global Director, Employee Lifecycle

At its best, Hardeman says, AI can unlock new accessibility tools for people with disabilities and help companies train workers in different languages and cultural contexts more quickly. But at its worst, AI can entrench existing biases against those same underrepresented groups. “Are you using AI for good or not?” Hardeman says. “I think that’s what it comes down to.”

Here’s a look at how AI can help you advance your company’s DEIB goals — and how you can avoid common traps that risk setting them back.

How AI Can Further Your DEIB Goals…  

When implemented correctly, AI tools can be pivotal partners for DEIB leaders. Hardeman says Indeed uses AI to help review job descriptions to ensure that the language is inclusive. Her team also uses generative AI to develop animations and voiceovers for training content tailored to different countries. 

AI tools can also help unearth problems related to discrimination that’s entrenched within a company. Diversio, a company that helps businesses track, measure and improve their DEIB goals, uses natural language processing to sift through open responses to employee surveys and identify issues, including those that are concentrated among certain demographics. 

For example, if the analysis finds that concerns about flexibility are concentrated among female employees, Diversio might propose implementing “core hours,” which means not scheduling meetings during times when parents take kids to and from school.

Laura McGee, CEO and founder of Diversio, says AI also has the potential to minimize the impact of the interpersonal relationships that sometimes weigh too heavily into promotion decisions. “So often, advancement in companies is based on relationships and not work product,” she says. With AI assessments, workers can be judged based on what they produce, “instead of who they went to the baseball game with,” McGee says.

These tools don’t just provide more equitable opportunities for employees; they can also help executive teams that “don’t always have enough information” about their staff, says Jenn Tardy, founder and CEO of the DEI training and consulting firm Jennifer Tardy Consulting. “Using AI to identify people within your organization who could be ready for advancement can create more diverse internal pools of candidates for future opportunities,” she says.

… and How AI Can Undermine DEIB Progress 

AI tools are trained on extensive data, including data that can contribute to stereotyping or exclusion. Even the simplest tasks you entrust to AI will be influenced by those inputs. 

HR/TA leaders aren’t naive about this. According to the Indeed Global AI Survey, more than half (53%) say they’re concerned about bias in AI training data. “AI is learning from us. And the voices of us are not equitably distributed or represented,” says Andrés Tapia, Senior Partner and Global DE&I and ESG Strategist at the consulting firm Korn Ferry. 

A bar chart titled "Level of concern about bias in AI training data" displays how various demographics are concerned about AI training data based on Indeed Survey data. 66% of HR/TA leaders with disabilities are concerned while 52% of HR/TA leaders without disabilities are concerned. 63% of job seekers with disabilities are concerned while 52% of job seekers without disabilities are concerned. 67% of HR/TA leaders who are gay or lesbian are concerned while 65% of HR/TA leaders who are bisexual are concerned and 52% of HR/TA leaders who are heterosexual are concerned. 61% of job seekers who are gay or lesbian are concerned while 57% of job seekers who are bisexual are concerned and 54% of job seekers who are heterosexual are concerned.

Even something as straightforward as asking a generative AI tool to develop best practices for interviewing may be susceptible to bias. “How much of that is going to be influenced by the dominant group?” Tapia says. Depending on the role, those interview tactics could be unintentionally skewed toward groups that have historically been part of those interview conversations to begin with.

Another risk, Hardeman says, is that over-reliance on AI “can create complacency.” For instance, an AI tool may show that a company is hitting its DEIB goals without asking deeper questions about where people of color actually sit in the corporate hierarchy. 

Tips for Using AI Responsibly to Support DEIB Goals

To help ensure your organization is maximizing the benefits of AI when it comes to DEIB — and minimizing some of the pitfalls — there are a few steps you can take:

  1. Notice how AI vendors address DEIB concerns in their own organizations: Look for vendors that are willing to share information about their own diversity and impact. “Say, ‘What does your engineering team look like?’ If it’s all the same profile, I think that should raise some red flags,” McGee says.
  2. Don’t fall for “skill proxies”: Be wary of resume filtering tools that place too much emphasis on where someone went to school or what degree they received. These are “skill proxies,” Tardy says, which aren’t necessarily indicative of actual skills.
  3. Keep humans in the loop: AI tools are no replacement for human beings’ lived experiences, Hardeman says. These tools should be a guide for human judgment, not a replacement.

Ultimately, Hardeman says, AI can be a useful resource like any other, as long as it’s not viewed as the sole source of truth. “AI should enable you to make decisions,” she says, “not make the decisions for you.”

Hear What Experts Say About AI on "Here to Help"

In the Indeed video podcast “Here to Help,” Ellen McGirt, editor-in-chief of Design Observer, talks to Indeed CEO Chris Hyams about her efforts to amplify diverse voices in the design industry, AI’s impact on journalism, and more. Listen or watch.

Also on “Here to Help,” hear Hyams interview Dr. Safiya Noble, author of the book “Algorithms of Oppression,” about AI as a human rights issue. Listen or watch.

And be sure to check out the Best of “Here to Help” in 2023.