The AI Chatbot Will Hire You Now

Smart HR bots can ignore a job applicant's gender, age, and ethnicity. But there's no such thing as bias-free data.

Eyal Grayevsky has a plan to make Silicon Valley more diverse. Mya Systems, the San Francisco-based artificial intelligence company that he cofounded in 2012, has built its strategy on a single idea: Reduce the influence of humans in recruiting. “We’re taking out bias from the process,” he tells me.

They’re doing this with Mya, an intelligent chatbot that, much like a recruiter, interviews and evaluates job candidates. Grayevsky argues that unlike some recruiters, Mya is programmed to ask objective, performance-based questions and avoid the subconscious judgments that a human might make. When Mya evaluates a candidate’s resume, it doesn’t look at the candidate’s appearance, gender, or name. “We’re stripping all of those components away,” Grayevsky adds.
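Grayevsky didn’t share Mya’s internals, but the stripping step he describes is easy to picture in a few lines of code. The sketch below is purely illustrative, with hypothetical field names rather than Mya’s actual schema: identity-revealing fields are dropped before any evaluation happens.

```python
# Illustrative sketch of "blind" resume screening; field names are hypothetical.
IDENTITY_FIELDS = {"name", "photo", "gender", "age", "date_of_birth", "address"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of the resume with identity-revealing fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTITY_FIELDS}

resume = {
    "name": "Jordan Lee",
    "gender": "F",
    "education": "BS, Computer Science",
    "years_experience": 4,
    "skills": ["python", "sql"],
}
print(anonymize_resume(resume))
# {'education': 'BS, Computer Science', 'years_experience': 4, 'skills': ['python', 'sql']}
```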

Though Grayevsky declined to name the companies that use Mya, he says that it’s currently used by several large recruitment agencies, all of which employ the chatbot for “that initial conversation.” It filters applicants against the job’s core requirements, learns more about their educational and professional backgrounds, informs them about the specifics of the role, measures their level of interest, and answers questions on company policies and culture.
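That first-pass filter can also be pictured in miniature. The requirement structure below is invented for illustration; a real screen would be far richer, but the principle is the same: every candidate is checked against the same explicit criteria.

```python
# Toy version of an initial screening filter; the requirement fields are invented.
def meets_core_requirements(candidate: dict, requirements: dict) -> bool:
    """Apply the same explicit criteria to every candidate."""
    if candidate["years_experience"] < requirements["min_years_experience"]:
        return False
    return requirements["required_skills"].issubset(set(candidate["skills"]))

requirements = {"min_years_experience": 2, "required_skills": {"python", "sql"}}
candidate = {"years_experience": 4, "skills": ["python", "sql", "spark"]}
print(meets_core_requirements(candidate, requirements))  # True
```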

Everyone knows that the tech industry has a diversity problem, but attempts to rectify these imbalances have been disappointingly slow. Though some firms have blamed the “pipeline problem,” much of the slowness stems from recruiting. Hiring is an extremely complex, high-volume process, where human recruiters—with their all-too-human biases—ferret out the best candidates for a role. In part, this system is responsible for the uniform tech workforce we have today. But what if you could reinvent hiring—and remove people? A number of startups are building tools and platforms that recruit using artificial intelligence, which they claim will take human bias largely out of the recruitment process.

Another program that seeks to automate the bias out of recruiting is HireVue. Using intelligent video- and text-based software, HireVue predicts the best performers for a job by extracting as many as 25,000 data points from video interviews. Used by companies like Intel, Vodafone, Unilever, and Nike, HireVue’s assessments are based on everything from facial expressions to vocabulary; they can even measure such abstract qualities as candidate empathy. HireVue’s CTO Loren Larsen says that through HireVue, candidates are “getting the same shot regardless of gender, ethnicity, age, employment gaps, or college attended.” That’s because the tool applies the same process to all applicants, who in the past risked being evaluated by someone whose judgment could change based on mood and circumstance.

Though AI recruiters aren’t widely used, their prevalence in HR is increasing, according to Aman Alexander, a product management director at the consultancy firm CEB, which provides a wide range of HR tools to such corporations as AMD, Comcast, Philips, Thomson Reuters, and Walmart. “Demand has been growing rapidly,” he says, adding that the biggest users aren’t tech companies but rather large retailers that hire in high volumes. That suggests the main attraction of automation is efficiency rather than a fairer system.

Yet the teams behind products such as HireVue and Mya believe that their tools have the potential to make hiring more equitable, and there are reasons to believe them. Since automation requires set criteria, using an AI assistant forces companies to be conscious of how they evaluate prospective employees. In a best-case scenario, these parameters can be constantly updated in a virtuous cycle, in which the AI uses the data it collects to make its process even more bias-free.

Of course, there’s a caveat. AI is only as good as the data that powers it—data that’s generated by messy, disappointing, bias-filled humans.

Dig into any algorithm intended to promote fairness and you’ll find hidden prejudice. When ProPublica examined police tools that predict recidivism rates, reporters found that the algorithm was biased against African Americans. Or there’s Beauty.AI, an AI that used facial and age recognition algorithms to select the most attractive person from an array of submitted photos. Sadly, it exhibited a strong preference for light-skinned, light-haired entrants.

Even the creators of AI systems admit that AIs aren’t free of bias. “[There’s a] huge risk that using AI in the recruiting process is going to increase bias and not reduce it,” says Laura Mather, founder and CEO of AI recruitment platform Talent Sonar. Since an AI is dependent on a training set generated by a human team, it can promote bias rather than eliminate it, she adds. Its hires might “all be smart and talented, but are likely to be very similar to one another.”

And because AIs are being rolled out to triage high-volume hires, any bias could systematically affect who makes it out of a candidate pool. Grayevsky reports that Mya Systems is focusing on sectors like retail, “where CVS Health are recruiting 120,000 people to fill their retail locations, or Nike is hiring 80,000 a year.” Any discrimination that seeps into the system would be practiced on an industrial scale. By quickly selecting, say, 120,000 applicants from a pool of 500,000 or more, AI platforms could instantaneously skew the applicant set that makes it through to a human recruiter.
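A toy simulation makes the scale problem concrete. The numbers below are invented and model no real vendor; they only show how a small, systematic scoring penalty against one group turns into thousands of lost slots when a triage system picks 120,000 candidates from 500,000.

```python
# Invented numbers: how a small scoring bias compounds at triage scale.
import random

random.seed(0)
POOL, SLOTS, PENALTY = 500_000, 120_000, 0.05  # hypothetical values

applicants = [
    {"group": "A" if random.random() < 0.5 else "B", "score": random.gauss(0, 1)}
    for _ in range(POOL)
]
for a in applicants:
    if a["group"] == "B":
        a["score"] -= PENALTY  # a small, systematic penalty against group B

selected = sorted(applicants, key=lambda a: a["score"], reverse=True)[:SLOTS]
for g in ("A", "B"):
    print(g, sum(1 for a in selected if a["group"] == g))
# Even a 0.05-point penalty shifts thousands of selections from B to A.
```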

Then again, that huge capacity has a benefit: It frees up human recruiters to focus their energy on making well-informed final decisions. “I’ve spoken to thousands of recruiters in my life; every single one of them complains about not having enough time in their day,” Grayevsky says. Without time to speak to every candidate, gut decisions become the norm. If AI can handle the volume, it might also give recruiters the time to move beyond snap judgments.

Avoiding those pitfalls requires that engineers and programmers be hyper-aware. Grayevsky explains that Mya Systems “sets controls” over the kinds of data Mya uses to learn. That means Mya’s behavior isn’t generated from raw, unprocessed recruitment and language data, but from data pre-approved by Mya Systems and its clients. This approach narrows Mya’s opportunity to learn prejudices in the manner of Tay—a chatbot that Microsoft released into the wild last year and that quickly turned racist, thanks to trolls. It doesn’t eradicate bias, though, since any pre-approved data still reflects the inclinations and preferences of the people doing the selecting.
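What “setting controls” might look like, mechanically, is a gate in front of the training data. The sketch below is an assumption about the general shape of such a control, not Mya Systems’ actual pipeline: only examples from approved sources, free of blocked terms, ever reach the learner.

```python
# Hypothetical data-approval gate; source names and blocklist are placeholders.
APPROVED_SOURCES = {"client_faq", "hr_policy_docs", "curated_dialogues"}
BLOCKED_TERMS = {"placeholder_slur"}  # stands in for a real blocklist

def is_approved(example: dict) -> bool:
    """Admit only examples from approved sources with no blocked terms."""
    if example["source"] not in APPROVED_SOURCES:
        return False
    return not any(term in example["text"].lower() for term in BLOCKED_TERMS)

raw_examples = [
    {"source": "client_faq", "text": "What benefits do you offer?"},
    {"source": "open_web_scrape", "text": "unvetted chatter"},  # rejected
]
training_set = [ex for ex in raw_examples if is_approved(ex)]
print(len(training_set))  # 1
```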

This is why, rather than eliminating biases, AI HR tools might perpetuate them. “We try not to see AI as a panacea,” says Y-Vonne Hutchinson, the executive director of ReadySet, an Oakland-based diversity consultancy. “AI is a tool, and AI has makers, and sometimes AI can amplify the biases of its makers and the blindspots of its makers.” Hutchinson adds that for these tools to work, “the recruiters who are using these programs [need to be] trained to spot bias in themselves and others.” Without such training, human recruiters just impose their biases at a different point in the pipeline.

Some companies using AI HR tools are wielding them expressly to increase diversity. Atlassian, for example, is one of the many customers of Textio, an intelligent text editor that uses big data and machine learning to suggest alterations to a job listing that make it appeal to different demographics. According to Aubrey Blanche, Atlassian’s global head of diversity and inclusion, the text editor helped the company increase the percentage of women among new recruits from 18 percent to 57 percent.
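The mechanism is straightforward to sketch. The word lists below are a small illustrative sample in the spirit of published research on gender-coded job-ad language (Gaucher, Friesen & Kay, 2011); they are not Textio’s actual model, which draws on far larger data.

```python
# Toy gender-coded-language check; word lists are a small illustrative sample.
MASCULINE_CODED = {"aggressive", "competitive", "dominant", "rockstar", "ninja"}
FEMININE_CODED = {"collaborative", "supportive", "interpersonal", "nurturing"}

def flag_coded_words(listing: str) -> dict:
    """Return the gender-coded words found in a job listing."""
    words = set(listing.lower().split())
    return {
        "masculine-coded": sorted(words & MASCULINE_CODED),
        "feminine-coded": sorted(words & FEMININE_CODED),
    }

print(flag_coded_words("We need a competitive rockstar with a collaborative streak"))
# {'masculine-coded': ['competitive', 'rockstar'], 'feminine-coded': ['collaborative']}
```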

“We’ve seen a real difference in the gender distribution of the candidates that we’re bringing in and also that we’re hiring,” Blanche explains. One of the unexpected benefits of using Textio is that, on top of diversifying Atlassian’s applicants, it has made the company more self-aware about its corporate culture. “It provokes a lot of really great internal discussion about how language affects how our brand is seen as an employer,” she says.

Ultimately, if AI recruiters result in improved productivity, they’ll become more widespread. But it won’t be enough for firms to simply adopt AI and trust in it to deliver fairer recruitment. It’s vital that the systems be complemented by an increasing awareness of diversity. AI may not become an antidote to the tech industry’s storied problems with diversity, but at best it might become an important tool in Silicon Valley’s fight to be better.