Recruiting Apologia: Artificial Intelligence Can’t Replace Recruiters

By Jim Durbin  |  Monday August 29, 2017

Category: Recruiting, Technology, Trends



The hot buzz around AI is the replacement of human workers with robots or AI. I’m not worried, for two reasons. The first is that AI isn’t really AI, and by the time it is, it will replace almost all jobs. The second is that the people programming machine learning don’t seem to understand recruiting.

For the purposes of this defense, I’ll use AI, machine learning, and computer screening interchangeably.

The data points that are gathered are very useful for a system, but I’m highly suspicious of the claims that machines can screen better than a human. These claims are based on, quite frankly, terrible screening processes. For a chat bot to work, it has to be fed the right information. Maybe I’m missing the good ones, but I’m not seeing any examples of excellent screening. I’m seeing poor screening practices replicated in these bots. An analogy would be a drop-down box on a job board asking that applicants have 2 years of experience. This sounds like a good idea, but in practice it has a value very close to zero. For AI to replace recruiters, it will have to actually replace recruiters, not just automate portions of the hiring process. Gathering data is only useful if it affects the outcome, so data collection by itself should not be counted as an advantage of AI.

This is intended as an apologia, a formal defense of my opinions. I believe that in the process, the action, and the skillset, recruiters cannot be replaced. The most likely scenario is augmentation for the collection of data and the reduction of paperwork. 

 

Recruiting is not the same as hiring. Its value lies in social proof and time saved.

The primary definition of recruiting is making an introduction between two individuals who are poorly trained to interview. The social proof of a recruiter introduction is useful in calming fear and creating certainty. In a mature recruiting model, the hiring committee receives screened candidates who have already been vetted. This means the hiring committee should not be asking basic questions of motivation, experience, and suitability.

*All people are poorly trained to interview or be interviewed. There is no formal process, no testing, and not enough repetition to produce good interviewers or good interviewees. Belief in superior interviewing skill for a task that constitutes a very small portion of time worked is delusional. An outcome of getting hired or hiring is simply not a good indicator of interview quality. It should also be noted that being a good interviewer (a recruiter) does not translate into being good at being interviewed.

A computer screen would have to demonstrate not only superior screening to the hiring committee, but also social proof. Passing a test is not the same as passing a human screen. Unless companies are willing to train managers to ignore their social conditioning, the recruiter introduction will be of more value than an AI screen. I should also note that many companies utilize referral systems and rank referrals highly in quality of hire. If AI were to replace recruiters, this would eliminate the belief in, and the practice of, referral-based hiring. After all, if social proof is less valuable than an AI screen, then a referral is less valuable than an AI screen.

 

Defense of AI is an attack on referrals, because recruiting is a form of referral-based hiring. 

Matching speech patterns

Another way humans generate comfort lies in connecting the rhythm of our speech patterns to our brain waves. Two humans speaking to each other literally reach a sweet spot in their conversation where their brains are functioning on the same bandwidth. A person cannot understand you unless their brain can match your speech, including tone, speed, words, and volume, to their experience. Again, humans are highly adaptable in listening to each other. Computers are uni-directional in this manner. The cues that tell us we are being understood are not present in an AI interface, and cannot be. To be successful, an AI at a minimum has to be able to adopt and adapt speech patterns to generate a good conversation.

In the absence of this skill, the candidate will be assigning a large portion of their conscious processing to thinking past the screen. Instead of exploring and discussing their value, they are seeking to tell a story that will survive the computer and create good marks in the eyes of an unseen human reviewer. 

 

Consistency is vital in the decision-making process 

A human being making a decision has several well-known triggers. Verbal and written statements made to another human being rank highly in terms of creating consistency. Typing answers into a computer is not a trigger. This principle is known as disinhibition. While engaging with an AI bot, the human is not making statements they feel compelled to live up to. Without the non-verbal cues of a conversation (including phone calls), a job-seeker is not making positive statements about the company, the interview, or the job. Those positive statements are a major cause of decision-making later in the process. Any human-computer interface would have to mimic a large portion of non-verbal cues, including facial tics, presence, breathing, rhythm, and mirroring, to generate a strong response from the candidate.

It is possible to program these cues, but the likelihood of mistakes due to what is known as the “uncanny valley” means this is not being pursued. If a robot interface is too human, we recoil. If it is not human enough, we don’t care what it thinks. Failing to understand behavioral science is a major flaw in AI systems. We don’t recognize how good humans are at screening each other, something that no AI can replicate.

Current AIs are sterile, voice recognition is bad, and translation of slang is nearly impossible.

An automated interface is, by definition, sterile. Research into human-like robots and avatars shows improvement in conveying certain kinds of information, but that information has to be carefully curated and applied to a working “AI interface” that has sufficient voice recognition software and around 40% of the facial reflexes of a human. No chat bot is doing that now because, quite frankly, it’s a different skill set than writing an AI logic engine.

The attempt to match language translation and dialects only works in a laboratory. It’s similar to showing off fancy software on a desktop configured specifically to run it. That you can make it work is not the same as testing it in the field. If human beings struggle to understand each other, with accents, lingo, mistaken phrases, and even the way in which we think, how could a computer be any different? When we look at voice-activated software, we forget that we have to train the software to understand us, or we have to fit into a comfortable middle ground of dialect.

The medium matters as well. Chat, text, voice, email, and the eventual computer interface don’t match up to expectations. The amount of logic necessary for the AI to learn requires a true AI, which again, is not what is offered today or in the near future.

 

We don’t know why we hire

Research into quality of hire is simply not conclusive. In order to replace a functioning part of the hiring process, we would have to better understand hiring. We are literally at the leeches-to-draw-blood stage in our understanding of why people make decisions. Confirmation bias and the role of empathy could be big factors in the success of an employee. In short: if enough people are involved and want the employee to succeed, their chances of succeeding are probably improved. Removing human contact at any stage could very well lead to disastrous hiring.

And worse, we could be masking the effects with “good data.” Team chemistry is something we can observe, the same way we can observe communication networks or social messaging. But no one has, as of yet, figured out how to create a network that improves on communication, and no one has figured out how to create high-performing teams.

 

Pretending that large amounts of data and reason can lead us to successful hires is a nice fiction we peddle to sell books and explain success. 

Screening is by definition one-sided, and susceptible to changes in the market

Finally, screening is a very one-sided view of the hiring process. Companies screen jobseekers, suggesting a one-sided power differential. In that design, companies get to choose what they like and don’t like. As the supply and demand of qualified workers rise and fall in tune with technology, economic health, and generational changes, the power differential swings back to the jobseeker, and screening is seen as demeaning and useful only to the company. This is always true in high-demand positions, where executives, top programmers, and top salespeople don’t feel the need to participate in screening processes.

AI screening has to be useful to the jobseeker, whether they get the job or not, or it will be seen as a net negative, accidentally leaving out top performers and instead only delivering minimally qualified candidates who are willing to be shepherded into a digital cattle call. 

 

Summary:

AI is fantastic at replacing poor processes. Those involved in paperwork, scheduling, basic screening, and process notifications can and will be replaced, but the gains in productivity are really the recovery of productivity lost to too much data. AI solves the problem of the digital application. It does little to solve the problem of recruiting. Recruiting is a human function that is often mistaken for data entry and collection. The industry will shrink, as process recruiters (mostly internal) are replaced by software. This is not a threat to recruiters whose primary purpose is contact with jobseekers.

