We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess.
— Laszlo Bock, quoted by Adam Bryant in "In Head-Hunting, Big Data May Not Be Such a Big Deal"
Google has moved away from conventional unstructured interviews and IQ-stretching brainteasers, like "how many golf balls can fit in a school bus?", toward a greater emphasis on "behavioral interviewing". As Bock put it in that same interview,
Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.
The evidence is fairly clear that limiting manager discretion and sticking with empirical results, such as test scores and well-vetted personality tests, leads to significantly better hiring decisions, as measured by how long candidates remain in the job, than going with your gut.
Strangely, this may not jibe with our sense of what a more humane hiring regime ought to be. But that is just our perceptions playing tricks on us. It does no one any good to indulge a deep bias about cultural fit, or a hunch that a candidate reminds you of your first boss. That is why, in the near future, we will likely hand hiring decisions over to AI, and good riddance to bad decisions.