Alooba Objective Hiring

By Alooba

Episode 47
Joe Willage on The Intersection of Data and Gut Instinct in Modern Hiring Practices

Published on 12/15/2024
Host
Tim Freestone
Guest
Joe Willage

In this episode of the Alooba Objective Hiring podcast, Tim interviews Joe Willage, Director of Data Science at Strivr.

In this episode of Alooba’s Objective Hiring Show, Tim interviews Joe about the complexities and paradoxes in modern hiring practices, especially within the tech and data sectors. They delve into the balance between objective data and subjective gut feelings, exploring the influences of AI tools like ChatGPT on the application process. Joe shares insights on structured hiring processes, technical and cultural fit evaluations, and the growing relevance of consistent feedback. They discuss the potential of AI to automate initial candidate screening and the importance of creating a robust, unbiased evaluation system. The conversation underscores the challenges and opportunities that arise when trying to harmonize data-driven and human-centric approaches to hiring.

Transcript

TIM: Joe, welcome to the Objective Hiring Show. Thank you so much for joining us.

JOE: Yeah, I appreciate it. Thank you for having me.

TIM: It's our pleasure. One area I'd like to start with is something I've thought a lot about over the past few years, which I personally find a bit confusing or interesting: in five years of running Alooba, I've spoken to many data leaders, many heads of data science and VPs of data analytics, people like yourself who, for a living, help businesses make data-informed decisions. That could be in sales, in operations, in marketing, in product, whatever it is. In the last 20 years, the rise of data and the increase in the number of decisions we can make using data have been astronomical. So I feel like we've all bought into the value of this decision-making process; however, what I find is that we tend to abandon a lot of that when it comes to hiring, and hiring descends into more of a gut-feel, intuitive vibe check. How much did I like this person? Let's go to the pub and do a pub test. Let's have a coffee chat. I feel like we move away from that whole data-driven, objective mindset when it comes to hiring people, even as data leaders. So I'm wondering if you have any thoughts on what I would view as a bit of an irony, but maybe there's something to it that I haven't appreciated.

JOE: I think there are things that we know as humans that are not captured in the data, right? We can design these types of evaluations, these types of tests, to be as robust as possible with everything that we can think of, but they're not going to capture everything, and we're often pairing our subjective take with some more quantitative data. I don't know if that's really wrong. The data is not going to tell the whole story in many instances, not just with hiring, so maybe that's not too bad. But we're also hiring someone that we'd be working with on a day-to-day basis, and we hire people that we would typically get along with and wouldn't mind interacting with, whether in an office or remote, across many, many meetings a week. So I think we're hiring for someone that we'd like to work with, and whether that's the right thing to do is maybe a little bit trickier, but I think that's some of the human nature that's part of it.

TIM: Do you feel like we're overconfident at all in our ability to intuit some of these things? Do we overplay the gut-feel elements, or are there things we could be measuring in the hiring process to unpack some of this gut feeling into something a little more objective, a little more quantifiable, even if it's not perfectly objective?

JOE: Yeah, we're probably overestimating, I would imagine. As hard as it is to look at the data, then you try to couple that with some sort of non-quantifiable intuition, right? How do you balance those two, and how do you give the proper weight to one versus the other? Really difficult. We talk a lot about culture fit, and maybe we should recognize that there are some things that aren't part of the standard interview process that we're going to evaluate, and start to quantify them and find ways to measure them. That way, at least you can recognize: here are the things that are part of my technical interview, here are the things that are part of my soft-skills interview. You start to have all these things mapped out in front of you, and then you can use them to evaluate all of your candidates, not just when you remember.

TIM: Yeah, I think that's such a game changer, a phrase which has maybe been overused, but I'll go with it anyway. It's such a game changer to even do that level of: let's have a scorecard; let's at least think at the start of the process about what exactly we're looking for; agree on that; and then have a scorecard to represent it. That feels like a big step forward from just, "Hey, let's start interviewing; send them to these different interviewers to get a general gut sense." At least if you've narrowed it down and agreed upon those criteria, I feel like that could help avoid what happens in a lot of hiring processes, which get completely derailed into arbitrary decisions at any stage.

JOE: Yeah, and I think it's tricky, too. You risk sounding robotic and making the interview process very cookie-cutter, but I don't know if that's a bad thing. I like to tell candidates up front when I'm interviewing them: okay, if I sound like a robot, it's because I am reading this question off a piece of paper, because I want it to be objective, so you're going to get the same types of questions that other candidates will get. Of course there will be follow-up discussions and questions that are going to be different, but we try to make the whole interview process very consistent, from who a candidate is interviewing with to what types of questions they're asked, along with a rubric for how we evaluate answers to different types of questions.

TIM: And it's so helpful once you're sitting back and comparing a set of candidates, which might be done in a discussion with other interviewers, instead of it being a free-form free-for-all of what you did and didn't like about a candidate, which can descend into bias immediately. You have numbers you can compare across the different criteria you've already agreed upon, so it helps center any conversation back to objectivity and rationality. If you didn't do that, you probably wouldn't realize how much you're missing out on, because a lot of the time, what I've noticed is that the feedback from interviewers is rarely off the charts. They can come up with reasonable comments about a candidate: "Oh, they're a little bit quiet," or "I felt like they're a bit introverted," "I felt like they weren't quite being fully honest with me," or "I'm not sure they're going to fit in with this team." All of those comments sound innocuous and seem reasonable on the surface, so unless you had on paper exactly what you're looking for, it's impossible to tie those comments from interviewers back to the actual criteria. So yeah, I feel like having this all written down and committed to is just such a dramatic improvement in making the hiring process as accurate as possible, and certainly fairer.

JOE: I think something else you mentioned, getting all of the interviewers in a room and having that discussion, is really helpful: being able to get the different perspectives in person. Reading interview notes is one thing, but hearing the other colleagues, the other interviewers, really talk about how they found the candidate in that interview, I think that's super, super enlightening and something that I try to do for every candidate. It really adds a lot of value.

TIM: And what about the evaluations by the interviewers in these different interviews? Are they done independently of each other? If you had, let's say, two interviewers in an interview, do they score a candidate independently? How do they come up with their ratings for those different criteria?

JOE: Yeah, so it's usually a series of interviews; some are sequential, some are maybe in parallel depending on scheduling, but each interview is designed to focus on one thing in particular. We have interviews for data manipulation, interviews that focus on collaboration, and interviews that focus on analytic execution and expertise, right? So each one of those interviews is designed for a very specific reason and has evaluation criteria and a rubric for evaluating each of the questions. Even if interviews are going on one before the other, the order doesn't really matter so much as the fact that all the interviewers know what they're looking for, they know what their questions are, and they know how to score the responses to those questions.
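
To make the structure concrete, here is a minimal Python sketch of the kind of per-interview rubric Joe describes; the focus area, questions, criteria, and 1-to-5 scale are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    criteria: str               # what a strong answer should cover
    score: int | None = None    # 1-5, filled in by the interviewer

@dataclass
class InterviewRubric:
    focus_area: str             # e.g. "data manipulation", "collaboration"
    questions: list[Question] = field(default_factory=list)

    def average_score(self) -> float:
        scored = [q.score for q in self.questions if q.score is not None]
        return sum(scored) / len(scored) if scored else 0.0

# Every candidate for the role is evaluated against the same rubric.
rubric = InterviewRubric(
    focus_area="analytic execution",
    questions=[
        Question("Walk me through a recent analysis end to end.",
                 "states assumptions, explores the data, quantifies impact"),
        Question("How did you validate the result?",
                 "sanity checks, peer review, follow-up metrics"),
    ],
)
```

Because every interviewer scores the same questions against the same criteria, candidates can later be compared on numbers rather than recollection, regardless of interview order.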

TIM: And that's obviously very helpful for the interviewer themselves, because they're not left thinking, "Oh my god, I've got this interview." Let's say it's a marketing leader who's got an interview with a data analyst; they're not suddenly trying to come up with their own criteria or their own questions. The fact that you've laid that out for them is making their life easier anyway, I would have thought.

JOE: Yeah, and the other interviewers are part of that creation process, right? I might come up with a few questions to start with, but then I'll ask them: Do you like those questions? Would you feel comfortable asking them? What else is important to you? Because it's not an arbitrary interviewer; it's the person who is the expert in that field, and they're interviewing from their perspective. I'll usually give some starter questions and ask them to add their own as well, so they feel comfortable throughout the interview.

TIM: You mentioned when we chatted the other week that you had overhauled your hiring process a couple of years ago. What I'd love to know is what you were doing, what prompted you to change, and also whether you're thinking about changing again, given the rise of AI and the different market conditions we have now compared to two years ago.

JOE: Yeah, we did a pretty large overhaul about two years ago. What prompted the change: we hadn't had a role open for maybe a year or so, then opened up a new role and were still using the interview process from 12 months prior. We started to realize that the interviews were going well and candidates were giving good responses, but the responses didn't really match the job description anymore. We had a take-home challenge that we issued to candidates, and the use case in that challenge wasn't really representative of the actual job; the challenges we were facing on a day-to-day basis weren't really reflected in it. So we overhauled the whole thing: assigned new focus areas for the different interviewers, assigned new interviewers where it made sense, rewrote all those questions, and revamped the take-home challenge entirely to be really representative of what that candidate would have to do on the job.

TIM: Yeah, and what about now, in the current market climate? Do you feel like it's still on point, or might it be in need of a refresh again?

JOE: Yeah, I think the take-home challenge is still relevant to the types of day-to-day work that candidates would have to do on the job. The interview questions are about things like communication, data storytelling, and overcoming challenges, and those are also still pretty relevant. When we start to see a gap between the types of responses we're hearing from candidates and what we would expect from someone actually on the job, that's when we'll revisit it and change what needs to change.

TIM: One thing we've heard a lot of in the last few months, particularly in markets like the United States and Europe, is that companies seem to be getting a very high volume of applicants through job boards. They're also saying that not only is the volume high, but the CVs themselves are looking increasingly similar to each other and increasingly "good", in inverted commas, in the sense that they match the job description, and maybe there's a sense that they're less truthful than they used to be, perhaps because they've been written or enhanced by ChatGPT. Is that further deviation from reality a pattern you've also noticed?

JOE: No, and because I love a cover letter, I would love to see more of them. As far as truthfulness, I think that's maybe a similar problem: if someone's going to be untruthful on the resume, it's probably the same with the cover letter. Maybe ChatGPT is writing it and they're more prone to untruthfulness; I'm not sure either way. I would love to see more of them because a cover letter tells me a little something more than I would get out of the resume, and ideally it will tell me what makes that candidate a unique fit for the role. It's easier to read, and I can hopefully see some personality in it. And if it's written by AI, I don't really care; I just want to know what makes you a unique fit for that role, and if you used ChatGPT to help with your sentence structure or to make you seem clever, great. At least you tried more than someone who came up with no cover letter at all.

TIM: And so you're focused on the end outcome, irrespective of how the candidate got there in terms of leveraging an AI tool to help them, which makes total sense. The feedback we've been getting from other people is that they feel like candidates might be exaggerating more than they used to. Maybe it's just that they've outsourced the development or optimization of the CV to an AI tool, and either they're not really scrutinizing the output that comes back, or there's a feeling of "I haven't lied on my CV because I didn't even write my CV; the AI tool wrote it," and they're dissociated in their heads from what's happening. Do you feel like that might end up being a challenge?

JOE: Yeah, I can foresee that being an issue. Hopefully we have candidates who are thoughtful enough that either they're writing the letters themselves or at least they're proofreading and editing them. It's really hard to fact-check. Eventually, when you get to phone screens and interviews, you can weed out fact from fiction, but at that point you might have spent a lot of resources and time, maybe hours into the interview process. So I do see that being a challenge, and hopefully there are ways we can identify it more upfront in the cycle.

TIM: My experience in data hiring, or to be honest in any hiring, especially for technical roles, is that the screening stage is very challenging, I think for a couple of reasons. One is that when CVs, and maybe cover letters and some application questions, come in, you have to take the candidate's word for what they're saying. It's not really validated data; it's their views, their opinions, their perception of themselves, and anything that relies on just taking someone's word for it is, I feel, almost doomed to be quite inaccurate. But then often the second step is similar, in the sense that for most organizations an initial phone screen with the talent team would be the first check in the process. The challenge there is that we have talent teams who are nontechnical, and they're expected to hire for roles that they themselves don't do, which I find is one of the major flaws in hiring. It's almost like we expect too much from talent teams in some sense, because if I were tasked with, for example, hiring a lawyer, I would have no idea what I'm looking for. To be completely honest, I don't understand the nuance between, say, a conveyancing lawyer and a contract lawyer, and the best I could do is ask a friend who's a lawyer for some tips, or, if someone gave me some guidance on what to look for, I could hack my way through. But honestly, all I would be able to do is descend into evaluating them on soft skills, or how much I like them, those kinds of things. I feel like those two initial screening steps have a lot of challenges, which make them quite inaccurate. Do you agree? Do you have a different perspective? I'd love to get your thoughts on that.

JOE: I agree, and I think it's an issue to some extent in the earliest phases, when a recruiter might be going through a resume to see whether a candidate is a good fit or not. But at that point it's more about the objective things that are on the resume: How much experience do they have? Did they at least list the tools or some of the skills that we're looking for, whether or not the recruiter has a deep understanding of what those are? How senior is this person? What kind of degree, if that's important? Things like that. So I don't think it's as much of an issue then. Where I think it becomes an issue is in the next stage, which is usually a phone screen with a recruiter, where the candidate might be using terminology the recruiter is just not familiar with, different technologies. For someone who's interviewing, it's probably second nature to talk about different tools; nowadays you hear everyone talking about the different LLM models they're using and all the different vendors and companies, and someone who's not spending a lot of time in that world, like a recruiter, might not know that vendor X is for skill Y or is this type of technology. There you might get into issues where you're not sure if some of these things are a match. What I find usually happens is the recruiter passes me a lot of notes and says, "Here's how I think the phone screen went, and here are all the notes I took; you tell me if we should move them to the next stage or not." Ideally, like you mentioned, you would be doing a lot of those early steps instead of the recruiter, so I find it's a little bit of both: the recruiter is doing those phone screens, but then I'm usually reviewing a lot of the notes and providing some input to go along with their phone screen as well.

TIM: What about cultural fit? This is something I've had such mixed feelings on over the years, because from my perspective, a challenge with cultural fit is that it is often not measured, although maybe it is somewhat measurable. I feel like it's the area of the hiring process most likely to descend into bias and unfairness, in a way that the technical evaluations maybe don't, because some of those things are just more inherently measurable. You can give someone a coding test; you can give them a technical interview and dig into the details of what they've done. But cultural fit has such a vibe feel to it that it's very easy for someone to evaluate a candidate along cultural grounds and say, "Oh, I don't think they'll be a good cultural fit here, because I just didn't get the sense that they would fit into our team culture," stuff that sounds fine and innocuous but is maybe just hiding someone's personal biases against that particular candidate. What are your thoughts on having cultural fit interviews? Are they essential? Are they a mess? Can we do them better? I'd love to get your overall thoughts on this.

JOE: I'd like to think that they're being done in good faith, not just to hide biases: to see if this person would really fit in, whether I would enjoy working with them and they would enjoy working with us. But yeah, that quickly gets you onto a slippery slope and is really tough to measure. Something I like that we've done at Strivr is flip that notion of cultural fit on its head, and we talk now in terms of cultural add. Is this candidate a cultural add versus a cultural fit? We don't need people that fit; we already have the people that are part of this company. We need people that are going to add something to the culture and bring something different that we don't already have, right? So we're always looking for additions, whether that's in the types of backgrounds, types of education, or anything else. It's less about fit and more about what's new that this person can bring to the table.

TIM: And I'm interested in how that manifests itself. For example, if a candidate in an interview was very direct, very argumentative, really assertive in their opinions, giving their two cents and giving feedback, and that was atypical for the culture, would that be perceived as a cultural add? How would that kind of thing come across? Is different good in that sense, or is different bad?

JOE: We also have a set of core values, not specific to the interview process but just as a company, and part of that is we want individuals who aren't afraid to voice their opinion, who aren't afraid to speak up, right? If you're in a meeting with six people and everyone's just nodding along, yes, when not everyone really feels that way, are you going to speak out? Are you going to be able to say something? I'd appreciate it in that sense. At the same time, we do need respect in the workplace, and especially in an interview, if you're yelling or talking over people or being disrespectful, then that's probably not going to be a great fit. I don't know if that's even a culture fit thing; that's more just: do you have common courtesy? Are you going to be respectful? Can you handle working with coworkers on a day-to-day basis?

TIM: Let's unpack this particular example in more detail: some level of directness. Australian Anglo-Saxon culture, I'd say, is quite indirect; people are rarely going to just tell you exactly what they think. There will normally be this layer of bullshit over what they're trying to say, whereas someone who's Dutch, or maybe American, in my experience is a little bit more direct, a little bit more forthright, a little bit more honest. I would view it as honesty; maybe that's a good way of putting it. What one person would view as honesty and transparency, someone else would view as rudeness. So is there again that chance that you could be selecting for someone from a particular cultural background, because what you're looking for is a proxy for their upbringing in a sense? Is it still one of those things with some level of danger to it, where in our efforts to try to improve diversity, sometimes we're almost going down the opposite path, and the things we're selecting for are actually preventing us from getting the very diversity we're after?

JOE: Yeah, that's quite possible. Here's where I think having more of a panel can really help, in talking with other interviewers. If the candidate has made it to the stage where they're having several interviews, at the end of that process all the interviewers can get together in a room, and hopefully your interview panel is made up of people with some diversity, and people can talk about their own backgrounds and provide their own take. Maybe I'm saying something like, "I found this person really argumentative," and then maybe someone from a different background says, "I didn't really notice that; maybe that's because here's my experience and my background." So I think having a forum to discuss those types of things is important, and again I'll place a lot of value on that group conversation after the interview stage.

TIM: What about feedback? I'd say the single biggest gripe of candidates in a hiring process is lack of feedback: getting ghosted, or getting useless feedback. What do you feel is a fair level of feedback to candidates? I assume it varies depending on the stage of the process they reached, and on what data you have, like what feedback is even possible to give. What are your thoughts on the current state of feedback, and is there any way we could improve it soon?

JOE: It's not good. It's not good right now. I think most of the feedback is, unfortunately, "We're not going to continue." I've even talked to folks who were very late in the process, the penultimate interview, or five rounds in; the interview didn't go well, and it's just an automated "Unfortunately, you're not the right candidate" or "We're moving forward with someone else at this time," and that's not great, right? The further you get, the more time you've dedicated, the more you deserve to know a little bit about what happened. Do you need to get all the way into the level of detail of "here's the question I asked, here's the answer you gave, and here's where it didn't match the rubric"? Maybe not that level of detail, but at least the theme of what went wrong. Was it something around not enough collaboration? Maybe you skipped an entire part of the process, like an exploratory data analysis, or didn't state assumptions, things like that. Responses that are more thematic, rather than getting into the nitty-gritty.

TIM: In your view, at the moment, are there particular barriers to providing the feedback? Is it reluctance? Is it concerns over legal issues? Is it data collection? What do you think is holding feedback back? Let's say at that interview stage where you in theory would have some meaningful feedback to provide: what are currently the main challenges?

JOE: Probably motivation and time, right? If you've decided that a candidate is not moving forward, there's not a ton of incentive for you or the recruiter or the hiring team to put even more time into that candidate. I do think it's the right thing to do, yes, but it's hard to incentivize people. Everyone's so busy doing their day-to-day jobs, and you say, "By the way, can you also give an explanation of what went wrong?" Or not what went wrong, but why you're not moving forward with that candidate. That's just one more thing for them to do, and it's not helping that role get filled at the end of the day. Tough to incentivize people, I think.

TIM: Yeah, I've certainly seen and felt that directly myself, especially when I was really focused on hiring and had a lot of interviews in one day. If my data collection and scoring rubric for those interviews wasn't perfect in real time and I had seven back-to-back interviews, by the time I got to that seventh one, the first person, who didn't perform really well, is almost a distant memory, and my personal motivation to follow up with them has immediately dropped, especially because you're focused on the winners of the process, let's say. I find also that the efforts to provide that feedback are often quite clunky, because you'd have to have the feedback available in a digestible form immediately, whereas at least when I've done interviews, it's normally in slightly strange notes that made sense at the time to me and maybe no one else. Maybe they're also too direct or harsh; some of it is a train of thought from during the interview. So there's probably some intermediate layer needed to synthesize that feedback into something more digestible, which previously would have been manual. I wonder whether this would be a great use case for AI: sitting on top of those bits of feedback and putting them in a format that could be shared with the candidate. What do you think?

JOE: That's an awesome use for AI, right? I think that's one of the few parts of the interview process where you can really rely on AI, for writing emails and summarizing. My notes tend to be really chain-of-thought and random, and if I were to send them to somebody, even someone on my team, they would say, "I don't know what this is." So I think summarizing notes is a good starting point, but it's probably not the end either. There's still likely a human in the loop that's important, reviewing: okay, here was the summary; here's what was missed; here's what it got completely wrong; here's what it hallucinated and just added that I didn't even write at all. So AI definitely could be a great tool for summarizing handwritten notes with very little prompting, but there's at least some level of effort to customize it, check it for accuracy, and maybe personalize it just a little. But yeah, that can take something that would previously have been 30 minutes, organizing those notes and writing them into an email, and turn it into three minutes. That's a big win.
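
As a rough sketch of that summarize-then-review loop, using the OpenAI Python client as one example (the model name and prompt wording are assumptions, not anything from the conversation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_feedback(raw_notes: str) -> str:
    """Turn chain-of-thought interviewer notes into a candidate-safe draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable LLM would do
        messages=[
            {"role": "system",
             "content": ("Summarize these interview notes into brief, "
                         "respectful, thematic feedback for the candidate. "
                         "No scores, no names, no speculation.")},
            {"role": "user", "content": raw_notes},
        ],
    )
    return response.choices[0].message.content

draft = draft_feedback("shorthand notes scribbled during the interview...")
print(draft)  # a human still reviews and edits this draft before sending,
              # checking for omissions and hallucinated content
```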

TIM: And maybe the trade-off is not even that, because at the moment the choice is no feedback in a lot of cases, or effectively no feedback, just a yes or no, you made it or you didn't, which is pretty useless, versus maybe imperfectly accurate AI-summarized feedback. So maybe that's what we should be focused on: why not have good-enough feedback that's maybe got some holes in it, versus no feedback? I suspect talent teams or businesses maybe wouldn't see it that way; they'd be a bit more risk-averse and concerned about those one-in-a-hundred scenarios where it hallucinates and says something completely off-topic, because that is the kind of thing a candidate is going to put onto LinkedIn or Glassdoor pretty quickly. But maybe if it was delivered in a way that said, "By the way, this is an AI summary of some human-written notes; it's not perfectly accurate," maybe there's going to be an expectation from people, since we know that AI is not perfectly accurate, that we won't hold it to the same standards as a human. So maybe that's where we'll end up: at least it's good enough, and it's better than nothing.

JOE: Better than nothing. What might be difficult is just getting to the point where we act on it, right? Even if it takes three minutes, and we agree on three minutes, sure, I can do that, but actually putting that into practice, especially when we're so hardwired to just send the automated email, we have a process, right? It's disrupting the process for something new, and overcoming that hump might be its own challenge.

TIM: Personally, I feel like where we're going to get to, in a very short period of time, is that humans will be increasingly less involved in the hiring process, almost step by step through the funnel. I'll give you my view: I feel like humans do more harm than good in hiring, and AI is just so painfully close to solving a lot of this, especially the screening steps: taking CVs, application questions, and technical tests, matching those against job criteria, and then maybe some basic introductory interviews as well, where it's almost an information-collection process. Even in later stages, with human-to-human interviews, right now we have the technology to do the transcription accurately. You could be scoring against those questions; maybe the human doesn't even have to do the scoring anymore. That data is already sitting there and could be shared with numerous people, and you could use it for decision-making as well, to move someone to the next stage of the process or not. I feel like the only gap at the moment is the software to be built on top of that AI to automate all of this. We're just painfully close; at least the technology is there, and we just need to build the applications now. Do you view it that way, or are we further away from that than we think, or than I think, anyway?

JOE: I think we're very close. Like you said, the data is there, right? It's just building the right prompting, building the right application on top of it to get what we need out of it, and I would not be surprised if it's been done already and we just don't know about it, at companies that have the resources to do that. But I think we're close, and it's also like anything AI-related: it could be a little dangerous, so let's not put all our eggs in that basket. Some questions, even those with a rubric, are just by their nature more open-ended and might take human evaluation to say: you know what, that didn't exactly fit the rubric we wrote on paper, but I see where they were going; it was close enough, or maybe they interpreted the question a little differently and answered it a little differently, so I'll have to adjust and make a new pseudo-rubric based on how they answered and use that to score. So I think there will be instances where AI might be a really good tool but not the decision-maker, right? There should still be a human in the loop for making the ultimate decisions and reviewing for accuracy.
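
Continuing the earlier sketch, rubric-assisted scoring of a transcribed answer might be wired up like the following; the model name and JSON shape are again assumptions, and per Joe's point the output is a draft for a human reviewer, not a decision:

```python
import json
from openai import OpenAI

client = OpenAI()

def score_answer(question: str, answer: str, rubric: str) -> dict:
    """Draft a rubric-based score for one transcribed answer; a human
    reviews the rationale and makes the actual decision."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Candidate answer: {answer}\n"
                f"Rubric: {rubric}\n"
                "Return JSON with 'score' (1-5) and 'rationale'. Note where "
                "the answer deviates from the rubric but may still be a "
                "reasonable interpretation of the question."
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)
```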

TIM: But it will also be interesting to see what happens with other factors, like legislation. I know, for example, that New York City has its automated employment decision tools legislation, or something like that, which basically means you can't, or have limited ability to, make automated employment decisions. If you wanted to use a tool to make any kind of decision, it needs to be audited and the audit published with the job posting; you need to do a lot of preparatory work to be able to use such tools. And I feel like there's a lot of fear around AI, which is completely justified, but I would argue that in the hiring context, hiring is so broken in so many different ways that it's hard to imagine how AI would make it worse. We've just laid out some of these issues, all the biases that can creep into every stage of the process, having to read hundreds of CVs manually; I feel like an AI could do a better job at almost every step of the process. So the legislation, which is well intended to prevent bias, maybe it's having the opposite effect, because the people who've written it maybe don't understand the current bias and how bad it is. That might be a barrier to these tools being used at scale quickly, as opposed to a technological barrier or a data barrier, which I feel is almost, if not fully, solved. Could you see that being a challenge, or will companies just circumvent it somehow, perhaps?

JOE: Yeah, I think it's a challenge, but we have to remember that AI is an artificial intelligence; it's really just trained on all of the text out there that was written by humans, and it has the same biases as the humans who wrote the text it was trained on, right? So it's maybe not a one-to-one but a pretty similar mapping; the biases that we have are some of the same biases that are inherent in AI, and a lot of these LLMs reflect that. Does that mean we should ignore it and say, let's just use the LLMs, we know they're going to be biased and that's okay? No, I think we probably still need to audit, and there's still some work to do. I believe they're getting better; will they get to the point of being objective? Hopefully, but I think that'll be hard to tell.

TIM: AI sometimes feels a bit like magic when you see it and interact with it. If you had the proverbial magic wand, Joe, how would you fix hiring? Is there any particular thing you would love to click your fingers at and suddenly have magically solved?

JOE: Okay, I would love to see all resumes structured in the same way. I talked about creativity in the cover letter; great, that's like a paragraph or two, I can read that. But resumes are so wordy and have so many terms. They're so long, and all of them are different from one another. I would really just love everyone's resume to look the same so I could get oriented to it and know: here's the project you worked on, here are the tools that were used, here's the impact it had, here are the teams you worked with; and next resume, here's the project you worked on, here are the tools you used, here are the people you worked with. Then you could make more of a comparison like that, because resumes are so different that it's hard to compare them when you need to, and that's where I think, again, the cover letter could really help someone stand out.

TIM: So, yeah, you want that structured ability to compare them. I guess that's just the same as the rest of the hiring process you laid out, where you have a consistent set of questions and consistent evaluation criteria. If the CV itself was laid out in a consistent way, you could more easily do those comparisons across those different things, I guess.

JOE: Yeah, or at least have the resume laid out that way, and then let the cover letter be a chance to express some more creativity, more individuality.

TIM: One final question, Joe: if you could ask our next guest on the Objective Hiring Show one question, what would it be?

JOE: I would love to know: if they could design something to identify really high-potential talent, really high-potential candidates, what metric would that be? What sort of objective metric would they use? And is there a way you can guarantee that it's objective and not biased? Because that would be, I don't know if it's a magic bullet, but something close to it.

TIM: Because, yeah, the challenge we have is that evaluating someone's current skills, at least technical skills, is pretty straightforward, and current soft skills are maybe a little more subjective, but you can do it. But how do you map out where they could be? That would be amazing; that would be worth its weight in gold, I should think.

JOE: Yeah, if you've got any ideas, I'm all ears.

TIM: The only thing I could point to is maybe the obvious: the best predictor of someone's future performance is intelligence, so something like an IQ test is what a psychometric psychologist would point to as the best predictor. But it's only one factor, and I think it predicts 20 to 30 percent of the end outcome. So maybe that's good enough, but it's certainly not the full picture. One idea we've played with in the past, and something that maybe AI could also help measure, is the rate of change in someone's career to date. I feel like there's a very big difference between someone who's been promoted four times in a year and someone who's been promoted four times in eight years. So maybe that's a career trajectory metric that could be extracted from a CV or LinkedIn profile and might give an extra factor, because if they'd been promoted a lot, maybe that indicates they've got a high growth mindset or something. Beyond that, I wonder if we're getting into brain-scan territory: you could scan someone's brain every few months to see the number of new neurons created; maybe the areas of the brain that are growing indicate they're exposing themselves to new risks and new learning opportunities. I don't know if there's...

JOE: Now we're talking. Yes, how much will that cost us? I would love one of those. Can we get a time machine while we're at it as well? Just see where this person is in 12 months, and then go to that...

TIM: Yeah.

JOE: ...day, and would I hire that person?

TIM: Joe, it's been a great conversation today. Thank you so much for joining us and sharing all of your wisdom and insights on hiring.

JOE: Yes, I appreciate it. It was a fun conversation. Thanks, Tim.