In this episode of the Alooba Objective Hiring podcast, Tim interviews Guru Singh, Director of Data Science at BranchLab
In this episode of the Objective Hiring Show, host Tim converses with Guru, the Director of Data Science at BranchLab. They discuss Guru's journey from India to the U.S. and his experiences in the fast-paced startup environment. The core theme of the conversation revolves around the importance of a hunger for success and continuous learning. Guru shares insights into what he values in candidates, emphasizing self-driven learning and a growth mindset. The duo delves deep into the role of AI in hiring, its potential to reduce human biases, and its current pitfalls, highlighting examples such as Amazon's failed AI hiring tool. They touch upon the importance of feedback and transparency in the hiring process, and Guru offers his perspective on the balance between immediate hiring needs and long-term growth potential in employees. The episode concludes with a discussion on the future of AI in recruitment and the importance of unbiased and data-driven hiring practices.
TIM: We are live on the Objective Hiring show. Today we're joined by Guru. Guru, thank you so much for joining us on the show today.
GURU: Hey, thank you, Tim. Thanks for inviting me.
TIM: And where I always love to start with guests is just to hear a little bit more about yourself. Who are we talking to today?
GURU: Hey folks. My name is Guru Singh, and I go by Guru. I'm currently working as the Director of Data Science for a startup called BranchLab. I've been working in the field of data science for almost seven years now, from startups to big tech like Apple and EchoStar, and I believe in providing scalable solutions that have impact.
TIM: And tell us a little bit more about BranchLab. What's that company all about?
GURU: So BranchLab is about ID-less pharma marketing. We are the first solution in the United States providing ID-less marketing for the pharma sector, which means we do not know who the individuals are. So it's the most ethical way to market medication to customers, and it's based on all the statistical and machine learning models that are native to our system.
TIM: Nice, and how do you enjoy the startup life? It can be a little bit chaotic at times, but also kind of fun as well, I find.
GURU: Oh, definitely. It was a bit of an experience, because I started my career with a startup itself, but nothing prepared me for BranchLab. It's a really fast-paced environment, and it's about taking the initiative. Ownership is a bigger part of startup life than corporate life. So all in all, it's been an amazing experience, and so far I'm loving it. One thing I value most about people is their hunger for success. It's not about what you have achieved and being satisfied with it, because life is a journey and there's always something out there to learn, something out there to achieve. So it's constant progress, and I've tried to implement that in my own life, be it data science or powerlifting. I also compete in USAPL, so I'm always chasing numbers, man.
TIM: That's good. So I'm based in Sydney, and Australian culture is pretty different from American culture. In my mind, at least, the kind of character you're describing feels quite American in a sense: that kind of go-getter who, even though they've won once, wants to win again. A kind of hunger, particularly for business. Is that a fair characterization? Is it actually true? Is it really the land of opportunity in that way?
GURU: Definitely. That's why I guess I'm here. I moved from India like six years ago, and I've been loving it, because it's all about the mindset switch and the hunger to achieve more and get better. But at the same time, I believe there should be a good balance between work life and your personal life. So once you find your zen, or ikigai, I feel like you're just golden.
TIM: And is your zen powerlifting? Is that part of it?
GURU: I would say using data to solve problems. I've involved that in my powerlifting program and in optimizing my day. Data fascinates me so much, and everything around us has data, so everything I do fascinates me.
TIM: That's great. And what is it about the data that appeals to you? Is it the fact that you suddenly see patterns you otherwise wouldn't see if you didn't have the data, and you can actually see that improvement based on reviewing and optimizing your performance?
GURU: Oh yeah. So one thing about data: it never lies. It's going to spit the facts no matter what, and a good data scientist or a good data professional should look at data for what it is. At that point, there's nothing positive or negative about the data. It's always, I would say, a hint, for lack of a better word, but it shows you the journey and how you need to pivot your business or your personal life. It's like a benchmark that's been set: now you know how you performed in your personal life or your professional life, and then you grasp that data and optimize your way ahead.
TIM: Have you ever felt the sensation that the data is sometimes almost too confronting, especially when it's about you personally? You're tracking personal metrics about powerlifting and other things. Have you ever found, oh wow, that number's not as good as I hoped? Has it ever had an effect on your thought process?
GURU: Yes. I would be wrong if I said it never affects you, as we are emotional beings, but it's more about looking at the brighter side in all scenarios. I try to interpret it like, oh, I figured this thing out, maybe with my powerlifting, or with going out partying: I've come to know that it ruins my sleep for the next three days. Even though I won't stop partying, I have this tool that helps me analyze that. It's about realizing certain things and then optimizing towards them. You make sure you have less workload the next week, so you get to enjoy your party life and also be able to perform. So yeah, it's about finding balances, but you still get pushed by reality. Reality is out there, so no matter how you take it, it's not going to change unless you work towards it.
TIM: I agree completely, and I personally feel like, in general, companies don't use enough data when it comes to recruitment and hiring. If the typical company and hiring manager sat on a spectrum from data-driven to intuition-based, I feel like most people are further towards the intuition side, even analytics leaders, which is something that's surprised me over the past six years. I'm always trying to think about what the reluctance to use data is. Is it because it's about people, so there's almost an ick or a reluctance in a way that there wouldn't be for marketing, sales, or operations, things that are almost decoupled from the person? There's this weird feeling we have when we start to say, well, this person's objectively better than this one along these dimensions; we feel like we can't reduce humans to those numbers or something. What do you think?
GURU: So I think, like with any problem, we need to identify the root cause. And I think the biggest root cause in this whole hiring process is that we are social creatures. We seek acceptance, and we also provide acceptance. So the emotional part plays a really strong role when looking at candidates, and surely there are some biases. In a perfect world, I would want to remove all the biases, but sadly it's an ongoing process; nobody's free of bias. So there are a lot of biases that affect the hiring side. But one thing I would like to bring up is that if a person fits well in the company culture and has the basic skills to learn, I think no candidate is a bad candidate, because skills can be taught; company culture, company values, and team values cannot be taught.
TIM: You mentioned that ability to learn, that growth mindset as some people would call it, being so important. If I were to summarize the conversations I've had on this podcast, I reckon that would be the single biggest factor people are now thinking about when hiring someone, even more so than measurable technical skills: that growth mindset, willingness, and ability to learn. Do you have any way to coax that out in an interview? How can you really tell who has that and who doesn't? Any suggestions?
GURU: Yes. So I'll look for certain points, and this is just my personal watchlist. First of all, the biggest thing I would look for is self-driven learning. I don't care about what kind of schools you have been to, but if you had taken courses that were not part of your curriculum or you were a self-taught professional, I would respect that. It shows you did it out of passion, not out of curriculum. Then there's how excited they are about the work and the job description itself. Because we are data professionals, we cannot shut up about data. You've been in the field for, like, six years; I know you've previously managed teams, and now you're heading this startup. We can talk about data all day, and it won't be a problem. So I tend to look for candidates who are excited about talking about their work and who ask good, pinpointed questions about the job; that shows they're already taking initiative. And also coachability, in the sense that I would purposely ask a question I know they don't know the answer to, and I'm looking for an answer like, "Hey, I don't know that, I can get back to you after some research," or "I would love you to explain it to me on the call." No interview is perfect. There's no one benchmark, no one model that fits all, like in machine learning. So it's an ongoing conversation where you can gauge the initiative in the candidate.
TIM: Yeah, that's an interesting tactic, to ask them something that you know they don't know. And when you've asked that, what would be an example of not a great answer, where they try to bluff and pretend they know, for example, or they just can't handle it and don't have the humility to admit that they don't?
GURU: Yeah, so personally, when I was interviewing, I used to have this five-second rule, and obviously we all use tricks. This is a good trick for all the aspiring job applicants and those who are looking for jobs: always ask the interviewer to repeat the question when you're not sure about the answer, because that gives you another five seconds to process. But after those five seconds, if I was still not sure about the answer, it's always better to accept that fact and try to get a different question from them, one you're more comfortable talking about. That's something I've noticed with candidates: they think they should know everything, which is not the case. Even leaders have limited knowledge; it's collaborative knowledge in the industry, because you work as a team, not as an individual, and you don't have to know everything. So it's about accepting that fact and not trying to push out words that don't make any sense. People should avoid that; that's the worst answer, and not one I expect from good candidates.
TIM: Yeah, I agree completely. It's an especially bad tactic if, as a candidate, you're being interviewed by an expert. I can maybe imagine bluffing your way through a talent acquisition or HR interview, but if you're interviewing with your manager, you know they know what they're asking about and you don't, so it's maybe not a great idea to just wing it, because that's going to backfire. You're going to look pretty silly, I think.
GURU: Definitely, and it's just the amount of pressure the candidates have. Sometimes they know the answer, but they're still unsure about it. So, like, I wouldn't call anyone out, but I would just say it's like following the five-second rule, and if you are not sure after those five seconds, it's better to admit to that. And interviewers are smart enough; they already have backup questions, so you'll be fine.
TIM: You also mentioned the appealing candidate trait of having proactively gone into the field themselves: it's not just something they had to do as part of their degree, but something they took upon themselves. I can remember one candidate we hired a few years ago as an engineer who fit into this category. He was doing, I think, an electronics degree at a university here in Sydney. He'd actually dropped out of that and done, I think, a four-week web development bootcamp, and in just those four weeks, plus some work on his own, he had managed to learn enough to ship an app into the iOS store. It was some kind of music aggregation app, something that hooked into Spotify, looked at your data, and then gave you recommendations, something like that, if I remember correctly. We interviewed him, and he came and showed us the app, and he was really proud of it, which was cool. For me, that spoke volumes, because he'd put in the effort to go off and do that himself and actually get something shipped: a working, live product. Not the most complicated one, but it was out there, and that's amazing. Nobody asked him to do that. He didn't have to do that. He clearly wanted to do it. And I thought, well, that's great. All he has to do is that every day in our company with other features, and that's the job as an engineer, really: shipping stuff again and again and again. So, yeah, I completely agree with you. Looking for those kinds of characters is really great.
GURU: Yep.
TIM: One thing we should definitely discuss is, of course, AI, particularly in the hiring context, but first I'm interested to hear about your personal use. Have you started to use ChatGPT in particular, these kinds of large language model products, for anything in particular, and if so, what?
GURU: I feel like I've been having more conversations with AI these days than with actual people. So I feel like everyone should. It's not some giant or demon people are building; it's out there to actually help you. So anyone who's not using AI should try using it, because if it's going to replace anyone's job, it's the people who don't know how to use AI and leverage its knowledge.
TIM: Yeah, I agree completely. And I feel like if I had a candidate now, particularly for some kind of technical role like an engineer or a data scientist or someone close to the field, and they said, "No, I've never used AI; I don't believe in it," that would almost be like someone saying they don't use a computer. You know?
GURU: Exactly.
TIM: "I still use paper and pen." Well, yeah, you can, but you're probably missing something.
GURU: Yeah, definitely. I, I totally resonate with that.
TIM: And so, anything particular that you use it for? I've used ChatGPT a lot recently for language lessons, actually. I use it as a tutor for Italian, and it's amazing that it can just suddenly start speaking to me in Italian and giving me little quizzes.
GURU: That's amazing. That's a good use case because I'm trying to learn Spanish, so
TIM: Oh, there you go.
GURU: I should definitely try that. I do have personal chats built up. One role I've assigned to AI is as my mentor for my goals.
TIM: Oh,
GURU: There's another role as a fitness coach, where I ask it day-to-day questions. I do have a coach, guys (shout out to Tanner), but I still leverage AI for the trivial questions I want more insight on. I plug in my personal programs and ask it how to navigate certain steps. And in general, one thing I've learned from experts is that they'll have a chat with AI about some topic that's been on their mind; they used to think out loud, but now they start interpreting and talking it through with AI. So it gives you a more neutral approach, if you will. But you have to be pretty careful with the prompts you provide, because while it's answering your question, it's learning from your prompts, so it can hallucinate. That's why we have prompt engineers and all the RAG and LLM stuff. But in general, yes, I use it for almost every aspect of my life these days.
TIM: Yeah, that's amazing. And one thing I noticed myself in using large language models: I felt like I was more receptive to its feedback than I might be to a human's in some scenarios, because I know there's no personal stuff between us. It's just a large language model, so I was almost more willing to learn from it than I would from a human. Maybe that's my own failing, but it was a really interesting use case, I think.
GURU: No, that's why I have it as my mentor, because it's just easier to take feedback from someone who's unbiased. So I totally resonate with that too.
TIM: Yep. And what about then in the hiring context? Like, have you started to dabble with—I'm not sure if you've been doing much hiring recently, but have you started to dabble with any AI hiring tools at all?
GURU: So at BranchLab the answer is no, because we are a small company and we are big on company culture, so the resumes are getting vetted personally by the individual teams. When it comes to resume screening, though, if we think about it, there are real pros to AI hiring and there are a bunch of cons. I don't know if you want me to delve into that, because I can talk about it for quite a while.
TIM: Yeah, let's hear that, the pros and cons.
GURU: Yes. So AI has been a game changer, especially for large organizations where the number of resumes is in the thousands; it can easily help you parse through those resumes. So the first use case is high applicant volume. Second, you want to identify patterns across the whole applicant pool, so it enables standardization. It can also take care of inclusive hiring, but we do have an actual case with Amazon's AI, which failed to be unbiased and was preferring male candidates over female candidates. The Amazon AI model was a disaster because it was trained on data that had a majority of male candidates, so it started treating names that sounded more male as a signal and giving them preference over female candidates. That's one of the cons. On the other side, I think L'Oréal reduced their assessment time from 40 minutes to four minutes with the help of AI, and there are a bunch of companies that have done similar things. But if you look at Amazon's example, the pitfall is that AI isn't inherently unbiased. It learns from historical data, and if the data reflects biased hiring practices, AI will mirror and in many cases even amplify them, because it's training on itself; I just mentioned the Amazon AI tool. Also, I think there was research from Washington University that found certain AI models showed favoritism, something like 85% favoritism towards male names over female names, and nobody trained those models to do that. It was the historical data, and certain tech jobs are generally male-dominated, which is how the bias was created. So, in short, I would say AI is more augmentation than automation. You cannot completely rely on AI, because there are skills that cannot be measured that way, and I feel like for those skills, tools like Alooba play a great role, because they find candidates that specifically cater to your skillset; it's a more personalized approach, what we would call targeted testing or skillset analysis. But at the same time, a human touch is needed, and AI has to be used responsibly; it's not a magic bullet, but a powerful tool when used responsibly.
TIM: I think it's worth delving into the screening problem in a bit more detail, because that's where 99% of candidates get rejected, so any improvement in that process compared to whatever's happening at the moment could, I think, yield pretty special results. I wanted to share an example of an experiment I've seen done in different markets, where researchers basically apply to hundreds of different roles and split the applications into different buckets, and the only difference between the resumes is the name.
GURU: Okay.
TIM: What they're testing is whether there's some kind of bias against people from a certain background or a certain gender or what have you, anything you can infer from the name. One that was done in Australia, I think three years ago by the University of Sydney, applied to, as I said, many, many jobs and compared three groups: an Anglo-Saxon first and last name, an Anglo-Saxon first name with a Chinese last name, and a Chinese first and last name. They applied to thousands of jobs and measured the callback rate, and the only real difference between those buckets of resumes was the name, so they isolated that variable reasonably well.
GURU: Yep.
TIM: And what they found was that if you applied for a job with an Anglo-Saxon first and last name, you got a 12% callback rate on average; with a Chinese first and last name, only a 4% callback rate. And this is before AI is involved; this is all human-based screening that's being done.
GURU: Yep.
TIM: And so that means, yeah, if you had a Chinese name in Australia, you had only one-third the chance of a callback compared to an equivalent person with an Anglo-Saxon name, which is dreadful. It's so unfair.
GURU: Definitely. Yes. That's unfair.
TIM: That is unfair, yeah. Beyond unfair. And I personally feel like one way out of this might be to use large language models to do the screening of the resumes; I think it could be done in a reasonably unbiased way. I wanted to share the idea with you, because we have a minor feature that we implemented along these lines, and I'd like to get your feedback.
GURU: Yep.
TIM: The way we did it was basically this: a resume comes in, and we have the job description, so we have those two things on either side. From the resume, we extract the skills, the experience, and so on, so that's all we're left with; there's no other information left. After we've extracted that, there's no knowledge about who the candidate is or anything like that.
GURU: Yep.
TIM: Then it's basically a text match between the two documents. We score how well it matches across those criteria and come up with an overall number. That's it. I feel like that should be reasonably unbiased, with maybe the only issue being if there's something inherent to the way people write that might indicate who they are. But otherwise, I feel like we've nullified most of the potential bias around the person. What do you think of the approach?
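A minimal sketch of the kind of anonymised resume-to-job-description matching Tim describes, assuming hypothetical helper names and a simple keyword heuristic standing in for the LLM extraction and fuzzy scoring; it is illustrative only, not Alooba's actual feature:

```python
import re
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    per_criterion: dict  # criterion -> True if the anonymised profile covers it
    overall: float       # share of job-description criteria covered


def extract_anonymous_profile(resume_text: str) -> dict:
    """Rough stand-in for the extraction step. In the approach described, an LLM
    would pull skills and experience into structured fields and drop everything
    identifying; here we only drop lines that look like contact details and keep
    a bag of tokens, so it is illustrative rather than robust."""
    kept_lines = [
        line for line in resume_text.splitlines()
        if not re.search(r"@|\+\d|\bphone\b|\bemail\b", line, re.I)
    ]
    tokens = set(re.findall(r"[A-Za-z+#]+", " ".join(kept_lines).lower()))
    return {"tokens": tokens}


def score_against_job(profile: dict, jd_criteria: list) -> ScreeningResult:
    """Match each job-description criterion against the anonymised profile and
    roll the per-criterion results up into one overall number."""
    per_criterion = {}
    for criterion in jd_criteria:
        words = set(re.findall(r"[A-Za-z+#]+", criterion.lower()))
        overlap = len(words & profile["tokens"]) / max(len(words), 1)
        per_criterion[criterion] = overlap >= 0.5  # most of the words appear
    overall = sum(per_criterion.values()) / max(len(per_criterion), 1)
    return ScreeningResult(per_criterion, overall)


if __name__ == "__main__":
    resume = """Jane Doe
jane@example.com  +61 400 000 000
5 years building SQL and Python data pipelines
Led A/B testing and experimentation for a marketplace product"""
    criteria = ["SQL", "Python", "A/B testing", "Spark"]
    result = score_against_job(extract_anonymous_profile(resume), criteria)
    print(result.per_criterion, round(result.overall, 2))
```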
GURU: Actually, that's a pretty genuine approach, and it seems like it would work, but at the same time we have to do the analysis to be able to call it out and see how it performs in the market. Also, the biggest thing with any kind of data is garbage in, garbage out, right? So when it comes to looking at the resumes, we have to consider whether whatever we are feeding it and training the LLMs on inherently has a bias. Maybe everyone from big tech got approved, right? But having big names on a resume generally doesn't determine whether you would be a perfect fit for a new company. So that's one side. And I'm curious to know your thoughts about the humanistic touch, the cultural fit. Have we tried to find certain keywords that we want to see when looking for cultural fit? Is our LLM working towards that, or do we not have that capability right now?
TIM: Yeah, that's a great question. In the pretty basic MVP we built of this, it was focusing purely on matching the job description against the resume. It wasn't going, hey, also look out for these things: if they've worked at this kind of company, that's a match, or if they use these kinds of words that demonstrate X, give them an extra point or something. It was just a pure text match.
GURU: Text match, right?
TIM: A fuzzy text match, like using a model, not a direct text match.
GURU: Mm-hmm. Yes.
TIM: I would've thought the cultural fit question, though, is normally something that's going to be evaluated in an interview and maybe isn't even part of the criteria at the
GURU: At the screening process? Actually, yes. I kind of went off in a different direction with that, because data is just so fascinating and I wanted to get more information. But one thing I just thought about: once we are masking the names and just matching against the job description itself, and yes, this is not a regular keyword match, it's a fuzzy match, sometimes candidates keep things concentrated. Like I generally do in my resumes: I would just put data science, which implies I know machine learning and can do end-to-end pipeline work. What is the success rate with resumes that have concentrated information? Because all my life I've tried to keep my resume as concise as possible. What is the feedback, or what does the data look like, for resumes that are concentrated: not many keywords, but covering the holistic side of the business?
TIM: Yeah, that's such a great point. It's not something we've analyzed with our feature, but it's something that's come up on the show before, because I was speculating that maybe, if companies are going to move more towards automated resume screening using a large language model, the idea of having a very succinct resume (oh, don't make it more than two pages, keep it really brief) is thinking of a resume where the audience is a human. But maybe now we need a resume where the audience is a large language model, in which case you could be a lot more verbose and incredibly detailed, and it can then do the summarization itself.
GURU: Yep.
TIM: So there would be some advantage to having a more detailed one.
GURU: Yeah. So who knows, maybe you'll revolutionize how resumes are built, you know? But I see a pretty good use case around it, and I feel like the advantage of using AI models is that you can put in as much information as you want, because it can synthesize and condense it and see if you're a good fit, rather than a human recruiter who, faced with a two- or three-page resume, just gets bored and lost in the text.
TIM: Yeah, exactly. And there are so many different biases that can come into it, some of which we're aware of, some of which we aren't. I'll tell you a story about at least one of my known biases. I can remember trying to hire a product analyst years ago at the last company I worked at. At this company, we had a five-a-side indoor soccer team at our local university. We played every semester, and we just kept narrowly missing out; we'd lost the grand final, I think, five times in a row. We just needed one extra really good player. And I saw this resume come through, and I saw the hobby section of this guy, who was Brazilian.
GURU: Mm-hmm.
TIM: And I have this thing where I love Brazilians, so straight away he's elevated above other people in my mind, which is
GURU: Yes.
TIM: But then in his hobby section it says semi-professional footballer from Brazil.
GURU: What else do you want?
TIM: Amazing. Exactly, yes, you can do A/B testing, and yes, you can play football, you know. Job done. And so he got an interview. We didn't end up hiring him, but he definitely got an interview at least partly because he played football, which obviously has nothing to do with his user behavior analytics ability and was quite unfair for the other 599 candidates that had applied.
GURU: Yep.
TIM: And that's just one tiny example of one bias I was conscious of, let alone the hundreds I'm not even conscious of.
GURU: Exactly. Biases exist in every aspect of our lives, and it's just human nature, so it's not something we can just get rid of. It's an ongoing process where, if you like someone, you have to ask: why do I like them? Is it because of their work, or is it actually because they are more aligned with what I like, a better fit for my thinking? I have tons of such examples. Even smaller things, like the people you talk to at the gym: whatever kind of workout you're into, you have a bias to like those people more, because you share a hobby. So CrossFitters generally hang out with CrossFitters, and powerlifters hang out with powerlifters. I've been on both sides, because I used to compete in bodybuilding before I switched to powerlifting, and the whole mindset has changed. Now, if I see a bodybuilder, I'll be like, yeah, he's good. But if I see a powerlifter, it's more like, oh, what's up? What are your numbers looking like? How do I improve this lift? So we are prone to biases. That's one of the things that's hard to change.
TIM: It is. It's hard, or even impossible, probably. I can think of some candidates I've interviewed in the last few years where, in the first few minutes of the call when you're just warming up, if we started talking about football
GURU: Yep.
TIM: And the English Premier League, this and that, then I would be so clouded by my love of football and my sudden affinity with this candidate that it becomes very hard to then make a rational decision. It becomes so emotional at that point.
GURU: I hope the candidates who are watching this video know that whenever they're applying to Alooba, they should do their research on football.
TIM: Yeah, yeah,
GURU: And they figure out what your favorite club is, and that's it. Like they're already at least 20% ahead of other people.
TIM: Yeah, exactly. And this is why I think the biggest benefit of AI in hiring, personally, is not going to be around the periphery, like oh, it helped me write a JD better, or it summarized my interviews, which is helpful. I think it's going to be in the core decision-making of who to hire, which at the moment most people are very reluctant to consider. But if I'm sitting there going, well, I've just given a candidate a 30% boost in my mind just because they like football, what other things have elevated or reduced my view of people? Whereas really I just need to hire the best person; I don't want to be clouded by my own judgment. You know what I mean?
GURU: That's such a good use case. And I feel like you never say never when it comes to technology. When cell phones came in, we were both part of the generations where people were like, who wants a phone on them all the time? Then smartphones came in, and now it's the norm; if you don't have a smartphone, people look at you weird. And whereas that technology took years to evolve, AI is evolving by the day; there are changes coming every week and every month, and companies are competing. So I would never say never. And I would hope that in a perfect world, AI would be so unbiased when it comes to training for interviews that it also pinpoints you and your biases, and that would help me be a better person, I feel.
TIM: Yes, I think so. Particularly if we get to a point where even something as simple as every interview is recorded, transcribed, and summarized, and then, at the end of the hiring process, we're able to look across the three or four interviews a candidate had, with the AI giving the interviewer feedback: by the way, make sure you mention this thing, because the candidate mentioned in three of the four interviews how important it is to pick up their kids from school, so make sure you confirm, yes, they can do the school pickup. I can think of probably hundreds of little things like that we might otherwise miss, which surely could make the process better if done right.
GURU: Definitely, yes. Also, since we're delving into feedback, what are your thoughts: are we also implementing some kind of feedback mechanism for the candidates from the AI? Or is that a thing for the future?
TIM: For us, in our little beta feature, that's for the future. For the skills tests, we have that, and we always encourage our customers to leave the feedback on. Basically, as soon as a candidate finishes a test, they get instant scores: where they've done well, where they need to improve, how to improve, et cetera. And I certainly hope that the CV screening tools lots of companies will be building at the moment would give them the ability to share the feedback with the candidate. My thought is that companies might be reluctant to open up that transparency and say, well, you know, we've rejected your resume, here are the five reasons we've done that, here's how we scored you. That should be available, because that's the output the large language model will have.
GURU: Yes.
TIM: It should, but I could see how companies might be reticent to do that, particularly in a market like the US, where companies seem to be worried about maybe getting sued if they say the wrong thing or what have you. Is that something you've seen?
GURU: So first thing, the EU is the leader when it comes to rules around AI. For anything that involves hiring or AI, the EU is pretty much setting the norm: you have to be pretty clear about explaining to the candidates, or anyone, that there's AI involved in it. When it comes to the US, I don't have much insight into the industry trends specifically on AI hiring, but I could see potential reasons why, because if the feedback is unbiased, it might hurt someone's feelings too. I think in a perfect world it'll come to a stage where we would be able to provide certain insights so a candidate figures out what they need to improve. But for now, I feel like the first step is just helping candidates, because it's so stressful to look for jobs, and I know certain people who've been searching for almost six months now. Having some transparency, which might even just be where your application is at and who's reviewing it, maybe HR, maybe the hiring manager, just a status update, is a first step towards that transparency. Eventually I feel like we can build a perfect world where we are able to provide candidates a true and honest insight into what we think about their skillset. And someone's opinion doesn't gauge your potential. I don't know what sport we want to talk about, say football: Ronaldo doesn't perform every game; he can have a bad game, but that doesn't make him a bad player. Or Messi: he can mess up a game, but that doesn't make him not the greatest player. I don't know who you're inclined more towards, but that's one thing. Candidates would also have to get to a level where they're accepting of feedback, because no feedback is bad unless you take it in a negative way.
TIM: Yes, and it's from such a small sample size as well. An interview might be 30 to 60 minutes with one other person. And I know from doing recruitment that often it's a subtle thing a candidate might do or say that then seems to cloud everything in negativity. I can remember just a few months ago working with a company where, in the very first interview, a candidate got asked to self-rate themselves in SQL, and they said they thought they were a 10 out of 10. I wouldn't say I'm a 10 out of 10 even if I thought I was, because I'd realize that's going to come across as a little bit arrogant, or like I can't learn anything. But it was just that one answer to that one question that carried through their entire hiring process: the perception that they were uncoachable and arrogant, which was not necessarily true at all. It was all extrapolated from this one word, this one number that they gave once.
GURU: And that's a big bias if you think about it. Sure, that would never be a good answer for me either, but I would try to gauge the person, because knowledge is a sea; nobody's going to be perfect. What if they're someone who regularly dives deep into it and thinks they can come out of any problem? Because from a candidate's perspective, the current scenario in the US job market is that the average time to get hired, from what I know, is almost three months now, even though the hiring process itself is generally around 32 days, and people are saying in polls that they apply for a hundred jobs and maybe get one interview. So the market's pretty tense and competitive out there. In that sense, candidates get fearmongered, and they think, if I'm not the best, I have to put my best foot forward, and they might genuinely be really confident with SQL. I feel like that's where recruiters should not get biased, and AI would help in the sense of: don't take it as arrogance, just take it as this person being willing to do whatever because they think they're close to an SME, and with that mindset, any problem can be solved. I would have rather given them an SQL query, not setting them up for failure, but challenging enough to show them the reality, and they would have gauged their own performance on that. At that point, the tools and solutions Alooba is bringing would play an important role, because for AI, that's one data point, but it doesn't take away all the other good work the candidate or applicant has.
TIM: We mentioned earlier the stressful candidate market at the moment, and how maybe providing a bit of feedback, having a bit of transparency, could help reduce that a little. I did just speak this morning to Moon, who heads up talent acquisition for Greenhouse, the applicant tracking system. They've started rolling out a really interesting feature whereby they have four different badges that an employer can get on their job ad to verify they've done things in a certain way. And one of them is about ghosting. Because Greenhouse has access to all their customers' data, they can see exactly, down to an application level, which companies have gotten back to candidates and which haven't. And I think they have this 95% metric: okay, if you've hit more than 95% feedback, then you can put this badge on your job ad; if you don't, then it can't go on there. I think that's a really great way of doing it, because what gets measured gets monitored, doesn't it? This doesn't involve any complicated AI. This is just tracking things and being honest.
GURU: Definitely, yeah. And I think hiring is a two-way street. Even though you might be the founder of a pretty well-formed business, you still have to seek confidence from the potential employee coming to you, and also give them confidence in yourself. It's like any relationship in that sense. In general, this is my philosophy: in every relationship, you can only take so much out of it until you start giving back. If you're not putting much into a relationship, you cannot get much out. So companies need to understand that if they want their candidates to be transparent, to be honest, and to fit the company culture, they have to do the same, and they have to provide feedback. You might have X amount of work, and at that point you automate it, but you still need clear communication, because ghosting is one of the most frustrating things when it comes to hiring. I think there was research done by LinkedIn about ghosting, and I don't know the exact statistics, so don't call me out, but it was something like 70% of candidates pointing to the hardest part of the job search being just getting ghosted. I think the world could be way better if we were just transparent and let people know with a simple piece of feedback.
TIM: I one hundred percent agree with that. And actually I have maybe a connected bit of research, from LinkedIn and also, I think, Greenhouse, about not necessarily ghosting but ghost jobs, as in fake jobs. They found that roughly 18 to 22% of jobs were what they classified as ghost jobs, which are not necessarily completely fake but are ones where nobody ended up getting hired at all, for which there could be many reasons. One of which is just that the role gets pulled; the company's maybe not going as strong as they thought, and all those kinds of things. But this is one of the things that prompted Greenhouse to start doing more transparent things, where you as a candidate can know a little bit more about the company's track record, which I think is a really good improvement.
GURU: I feel like, yes, companies should have some kind of accountability when it comes to that. Luckily, at all the employers I've previously worked for, I haven't experienced ghost jobs, so I cannot comment on that. But as a job seeker, if I were in the market, it would be really bad for me; I don't have better words, I would just be pissed about it. So any company that's doing that should be held accountable, and shout-out to the platforms that are bringing that transparency out.
TIM: Guru, if you could ask our next guest a question about hiring, what would you choose to ask them?
GURU: It's a tough question; just let me process it. The biggest question is biases, but I feel like we are kind of close to solving that problem. One thing is that when it comes to hiring, there's the immediate hire, the current request, what you want to get fixed right now, and then there's the long-term hire, someone you would nurture and want to see grow into a leader. So what I want to know, and this would help me because I'm hiring right now, is: how do leaders look at candidates from an immediate hire perspective and also from a long-term perspective? Because sometimes the request can be so pressing that you have to have that person, but how do you also gauge them as a long-term employee of the company?
TIM: Yep, a great question. I don't think that's one we've asked anyone yet, that kind of short-term versus long-term horizon; that's a great one. I will ask whoever our next guest is sometime early next week and will be interested to see what they have to say. Guru, it's been a great conversation today. We've covered a lot of ground, deep-dived into a bit of AI and hiring and a few other areas, and learned a little bit about you. So thank you so much for joining us and sharing all your insights with us today.
GURU: Oh, thanks a lot, Tim. It's been a pleasure, and I look forward to talking with you again.