Alooba Objective Hiring

By Alooba

Episode 28
Gio Lanzani on The Impact of AI on Hiring: Enhancements, Challenges, and Ethical Considerations

Published on 12/1/2024
Host
Tim Freestone
Guest
Gio Lanzani

In this episode of the Alooba Objective Hiring podcast, Tim interviews Gio Lanzani, Managing Director of Data, who discusses the transformative role of AI in hiring, differentiating between automation and true AI. Gio emphasizes the importance of blending the human touch with technology in recruitment, cautioning against over-reliance on AI-generated content by candidates. The conversation explores the benefits and challenges of using AI for CV parsing, the importance of human traits in interviews, and ways to mitigate bias in hiring. Additionally, Gio shares advice for candidates on making their applications stand out and highlights the significance of authenticity and genuine interest in the hiring process.

Transcript

TIM: Gio, welcome to the Objective Hiring Show. Thank you so much for joining us.

GIO: Thanks, Tim, for having me.

TIM: No worries. It is our sincere pleasure, and I think the best place to start is the place that everyone is starting and talking about these days, which is AI. AI is impacting so many parts of society, the way we think, the way we consume information, and I'd love to zone in on how you've already seen it impact hiring. Have you started using AI in your hiring process? Have you seen candidates use AI as part of their application process to your roles?

GIO: I've actually seen two trends. One is the automation part, which is not really AI but which people still associate with it a lot, and the other is, let's say, proper AI, the machine learning in the background. When it comes to automation, I think that's low-hanging fruit because, of course, you can automate a lot of things in your hiring process, from filtering based on some properties of the candidates' applications to some questions that you ask when they apply, which I think is still super useful. It also contributes a lot to giving you good data about your hiring process, because once you are automating stuff, the data quality probably goes up because suddenly things get filled in automatically, and that's really crucial when you want to tweak the process; this automation really improves the metrics and the insight that you have into the process. The second part is AI, and the way we use it currently is mostly on the computer vision side, parsing the applications we receive. Whether you know it or not, the CVs that we get come in all sorts of shapes and forms, and having a model automatically extract the relevant information for us speeds up the triage of new candidates, and that's really helpful. Again, it's one of those things where there's not a lot of controversy around whether to use it or not, and it saves you so much time. On the side of the candidates using it, I think they are, but probably in the wrong way. What I mean by that is, I have an example from when I was hiring for some roles in the Middle East, and among the requirements we had programming languages like Python and technologies like Apache Spark and Databricks. So you would ask people, do you know Python, do you know Databricks? And if they were at that stage, they would say, yes, of course I do, years of experience. Then you gave them, live, a very tiny code snippet to solve, a trivial problem that would take probably 30 seconds for anyone who was a bit experienced with Python, and you would see that they would crank out answers given by ChatGPT. I know that because I myself used ChatGPT to check what answer a candidate using the tool would give us, and I was getting the same type of answers. So basically they were using AI to fix their lack of experience, and I would immediately see that because I have a good idea of what an AI answer looks like versus how a human would solve it. So that's why I think they're using it the wrong way.
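
As a purely hypothetical illustration of the kind of trivial, 30-second screening snippet Gio describes (the actual exercise from those interviews isn't given in the conversation), the sketch below is the sort of Python problem where an experienced developer answers in seconds, and where the shape of the answer often betrays whether it was written or generated:

    # Hypothetical screening exercise: sum the order amounts above a threshold.
    # An experienced Python developer reaches for a generator expression;
    # a pasted, generated answer often arrives as verbose, non-idiomatic boilerplate.
    def total_large_orders(amounts, threshold=100):
        """Return the sum of every amount strictly greater than the threshold."""
        return sum(a for a in amounts if a > threshold)

    if __name__ == "__main__":
        print(total_large_orders([50, 120, 300, 99]))  # prints 420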

TIM: That's a great example for candidates, and I feel a similar thing: candidates underestimate how much experience hiring managers have with reading CVs and reading applications. The difference between ChatGPT-generated content and human-generated content is, at the moment, very obvious, especially once you've read a lot of it, and especially obvious if you're a candidate who applies to a role where some of what you submit is your own written content and other parts are copied and pasted from ChatGPT. If the hiring manager is comparing those side by side, it is very obvious that two different things have written that content. So I feel like candidates should be a little bit more practical in how they use it, and maybe not use it as such a blunt instrument: use it sensibly to apply faster or to do things they already know how to do, better. Can you think of good examples of where candidates should be using ChatGPT in the hiring process?

GIO: A tool like ChatGPT or other generative AI tools is a great tool, and if you look at software development, data engineering, and engineering in general, I really think of it as a craftsmanship kind of job. For every craftsman and every craftswoman, of course it's about the tools, but it's also about how to use the tools. If you give me a tool to create glasses, just regular water glasses, I probably wouldn't know how to use it, and I would make really crappy glasses, but if you gave the same tools to somebody who is good at it, they can really make beautiful, beautiful glasses. And if you give them a very good tool, they probably get faster and quicker, so they spend less time on the repetitive tasks and more time on the craft itself. I think about GenAI tools for coding along the same lines: if you are not good at the basic stuff, you're going to have a very hard time using the tool properly, and what you get is code that does not quite feel right. That also happened with the previous example I spoke about, those interviews in the Middle East. When I checked the code written by ChatGPT, it was correct in the sense that you could run it and, given the input the program had to expect, it produced the correct output. However, when you looked at it, the way the code was crafted was really hard for a human to understand; it was not written in an idiomatic fashion. And it means that, I imagine, when those people go into the real job, they will start cranking out code that other people cannot understand. You create a lot of technical debt, because you have these pockets of code everywhere that maybe run correctly, but when you want to update them, or just understand them enough to write some tests around them, you don't understand them, or you have ten times the cognitive load to work out what the piece of code does, and that's very detrimental to the whole process. On the other hand, if you already know how to use the tool, which is your original question, then you also know what to do with its output and how to interpret it, and it becomes an autocomplete on steroids, which is super useful because it saves you time on all the boilerplate things that we write every day as programmers, and then you fill it in with your knowledge on top of it. So again, I think it's an enhancer; it's not an enabler if you're not good at something.

TIM: I wonder whether also the danger is how subtly wrong it is about something where if you know nothing about the topic, it passes a first smell test pretty well. It looks fine, but it's still nowhere near a hundred percent done, so if you don't have any knowledge of the topic and are using it for something completely outside of your knowledge base, it's almost giving you a false sense of confidence, would you say?

GIO: Exactly.

TIM: And so in terms of candidates then, using this in the hiring process, are there any stages or any things that are off-limits, like you mentioned, okay, using it in a real interview to substitute for the fact you simply don't know the skill, like you don't have the fundamental skill that seems like an issue? Are there any other bits of the process where you think I would rather candidates not use ChatGPT at all or a large language model at all?

GIO: When we screen candidates, we are looking for a set of skills, and if they have them, they can work for us. Once they work for us and they're doing assignments or doing work, they can use ChatGPT to make their work better and faster, but, as we just said, you still need to use it in a conscious manner, and you can do that if you master the underlying problem space. So I am okay when they use it like that during the hiring process too. We have a take-home assignment during the process, and if they use it to do the initial exploration, to do some brainstorming, to get a nudge in a new direction, I think it's great, because that's how we use it in our daily practice. But once you go beyond that, if you hit a wall that's too high for you to climb, I don't think ChatGPT will get you there. You already need to be able to climb that wall; ChatGPT will just get you there faster. If you don't have the underlying skills, it's like climbing: I wouldn't put a kid on a steep wall where they can hurt themselves. They have to be trained, they have to be a bit more prepared, and then you can get them very good shoes or whatever to do the climbing a bit faster, but you have to do your homework.

TIM: Now, of course, that's what we say now, but the rate of change is pretty staggering, so who knows what these tools could do in a few months' or even a couple of years' time? Maybe by that point the usage could be different; it could become a tool that someone with no knowledge might be able to rely upon.

GIO: We'll see how things evolve, and again, the speed of change is high, so it's important to be aware of how not only the hiring process changes but also how the work itself changes. Maybe one day one of the skills we'll test for is good prompt engineering, if prompt engineering turns out to be super important for us to do our job.

TIM: Yeah, and I think what's really helpful, and what I've seen a few companies do recently, is get people to almost start from zero and think about their day-to-day jobs, what they do, and how they could at least try to incorporate ChatGPT, Claude, or any other tool into that, and rethink how they're doing things. Because if we are at a fundamental shift in technology, then there's probably a lot of stuff we've been doing a certain way for a hundred years that is about to be obsolete, and if you're not thinking consciously about it, hang on, am I really about to manually write this entire email from start to finish? Maybe I don't have to do it that way anymore. And it's hard to think like that, because old habits die hard; if you've literally been doing something a certain way forever, you almost need to be shaken: no, you don't have to do it this way, maybe LLMs could help here.

GIO: Exactly.

TIM: And speaking of that, you mentioned initially your own use cases of AI in hiring, and you split it into AI and then just sensible automation, which I think is a little bit overlooked now, maybe because it's not as sexy or interesting as AI, but God, there are a lot of simple things you can do to automate. On the AI front, you mentioned using computer vision to parse CVs, and that's something you're doing currently. I'd love to hear a little bit more about that.

GIO: Yeah, so right now it's basically built into our ATS, the applicant tracking system, and we just, let's say, enjoy seeing it work, because before you had to extract a lot of information by hand. We collect a lot of fields, again to speed up our decision process but also to give us more insight into the underlying data, to be more data-driven in our hiring process. So this tool that automatically recognizes certain information and pre-populates the fields for us is really super important. Of course you still have false positives, and you still maybe have to do a quick check, but just the fact that you don't have to copy and paste means a lot of the tediousness is taken out of that step.
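
The extraction Gio describes is done by a computer vision model built into the ATS, which isn't shown here; purely as a toy sketch of the end result, with made-up field names and rules, unstructured CV text gets turned into pre-populated structured fields that a recruiter only has to spot-check:

    import re

    # Toy illustration only: the real parsing happens inside the ATS's vision model.
    # Field names, regexes, and the skills list below are invented for this example.
    def parse_cv(text: str) -> dict:
        email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
        years = re.search(r"(\d+)\+?\s*years", text, re.IGNORECASE)
        skills = [s for s in ("Python", "Spark", "Databricks") if s.lower() in text.lower()]
        return {
            "email": email.group(0) if email else None,  # recruiter still spot-checks for false positives
            "years_experience": int(years.group(1)) if years else None,
            "skills": skills,
        }

    if __name__ == "__main__":
        sample = "Data engineer, 6 years of experience with Python and Databricks. jane@example.com"
        print(parse_cv(sample))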

TIM: Yeah, it's the magic of being able to convert unstructured data to structured data because then once it's structured, then the sky's the limit on even basic to advanced analytics that you can do.

GIO: Exactly, yeah.

TIM: And okay, so using this computer vision tool, is this something you've built and embedded into the ATS, or is it an offering from the ATS itself? Like, I'm keen to hear a little bit more about it.

GIO: It's an offering from the ATS itself. Yeah, I think it's something that you have to turn on, and you have to configure a bit, of course, the mapping of the fields and those things, but the hardcore modelling is taken care of by the ATS itself.

TIM: And so what is the end output of that? Is that giving you, like, a score of an application or stack ranking them or categorizing them in some way?

GIO: No, the major output is that it pre-populates fields, and then we use that during the hiring process, either to take a decision on whether to continue or not with the candidate or to focus on a particular area. So it's not taking decisions for us; it's giving us insights, because the hiring process is very people-oriented, of course, since we're dealing with people. I don't think the tooling is ready to start taking decisions automatically unless it's an easy filtering mechanism: do you speak Dutch, yes or no? What's your English proficiency? I wouldn't even call those decisions; I would call them rules rather than decisions. But to go one step further and say, does this particular set of skills, or the combination of skills the candidate has, match what we need, in a fuzzy, more statistical way? I think we're not ready for that. There was a case in the press probably a couple of weeks ago of a hiring manager who submitted his own CV to their system and got rejected for roles he was a hundred percent qualified for, and some confusion ensued in the HR department of the company, which maybe was even Microsoft that was leveraging this. So that was, I think, a high-impact case.

TIM: Yes, I have quite mixed feelings on this myself, because I feel like at the moment our expectations of AI and technology are too high, and we're focusing on the one negative example and missing the thousands of negative examples of the current way of doing things. I'll give you an example of that. There have been lots of studies around the world looking at CV screening and your odds of getting a callback for a role depending on the name on the CV. In Australia there was one a few years ago that the University of Sydney did, where they took thousands of different CVs and grouped them into three groups: the first group had a white first name and white last name, the second group had a white first name and Chinese last name, and the third group had a Chinese first name and Chinese last name. They then applied en masse to thousands of different jobs, different industries, different levels, all around Australia, and measured the rate at which they got a callback, and the only difference among these three sets of CVs was the name; they were otherwise equivalent in quality, experience, and so on. The first group of CVs got a 12 percent callback rate; the third group got a 4 percent callback rate. So if you apply to a job in Australia with a Chinese name, you have only one third the chance compared to applying with a white name. That's the current situation, which is obviously unfair beyond belief. So when I think about AI coming into this and, at least in theory, being able to do it in a much more objective way, to stick to the criteria, to not get tired because you've read 500 CVs, to not introduce your own bias and discrimination, I feel like there's such a huge upside that we shouldn't focus on the one or two examples that are going to be in the media and that are going to dissuade people, if you know what I mean. What are your thoughts on that?

GIO: When you think about discrimination, you have to build some safeguards into your process, and removing the parts of the CVs that could be used in a discriminatory manner is, I think, super important. I believe Google does it: there are people that write the evaluation, and they have to strip out some details, and then there's another committee that looks at that. I think that's one way to remove bias from your process, but it has to be conscious, right? You have to consciously do that, because even if you have AI to aid you, if you still feed it the name or the gender of the person applying, you will still get bias, and if the data the model was trained on was biased, you will get biased results in your modelling. So it's important, whether you use AI or not, to think about the ways it could go wrong and then put some guardrails around that. It's also important to know what the guiding principles are that you have as an organization, and also the guiding principles of the job itself. Sometimes you have to discriminate based on some conditions if you operate in a certain market. For example, if you are a consultancy and you know that 95 percent of your market would not hire an external consultant with a certain condition, even though they do hire such people internally, it does not make sense to pretend otherwise in your hiring process, because it wouldn't make anyone happy; if you know that, say, neurodivergent candidates will have a harder time in those settings, it's good that you are aware of it during your hiring process. And I think that's where the human part is very important, because then you can have a conversation with the applicants instead of a straight rejection. It's something that we do: sometimes the candidate does not fit with the company we have, but we have customers that are actually looking for exactly those profiles, and for them it's not an issue, so we actively put them in contact with the hiring managers at those customers, and it's always appreciated. When you have an honest conversation, you explain what it is, you explain why it doesn't work, but then you also think about a solution: how can I help you anyway? That's really appreciated, and that's why I think that taking the whole humanity out of the hiring process is just wrong, because at the end of the day people are not just numbers and structured data; we are very unstructured, and all that unstructuredness sometimes is not captured by an automated process. So it's important to always keep that human touch in your process, at least for us.

TIM: Even if the human touch is vehemently biased and discriminatory, even at that level, hypothetically, if an AI could be created to do it in an unbiased way, would you still, though, favor a more human-based approach?

GIO: The thing is that, again, sometimes your process can be discriminatory because your requirements discriminate, right? If I say that you need to write Python, I'm not discriminating against the people who cannot write Python because I want to discriminate; they just are not a good fit for the job, because they won't be able to be successful there. And the thing, again, is to be aware: if your thesis is that there should always be a human touch, it's also important that you have bias training for the people involved in the hiring process, which is also a thing that we do. The recruiters and the hiring managers have these bias trainings to understand what their implicit biases are. How do you counter that? How do you make the team diverse, because it's not a single person that hires but the whole team, so that you get a broader look at a candidate and a CV? And again, if you start thinking about it and embedding in the process the fact that you can have bias, then some evaluation steps need to be completely transparent and unbiased so that everybody gets a fair chance; that's super important. That's why we also put a lot of time in when we have face-to-face or live interviews, and it's important that once there is a human involved, you give everyone the same set of rules. You know what questions to ask, the majority of the questions are always the same, and you have clear guidelines on how to evaluate them, because even if one person wants to be or tries to be unbiased, sometimes we interpret questions differently. So giving clear guidelines on how to interpret them and ensuring that everybody interprets them in the same way is super important. And how do you ensure that? I think collecting the metrics and knowing what happens at a particular stage with a particular interviewer is one of the first steps you have to take to correct that. And even with AI models, if you don't give it thought and you have bias in your hiring data, or even if you don't have biases but you just have very little data about a particular group, the model is not going to be very good at evaluating them, because there was not enough signal in the training data to have a significant impact on the model itself.

TIM: I feel like a lot of this overall problem could be solved by consciously bringing forward and writing down exactly what the criteria are, in as black-and-white terms as possible. A lot of where hiring tends to go awry is that people are looking for different things in a candidate: they have their own conscious thoughts about what a good candidate should be, and their own unconscious thoughts about what they're avoiding. If we just committed everything to paper and said, no, here are the ten criteria we agree on 100 percent, we're not hiring this person unless they meet these ten things, whatever they are, soft skills, values, technical skills, then it's on paper, and you wouldn't get scenarios like people with a Chinese name getting rejected through unconscious bias or discrimination. And then I would have thought, again in theory, at some point some kind of automated system could say: we ranked these people, we looked at their interview answers, their test score, their CV, their LinkedIn profile, here's how we rank them against these ten things, here's who we think you should hire. Because then the only thing that could go wrong is maybe we missed a criterion, we forgot about something, or there's something completely immeasurable that is important that an AI could never understand, like some kind of gut-feel intuition. Do you feel like the end state is going this way, or is this not going to work for some fundamental reason I'm missing?

GIO: Yeah, so again, I think it depends a lot on the type of candidates that we hire. In our case, we hire candidates for a consultant profile, so the interaction they have with customers, working in a team under stress, is important: the kind of looks they give you, how they make eye contact. I'm just finding it hard right now to codify that properly, in the sense that a lot of it is basically how they make you feel. If you're a consultant and you work for customers, you might think, what kind of criterion is that? But if they don't make the other party comfortable, it becomes very hard to do a good job, because when you start being uncomfortable, something kicks in where you start mistrusting the other party a bit. And I think AI is, let's say, not there yet; maybe it can spot it when it watches a conversation, but you still need a human conversation going on where you basically simulate or replicate a real-life setting. That does not hold for all roles, but for a consultant there are a lot of real-life settings that are important. We get a feeling, and speaking about feelings leaves a lot of room open, but you get a feeling for how they will perform in real life. So, can we take it out completely, or can we get AI to help us? Absolutely we can get AI to help us: let's feed it the recording of this video and maybe it sees things that we don't see. But there's still this human aspect that is very important.

TIM: Yeah, absolutely, and when I hear this I sometimes feel like it's a catch-all, this final thing that really matters, and you just laid out a perfect example. If you're hiring consultants and part of the value to your business is how much revenue they bring in and how much business they book with clients, then obviously the client has to like them, otherwise they're not going to get repeat business and you're not going to make as much money. It's like a one-to-one relationship between how likable they are and how valuable they are to your business, so to me that is a perfect criterion to rank someone on. I wonder if it's something that could be made almost explicit in the process: we've got all these things that we can measure, and here's this other thing we're going to call likability, it's worth 30 percent, and everyone in the hiring process is going to give this person a score out of 10 for likability. Then at least it's unpacked and explicit in the process, as opposed to, oh, I just got a negative feeling about them, I don't quite know why, so it's a no. What do you think of that approach?
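
A minimal sketch of what Tim is proposing here, with hypothetical criteria and weights (the 30 percent likability weight is his example, not an actual scoring model used by either company): every interviewer scores each agreed criterion out of 10, and the weighted total makes the otherwise fuzzy "gut feel" explicit:

    # Hypothetical weighted scorecard: explicit criteria, explicit weights.
    WEIGHTS = {
        "technical_skills": 0.40,
        "communication": 0.30,
        "likability": 0.30,  # named and weighted instead of hidden in gut feel
    }

    def weighted_score(scores: dict) -> float:
        """Combine per-criterion scores (each 0-10) into one weighted total (0-10)."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    if __name__ == "__main__":
        candidate = {"technical_skills": 8, "communication": 7, "likability": 6}
        print(round(weighted_score(candidate), 2))  # prints 7.1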

GIO: I think it goes back to the process, like how you score people, and that's why we standardize a lot in our hiring process, to be able to uncover these softer skills or characteristics that are very important in the real-life job. And again, it's important that you have a standard: how do we phrase a question, how do we create the most welcoming environment for the candidates? So it's not completely unbiased, but you try to give everybody the same playing field. It's also the case that the experience of the interviewer plays a big role, right? Even if you have everything codified, the fact that you've maybe done it a hundred times means that if a candidate comes across as nervous in that particular moment, you know where the nervousness is coming from; maybe it has nothing to do with anything intrinsic, maybe they just told you that something happened at home or they have something on their mind, and you take that into account. Knowing how to take that into account is something where I think we humans still have an edge on AI, that sort of evaluation capability, even though we lose that edge in some other areas, like when it comes to being perfectly objective about some candidates.

TIM: Yeah, the AIs aren't in control yet; we still have our domain. What I'll be interested in seeing, actually, because I'll give you a related example: I saw recently a use of AI in farming. To us as humans, the difference among human faces is massive; you wouldn't forget the difference between your children's faces or your friends' faces, we can identify them as obviously different, whereas cows look much more similar to us. But AI has been trained to easily differentiate cows' faces that to us look identical, and apparently the distribution in cow-face space is actually quite large, mathematically comparable to humans. So I wonder whether AI is going to start to generate new insights that we haven't even contemplated before, that might add extra information, data points that we're not even capable of processing. That would be amazing, I think.

GIO: I think that's especially valuable when you start collecting more data, and by that I mean: in an interview like the one we're having, I can pay attention to only so many things; I can either look straight into the camera or straight at the monitor where I see you, and I have to think about what I have to say, so I have a limited bandwidth to pick up signals. But when you put an AI on it, its bandwidth is almost unlimited, so it can pick up way more data about a conversation than I can, or you can, while we're having this conversation, right? It's not just where you look but how you look, how you behave. Is there something on your shoulders, a muscle that is tense for a millisecond? Are you blinking more than in the previous few minutes? I wouldn't be able to count how many times you blinked during the interview, or whether the blinking frequency changed, but an AI can do it, and it could reveal a lot about not just the interview but also the interviewer. And I don't know if you could do it from just the camera, but if the camera could infer your heart rate, I cannot tell your heart rate just by looking at you, but if a camera can, that's so much information that is valuable for the hiring process. If a simple question makes your heart rate jump, then maybe you don't know a lot about that topic, or it makes you uncomfortable. Or, let's say it's a coding question like the one we talked about before, and you start making stuff up and maybe you fake your way through the question, but the AI would see, hey, wait a second, there's a heart rate spike, something fishy is going on. If you're an expert in something and you get a simple question about it, you should be relaxed answering it, and if the AI detects that you're not, it could be a red flag. Or maybe the candidate came across as very nervous or stressed to the interviewer, but the AI says all the vitals were actually pretty constant, so it was your bias as an interviewer and not something with the interviewee.

TIM: One thing that I was reminded of as you were discussing that, and I'm not sure if you've used AI yourself for this, is getting feedback in a non-threatening way. I was actually sharing some transcripts from these calls that I've done with an LLM and asking it to tell me the top five things that I could improve as an interviewer, and when I read that, not only did it seem reasonable, but it was also completely unthreatening. I didn't have to watch the recordings myself and relive all my miserable mistakes, and I didn't have to ask someone for their opinion and then feel defensive that they're judging me. I got this unjudgmental feedback from a large language model, and I imagine that could be especially helpful in hiring as well.

GIO: Yeah, absolutely. Again, the human connection is important, so we take it very differently depending on whether it's a human that gave us the feedback or not, and I think there's also a very big difference, you can feel it when you give feedback to a candidate, between giving it in person right after the interview, calling them afterwards, or sending them an email. It feels completely different, and probably the email from a human, so to say, is the one that puts you on the defensive the most, because you don't have a connection and maybe some nuances do not come across. Speech is very different from the written word, and the written word from a person is very different from an impersonal system. So yeah, that definitely has an impact.

TIM: With these large language models, content can be created at stupendous scale very cheaply, and companies are all competing for the same pool of talent. Can you think of any ways that companies could start to stand out and get their jobs in front of the right candidates using some of these large language models?

GIO: The thing is that everybody has access to them, so how do they give you a competitive edge when everybody has access to the same tool? I think the way you can stand out as a company goes back to what we said before, which is to do the work, in the sense that a lot of the time candidates have either heard about us, or we point them to the occasions where their future colleagues, for example, have been on stage at a conference. The fact that they can go online and watch what their future colleagues are like, how they come across, what kind of work they do, what kind of culture they project to the audience, that's actually a great way to stand out. But again, there's a lot of work behind it, because I know some companies that basically only put their leaders on stage, and that's not representative. What we do a lot is actively encourage everyone to be on stage, to be on a podcast, to write books as well; we just had six or seven colleagues publish a book with Packt, Fundamentals of Analytics Engineering. That's a great way to communicate what the culture of the company is and how you stand out. When you end up with two offers and one of the companies makes all this, let's say, information or knowledge public, it starts to become a differentiator. You start to think, hey, wait a second, many things that I can read about a company, or that they write to me, could be generated, but this stuff is real. So I think having this connection with the real stuff, as opposed to the generated things, makes companies stand out.

TIM: And as you say, it's doing the work. It's got to be authentic and high-quality stuff.

GIO: It's not a shortcut if everybody knows it; if GenAI gives you a shortcut but everybody knows the shortcut, then everybody gets access to the same thing.

TIM: Now flipping it around a little bit, I wonder if candidates should be thinking about this more, given that the current world is one where there's an increasing number of applications per role. Candidates seem to be applying en masse to more roles, and they feel like maybe they're not getting a fair go going through that traditional job application process. If they had a bit more of a public presence, with some content, a bit of thought leadership, then that would be a way for them to stand out as well.

GIO: So at the beginning of my career, more than 10 years ago, I was given a good tip. Of course, you can stand out when you have a public presence; if somebody can speak in front of an audience and there are recordings from a conference, that's an amazing way to do it, but not everybody has that occasion, especially at the beginning of their career, and they also probably don't have a lot of referrals to point people to. And the tip I was given is motivation: why do you want to work for a company? It's a big thing, and you can read so much on the website, but it's still pretty dry. So when I read a cover letter and people have just looked at the website, they might be genuinely motivated, and of course it gives them an edge, but the bigger edge comes when they have tried to make contact with somebody within the company, to get a chat like the one we're having, before starting the application process, to find out: do I really want to work at this company? And then what they hear during that chat they put into words in the cover letter. That's a whole different level, because even for me as a hiring manager, I know that the candidate went to extra lengths about wanting to work for us; again, they did the work, and that immediately makes them stand out. But it's important that you truly are motivated about working at that particular company. If your strategy is, I'm going to send my CV to a hundred companies, it doesn't really make a lot of difference who responds, because you're sending it to 200 and you're probably going to interview with everybody that says, come over. But if you have, let's say, a top five, these are really the companies that I admire and want to work with, and you start reaching out to people there and start talking to them, it's not just good for the company, it's also good for you. Of course the company has a high cost if you fall out during, let's say, the probation period, but for you the cost is just as high. Let's say you relocated from one country to another to work for this company because they accepted your application, but then two months in you don't like it, and you're just miserable the whole day. It is a tremendous cost for you as well. Maybe you had a family you had to relocate, a new school for the kids, and two months in you're there, completely disillusioned, and you don't know what to do. Again, doing the work beforehand also saves you big time there, not just the company but also the person applying. For me it was a great tip, and it's something that I applied at the beginning of my career very fruitfully.

TIM: Right, so it's almost de-risking it as well for candidates. You might have your top five target companies, but you might not really know enough to tell whether or not they are the right places for you to work. By doing your research, you might discover that two of them are not for you; the other three might be.

GIO: Yeah, it's de-risking and it increases your chances at the same time, because, again, I don't think any hiring manager, upon reading a cover letter where the candidate says they already had a chat or a coffee with one of their future colleagues and can cite real examples that are not on the website, that are not out there, would see that as anything but a big plus. Even if two candidates have the same skills, if your motivation is higher, then I know that retention is going to be easier, because you're motivated, you're intrinsically interested in the company, in the mission they have, in the values and the culture that we have. So it just creates a better match.

TIM: Yeah, I've got a real-life example of this for candidates that might be helpful. A recent client we are working with has a B2C product, an app, a calling app basically, and a no-brainer for candidates before their first interview with this client was to download the app, play with the signup process, do a trial call to someone they know, and come into that first call with some level of feedback or understanding of the product: I used it, I really liked these things, have you thought of adding this feature, I experienced this bug, or whatever. Some genuine evidence of some level of interest will always resonate, because it's authentic; you really can't fake that.

GIO: Oh, absolutely. I also hear about candidates that download the yearly investor report, go through it, and cite it during the interview, and again, it's not just doing the work, it's also about being genuinely interested in what kind of company you're getting yourself into. It's something I do out of my own interest, not as part of a job application: I also read these investor reports, and just seeing how the company writes, the clarity of thought they show there, maybe you could say it's a bit fuzzy, right? What clarity of thought means for me is different from what it means for you, but again it creates a match between myself and the company, which is super valuable.

TIM: As a final question, Gio, I'm wondering if there's anyone in the data or hiring space that you've personally learned a lot from that you'd like to give a shout-out to.

GIO: Absolutely, it's my ex-colleague Luke Fatima; I'll write the name down for you offline, I'm using the Dutch pronunciation. He came into our company some six or seven years ago, and he is the one that basically brought automation into the recruitment process, putting in all these guardrails to inform ourselves and prevent ourselves from having a biased approach to how we evaluate candidates, and a couple of years ago he also won the best corporate recruiter of the year award here in the Netherlands. He's really somebody who, to me, had this vision of how to approach recruitment the right way: using automation where we can, using all this tooling where we can, to make our job easier and, at the end of the day, to give a better experience to candidates, making them ambassadors for the company even if they do not end up working for us. I think that is really underrated. If you get 100 CVs a day, maybe you don't care that they're ambassadors, but if you put in the work and you are thoughtful about it, having 100 ambassadors a day is a great asset. It has happened to us more than once that people were rejected during the hiring process, then went to work somewhere else, and there they needed the services that we provide, and they said: I know it didn't work out for us, but the experience was so good and so thoughtful, and you did such a great job of selling the company to me even though in the end there was no match, that now I want to work with you. And it didn't happen just once or twice; it happens continuously, so every month or so we get a request from somebody that was in our hiring process maybe five years ago; they remember for a long time, and we get this really fruitful collaboration afterwards. That's why I nominated Luke, because from him I learned that you have to show your human traits during the hiring process; you cannot just automate everything. Maybe an email is the only way you have to communicate feedback to a candidate if they don't pick up the phone and you cannot reach them, but that's the last resort. Try to talk to them, try to establish that connection, and make them, even though they didn't make it, proud of how far they got, with no hard feelings; it just would not have worked out.

TIM: Wonderful, Gio. It's been a great chat, and we'll have to leave it there, but really insightful thoughts from you, and again, thanks so much for your time today.

GIO: No, thanks to you, Tim, thanks for having me, and thanks to everyone for tuning in.