Alooba Objective Hiring

By Alooba

Episode 15
Lukas Streit on the Future of Data Roles and AI in Hiring: Skills, Automation, and Bias

Published on 11/18/2024
Host
Tim Freestone
Guest
Lukas Streit

In this episode of the Alooba Objective Hiring podcast, Tim interviews Lukas Streit, Head of Data & Analytics at DeepImmo.

In this podcast, Tim and Lukas explore the evolving landscape of data roles and the integration of AI into the hiring process. The conversation emphasizes the increasing automation of repetitive data tasks, the continuing need to contextualize data within business frameworks, and the shift in skill requirements towards softer skills. It also delves into the challenges of making hiring more data-driven, the impact of AI on CV screening and candidate selection, and the balance between technical expertise and soft skills in hiring. Personal anecdotes and case studies highlight practical experiences and challenges in optimizing hiring processes through data and AI.

Transcript

TIM: Lukas, great to start with a bit of future navel-gazing. AI is changing everything, it seems, in the world. What about for data roles? What do you imagine a data analyst, a data engineer, and a data scientist might be doing in three or five years' time?

LUKAS: Yeah, super interesting question, and it's of course difficult to predict fully accurately; I'm sure we'll have quite a few surprises going forward. In any case, in my opinion, a lot of things are getting automated. You can see it with AI, but you can also see it just with things like data integration platforms, tooling that enables you to get data together into one place without much of any manual labor still being required. What I think is that a lot of this manual or boilerplate work that data specialists do will get automated, and it will get easy to a point where we don't really need that high a degree of workforce for it anymore. But what I still think is super important, and will continue to be so, is putting the data into the actual business context and approaching the edge cases that you may have with data integrations.

You may be integrating with a very company-specific source system that has its own quirks and situations where you need to get accustomed to it, so yeah, on the other hand, for the analytics or more and more data product side, there's always—and I think there always will be—a huge angle and a huge value creation and contextualizing the data properly into the business context.

Yeah, while standardization and automation are possible and will continue to improve for charting data, integrating data, and giving you a simple overview of what's happening, I think the biggest challenge, which will remain the domain of data specialists, is the trickier part of getting it right: looking at the right data and communicating it properly to your stakeholders and management. That probably will not go away too soon.

On the other hand, you will probably benefit a lot from getting more efficient with these tools because you don't get hung up as much on the standard parts of the work.

TIM: Yeah, I feel like there's a lot of tedious bullshit in any data role, and it seems like a great opportunity for AI to take on some of that data wrangling. A lot of the day-to-day stuff in data engineering in particular ends up being just pushing data between different systems and making sure it's represented fairly.

And if we can automate that away, I think we can do something more interesting. Do you suspect, then, that the blend of technical versus soft skills the average person has is going to change? Because it sounds like maybe a lot of the day-to-day stuff might be automated away,

and you're talking about interfacing with the business, communication, just focusing on the right problem, and knowing what question to ask; those sound more like soft skills than technical skills. So could you imagine some of these roles just having a different mix of technical versus soft skills compared to now?

LUKAS: I can definitely see that being the case. Honestly, I think even today it is already the data specialists with the necessary soft skills and the necessary overview who put their own technical skillset to the best, most efficient application possible, the one that creates the most value. People who don't really understand the business context, or the goal of the solution they're implementing, to the necessary degree can still have the best technical skill set.

But in the end, it may still be that their work falls flat in the actual practical application. But yes, with more and more of these manual, bullshitty parts of the job falling away, I do see a certain shift towards soft skills happening, although I think there will always be a need for a certain technical and data savviness that you need to bring to the job to be in a position where you can validate, control, and challenge the outputs that any automation or AI tool might give you.

TIM: Yeah, I think that's a really important point because I feel like a lot of these large language models are exceptionally good at presenting something that, if you didn't have any knowledge of that area, would look great, but it's only once you start using it for things you actually are an expert in that you realize, No, this is superficially right, but that's wrong.

That's a lie. That's a misrepresentation. You forgot about this. And so the thought of AI at the moment being used en masse by people for things they're not experts in is scary, to be honest, and I feel like it's because it's so convincing. If it were obviously wrong to anyone, if it were just gibberish, then it would be discounted, but it's subtly wrong, which, yeah, is worrying.

Another thing I was just thinking about as we were discussing that: if, let's say, a data role in a few years' time is relatively more of a soft-skill, domain-expert kind of role, would we expect to see different types of people becoming data people? Would it be, You've worked in the finance bit of the business for a couple of years and know that domain very well, or, You've worked in, I don't know, the customer success bit of the business?

and now you've almost become a data analyst, powered by this AI that's doing the tedious stuff for you, and you apply your business lens or your soft-skill lens. Can you imagine the blend of people being a bit different?

LUKAS: I can see this happening, as I think it's quite clear that the hurdle to actually dealing and working with data lowers the more we benefit from these tools. It will most certainly be the case that the less effort it takes, the less reskilling is required for a specialist in another field to approach data.

We will be seeing this more and more often: people moving closer to the data domain. But this only works as long as the data is still sufficiently, superficially reasonable and there's a decent output that you can actually contextualize into a business context. What I see happening quite a lot, at least currently, is AI also falling flat at times in the preprocessing stages, be it data integration or data transformation, where there may be some specific situations, some architectural issues maybe, that you're facing historically, which just make the standard data processing, as you would love to apply it and as AI usually applies it as well, not work out, because you have to factor in certain issues that come with the data from the get-go.

And in these areas, I think it is, and will remain, necessary to have someone who has the technical background, the technical expertise, and, most importantly, experience in the field, experience that comes from having done it multiple times themselves and having stumbled many times over the various pitfalls you might encounter in specific situations, to actually be able to correct and adjust what automation might give us, so that even the superficial data that falls out at the end of your pipeline or processing journey is reasonable, to a degree where it's valid and correct as an output.

TIM: I feel like with data you can predict what's going to be wrong with it once you've had enough experience; you know what to look for. Okay, I'm going to check this. I'm going to look for outliers. I'm going to look for negative values where there shouldn't be negative values in the raw data. I'm going to look for:

There's been a transformation, and I reckon the decimal point's been removed, so it's going to be an integer. With enough experience, you just know what's probably going to be wrong, and you can test for it yourself. I wonder if AI will make the same mistakes, or if it will make a new set of mistakes, and we might almost have to develop a sense of gut feel for what AI is going to do wrong and make sure we explicitly check for that.

What do you think?
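The kind of gut-feel checks Tim describes here could, in principle, be written down as automated sanity assertions. A minimal sketch in Python; the function name, heuristics, and thresholds are all invented for illustration, not an established method:

```python
# Hypothetical sanity checks on a raw numeric column, mirroring the checks
# mentioned above: unexpected negatives, a silently dropped decimal point,
# and crude outlier detection. Thresholds are arbitrary examples.

def sanity_check(values, allow_negative=False, max_abs=1_000_000):
    """Return a list of human-readable issues found in a numeric column."""
    issues = []
    if not allow_negative and any(v < 0 for v in values):
        issues.append("negative values in a column that should be non-negative")
    # A dropped decimal point often shows up as all-integer values of
    # implausible magnitude (e.g. 1234 where 12.34 was meant).
    if all(float(v).is_integer() for v in values) and max(values) > max_abs:
        issues.append("all-integer values with implausible magnitude (lost decimal point?)")
    # Crude outlier check: flag values far outside the interquartile range.
    s = sorted(values)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    iqr = q3 - q1
    if iqr > 0 and any(v < q1 - 3 * iqr or v > q3 + 3 * iqr for v in values):
        issues.append("extreme outliers beyond 3x IQR")
    return issues
```

The same list of checks could then be pointed at AI-generated output as well as human pipelines; the open question in the conversation is whether AI failures would even be caught by this familiar checklist.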

LUKAS: No, that's a very interesting question. I guess when we think about it, I work with AI in a lot of situations where I have it put out standard code pieces for work that may be complex but easily standardized in a way where I trust AI to create it. Speaking from that experience, there's definitely a sense of false security that sometimes comes with the solutions that AI generates where you may not directly see on the surface that something actually went wrong.

But you have to dive quite deeply into the code, or, in the data context again, into the earlier processing steps of the data processing journey, to actually see where something went wrong. These situations definitely arise. I still think a lot of the standard framework of how you would approach a data set and validate that the end result is actually valid will still apply.

But you probably have to fight both the inner laziness that sometimes prevents one from actually validating and checking everything and, in addition, the sense of security that AI gives you when it produces a solution that at first glance looks easy, smooth, and superficially correct.

TIM: It's interesting. It's not only, as you say, the output that passes the initial smell check, especially if it's in an area you're not an expert in; at least for ChatGPT in particular, it's also the assertive confidence with which it gives its answers.

It's very rarely prevaricating, saying, I did my best and here's what I think, but there are a lot of shortcomings with this approach. It's just, here's the answer; let me know if you need more information. I wonder whether even forcing it to be a little less certain about itself would help, or maybe there's a certain parameter you can use to get a little more or less confidence in its answers.

I feel like I would appreciate that sometimes.

LUKAS: Yeah, I guess that's one of the things we appreciate most about colleagues who may be very skilled, have long-term experience, and whom we consider to be experts in their fields. What I value most is when one of these people actually talks about a solution, suggests an approach, and is very open about self-doubts: Will this actually work?

Is this consideration correct? I'm not fully sure about this part here; maybe we should dive deeper and check into this. It's super important that we actually do so, that we actually do validate and check these topics, and it's something we don't see a lot from, at least, the language models I have been using so far.

Yeah, I wonder how we got to the point where it delivers sometimes-wrong solutions with such a puffed-out chest.

TIM: I wonder if it's because it's trained on public data where you're not really wishy-washy; it's listened to a lot of politicians' blogs; it's listened to a lot of very confident-sounding people. I wonder if at some point it will be trained on, like, the internal dialogue of company employees, now that everything's recorded on Zoom and whatnot.

Maybe then it will add a layer of nuance to its discussion or its thought process that currently is lacking. Who knows?

LUKAS: Maybe that. Maybe, thinking about it, it's also the humans on the rating, validation, and scoring side of the model, because it's a natural thing that if you're an expert, or you're rating this AI model that is supposed to know everything, or at least as much as possible,

you instinctively rate an answer better if it comes out of a confident voice, at least as long as you don't check it in detail.

TIM: Yeah, there's a bias there, isn't there? Shifting gears a little bit now towards hiring: we'll talk about AI in hiring, AI being used for things like CV screening, and candidates now being interviewed by an AI bot. There's a lot of talk about candidates using ChatGPT to tailor CVs to job applications and to apply en masse.

There's a lot of talk about AI, but it's interesting because I feel like traditional hiring is not even at ground zero when it comes to being data-driven. Most companies flounder along, measuring almost nothing in the hiring process and basing a lot of decisions on very subjective criteria. So what I'd like to talk to you about is just data-driven hiring to begin with, before we even get to AI and all that fanciness. Have you ever experienced any projects, or worked on anything, to make hiring a bit more objective and data-driven in an organization?

LUKAS: So far, nothing in terms of AI interviews or AI CV screening as of yet, but regarding standardization and data-driven tracking, I've had quite an interesting project in the past. As a prelude: I think these kinds of standardizations and data-driven approaches work best when you have a situation that can be broken down into certain standard components or certain generic factors, which may be tricky if you're a small company searching for a very specific role. You don't really have an internal data track record; you don't really have any data to compare things with.

In these situations it's challenging, albeit probably still possible with some good ideas. But what we had the luxury of was a situation where we were searching for a large number of employees in what is basically a very similar, if not identical, role, the only differing factors being maybe a bit of seniority and the actual location.

This was in my previous company, where we were active in German personal finance in the broader sense, and we were looking for employees who would do financial advisory or sales, depending on how you would define it. The company had been acquired by a big investor and had very ambitious growth goals in mind, which gave us the target of hiring lots of good people quickly, but also the requirement that it shouldn't be too expensive.

And of course these people should stay on board for the long term. A very easy situation, of course. But yeah, what we took to heart there is that we tried to do this in a more data-driven way, and we had a good template, because our company was already very strong in online performance marketing and the lead business for actual end customers, where we already had very good, detailed, data-driven approaches to figure out which campaigns cost us how much and how much conversion we get.

Where did the customers come from? We had been tuning every detail there day to day for a big volume of leads. We went and took that approach and applied it to our actual hiring funnel for this kind of role, and it was actually quite a fun project.

We started integrating our hiring tooling to get in the data on what kind of sources we have for applications, be it headhunters, organic traffic onto a job posting page, recommendations people come from, or just job postings on specific sites that we put online.

These were the sources we looked at, and then in the next step we worked with the status of the applications from all these sources; we combined it with the costs we incurred with each of them, and we started basically measuring our application process as you would a lead funnel in performance marketing, going from the initial stages (first application screening, first interview, second interview, and so on) until you make an offer and ideally sign the person.

We looked at all these factors and conversion rates across the different funnels to actually optimize and improve the whole hiring process in the end. In theory, I think it's a very cool and very exciting idea; in practice, you of course have lots of challenges with this kind of approach, but it was definitely quite cool to see it in action and to see some of the initial insights that dropped out of it.

As it usually is with this kind of data exploration project, you don't need to dive into the most detailed investigation from the get-go; you usually want to just start combining the data and looking at it in a very rough way. You already see some low-hanging fruit in terms of improvements you can make, and such was the situation for us there as well.
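The funnel measurement Lukas describes (applications per source, stage-by-stage progression, and cost per signed hire) could be sketched roughly as below. The stage names, sources, and figures are invented for illustration; a real implementation would pull from the hiring tooling's API:

```python
# Hypothetical sketch of a hiring funnel analysis in the style of a
# performance-marketing lead funnel. All names and numbers are examples.

FUNNEL_STAGES = ["applied", "screened", "first_interview",
                 "second_interview", "offer", "signed"]

def funnel_metrics(candidates, source_costs):
    """candidates: list of (source, furthest_stage_reached) tuples.
    source_costs: dict mapping source name to total spend on that channel."""
    metrics = {}
    for source, cost in source_costs.items():
        # Count how many candidates from this source reached each stage.
        reached = {stage: 0 for stage in FUNNEL_STAGES}
        for cand_source, furthest in candidates:
            if cand_source != source:
                continue
            for stage in FUNNEL_STAGES[: FUNNEL_STAGES.index(furthest) + 1]:
                reached[stage] += 1
        signed = reached["signed"]
        metrics[source] = {
            "applications": reached["applied"],
            "signed": signed,
            "conversion": signed / reached["applied"] if reached["applied"] else 0.0,
            "cost_per_hire": cost / signed if signed else None,
        }
    return metrics
```

Comparing `conversion` and `cost_per_hire` across sources is exactly the kind of rough first look that surfaces the low-hanging fruit mentioned above, before any deeper investigation.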

TIM: That's awesome, and yeah, I love the analogy to marketing because it's just a funnel like anything else, so why couldn't you apply the same principles? I feel like this is one thing that's generally lacking in talent acquisition: as a very broad statement, not a lot of people end up there from scientific, data, or technical kinds of backgrounds, and so I feel like there's a monotonous thought process in how companies approach hiring.

For myself, it was a similar kind of thing: I was a data analyst; I hired all these data people, and that was when I saw, Jesus, this process is broken. It sounds like you used a similar approach, and with the marketers as well. Having people come from another domain or another business and look at how something's being done is very powerful, I think, because it gives you this completely different lens that you otherwise wouldn't have.

And so that's a really great example. You briefly touched on the fact that you have some challenges, so I would love to hear if you could share any of those challenges.

LUKAS: In this specific situation, as with so many, getting the data was quite a big one. At least with leads, with what I was used to, we usually had quite a uniform way of getting the data in: people click through some form on a website; they become a technical lead.

Everyone is similar in how they are represented and structured, and you have a nice starting point to work with the data. With hiring, which is of course a very intense people business and a very big relationship business, it's at times not quite as simple, because not everyone comes in via online channels.

You have lots of candidates who come in via personal recommendation, and then you have sales managers who know a guy in the town next door. They talk to him, they get a handshake, and meanwhile you're still looking at your dashboard and don't even see the application show up. This stuff is, of course, quite tricky to work with, and it can be very critical, because if you don't get it right, then the whole set of insights from the project won't really be worth a lot.

In that regard, my learning was that it takes at least as much effort to get the business connected and conformed to a certain standard way of taking in at least some rough data on applications and updating the status of an applicant as it takes to actually integrate the data sources and chart the figures, which loops back nicely, now that I talk about it, to the whole automation part as well. So that's one big example where you really need to align the business first to get the data in a quality and a form where you can actually work with it and gain value from the insights it might generate.

TIM: I'm interested in who was driving this project, because in theory it's a talent acquisition and HR project; that's their domain. How heavily were they involved? How did you work with them? Did you receive pushback from them? This is quite an atypical way for a talent team to think about things, at least in my experience,

and so I'd love to hear any insights around that.

LUKAS: We floated the idea once or twice at earlier points in time without going much into detail. In the end, from my perspective, it was to a degree just the clear need for a good process and for optimization in that area; it was a very challenging hiring task that the HR team had, and so they started looking for solutions, which is where the interfacing with my data team at the time came into place. We were then able to push the project forward together, with data leading on the technical implementation, ideas for presentation, and data standardization, and HR leading from the hiring side, bringing their know-how and also helping with the connection into other areas of the company.

For example, the sales managers, who somehow have to get the data into the tooling as well. So I would say it's been a collaborative, joint effort, driven in parts by the need for optimization and also just by the promising potential that data-based optimization might bring.

TIM: A private equity investor coming in and giving what sounded like an impossible remit: hiring lots of people very quickly, making sure they're very good, and making sure they don't get fired or quit early on.

In hiring, normally, there's a trade-off between speed, cost, and quality; those are normally your levers. But they wanted low cost, high quality, and high speed, so at that point I'm assuming the talent team or HR team is thinking, Shit, we have to figure something out here.

And is that kind of the origin? Is that the driving force for why this happened initially? Was it just that they had to do it, in a sense?

LUKAS: Yeah, in a sense. When you have certain goals, you may be put in a situation where you need to look for a solution, including in areas you might not have explored before, and that may take some effort in the short term, which also takes time away from other tasks in the meantime.

So it's a bit of a gamble in that regard: there was the need for improvement, and hence the idea for this experiment and solution.

TIM: So that's a great example of a data-driven way to do hiring and to take this marketing analytics lens and apply it to hiring. At the end of the day, it's just a funnel like anything else.

So that makes a lot of sense and is amazing. Most companies don't apply that level of rigor; to your point, maybe sometimes it's not as easy because of the data challenges you mentioned, and this approach, of course, would yield better returns if you're hiring a very consistent set of people again and again in high volume. Whereas if it's a random person here and there, maybe with a sample size of one, you can't really take the most data-driven approach. But generally, I feel like the way traditional hiring is done is quite subjective, and in data, or in any other space, typically the first stage of a process is screening with a CV.

For 99 percent of roles it's just a human, or maybe now an AI, looking at a CV and saying, Do I think this CV and this application are a good fit for this job description? Do you feel like that is a problem? Do you feel like companies overemphasize what's on paper, and are they missing out on maybe some really good talent?

LUKAS: It's, in my opinion, always very tricky to get it right when looking at the CV, especially nowadays, when I would assume lots of people are, of course, applying certain embellishments and formalizations with the help of ChatGPT. You definitely have to go further than the CV if you want to figure out the right potential candidates.

But at the same time, of course, handing in a CV and getting your application in is, speaking in the funnel analogy again, a good way to get people through this initial door of giving you their credentials and application, so that you can work with them and explore a bit deeper what their skill set and fit for the job actually look like.

TIM: As you mentioned, candidates are probably using ChatGPT to generate or at least do a first pass at a CV creation or maybe a CV optimization.

and that's certainly what we all hear: candidates have always lied on CVs. This was before AI existed; there's always been some level of embellishment, some level of exaggeration, or a complete lie. I wonder now if that's going to be exacerbated, because it's almost, Hey AI system, you create the CV for me.

If they haven't been the one to actually write the lie or the bullshit, you can see how they could disassociate from it a little bit. Or there's just the simplicity of, Oh, I asked AI to create a hundred different CVs for a hundred different roles; you're not really going to be able to check all of those in fine detail.

So I wonder whether we'll actually see an increase in the embellishment or kind of fakeness of CVs, so then what are we going to do? Because if the CVs are even more bullshit than they currently are, how are we going to actually screen them? Can you imagine what the future is going to look like?

and how are we going to select who to interview if the CV step is now even more impaired than it has been?

LUKAS: Yeah, it's quite an interesting challenge. You don't want to be interviewing 20 or 30 people every week to actually dig into these points. Of course, if you do the interview, it's usually quite easy to get down to the actual truth. But saving time beforehand is honestly one of my biggest pet peeves.

It has been over the last months and years already: how to properly narrow down the set of people to actually invite for an interview. We've played around a little bit in the past with sending out challenges, data challenges, sometimes specialized for engineering roles and sometimes more for the analytics side, very early in the process,

and we've also played with very quick, short screening interviews just to get that initial eye contact and connection, to get a gut-feel factor for the person, basically. But from all that, at least in my experience, we haven't found the Holy Grail yet in early optimization of who to actually invite.

TIM: Yeah, there's no silver bullet at the moment, and I feel like a lot of these decisions are just trade-offs, and depending on the current market conditions, I've noticed companies approaching it differently. As an example, take 2021, the peak of zero interest rates; here in Australia we had locked borders.

We didn't have migration for two years, which for a country that relies very heavily on skilled migrants was a big issue, and so for some roles, like software engineers and data scientists, there were like no unemployed software engineers in the country; everyone was employed because there was such a shortage.

LUKAS: You were afraid to scare people off.

TIM: Yes.

LUKAS: That's

TIM: So at that point, companies were saying, We can't do a test; we can't even do two interviews. It's pretty much a quick chat, then a job offer, because there was no one else to choose. Now the narrative is so different; it's, Oh, we're going to give this challenge or this test to a candidate because we want them to prove they're actually interested.

Which is just the supply and demand thing, so I guess you have to change your approach depending on the market, but it's interesting; like if you've been doing hiring long enough, you see these cycles go through, and you think, Oh, okay, I understand what's happening now.

LUKAS: Yeah, that's quite the interesting perspective, as scarcity definitely leaves you more open to compromise in the earlier screening stages. But in the end, for me, it's been that I never really felt rewarded when making these compromises early on or making hiring decisions under time pressure, especially back when the market was tighter on the supply side. My approach usually was to consciously plan a lot of time.

For hiring processes for these positions, I always like the analogy to apartment hunting here in Munich, where I'm based, though it's similar in a lot of other cities as well: you're in a pretty bad position if you need to find an apartment within one month; it gets super stressful, and you waste a lot of time,

and in the end you most likely get an apartment you won't really be happy with in the long term. But if you're in the lucky situation where you can take half a year and you don't have any time pressure, then you can really take it slow, be picky, be aware that the market is tight, and still count on the fact that, given enough time, you will find the right person.

And that's usually been my approach in the past as well, which I've been quite happy with in these market circumstances.

TIM: Yeah, 100%, and I feel like it's probably an expectations and mindset thing, as you mentioned: just knowing how much time this is going to take. Because if you go into it with a full day job and you're like, Oh, I have to squeeze in a couple of interviews each day, and then you've done eight of them in a week…

and you're like, Oh, I'm sick of these interviews; seven of them were terrible. You almost feel compelled to think, Oh, let's just close this role and hire this person. But that's such a trap, because if the wrong person joins, you're fucked; that's such a dreadful situation. And once you've been through it enough, if you've had a few bad, regretted hires, hopefully that has scarred you enough to go, Okay, we can't have that again, because it's dreadful for everyone.

The candidate is unhappy; they're getting performance-managed, or quitting, or fired, or something that's awful for them. They've just lost their job. And unless you're a very confrontational person, if you're their manager, having those conversations is also awful, especially if you don't have experience with it.

LUKAS: You bear responsibility for that hiring decision as well.

TIM: Exactly. It's partly on you, whatever percent that is—5%, 10%—it's your decision, and you hired them, and it didn't work out for whatever reason, and so if you've experienced that firsthand, anything you can do to avoid that is probably worth it.

LUKAS: and it's a win, right? Better for the candidates, better for you, better for the team, and the company

TIM: What about, focusing again on traditional hiring approaches: you've mentioned manual CV screening, which I feel could be improved with AI, because at least AI could do it automatically, basically in real time, and AI could do it in a consistent way that humans almost can't.

Again, I've been there, reading a hundred CVs; by the time you get to the hundredth CV, you are bored of reading CVs, and you don't want to see another CV. There's no way the hundredth candidate was treated the same as the first. Again, in theory, AI machines could give you that consistency.

So, I remember distinctly trying to hire a product analyst years ago and seeing his hobby section; it said he was a semi-professional footballer in Brazil. And we had a five-a-side team at our local university that just could never quite win the competition; we always fell just short in the grand final,

and I thought, Oh, amazing! This is the guy we need to join this team, and so I guarantee you he got elevated in my head above everyone else. He got an interview because he was a semi-professional football player in Brazil, which has, of course, nothing to do with his ability to do A/B testing and user behavior analytics.

So the fact that I saw that is just so unfair to all the other candidates who didn't get that boost because they didn't have that hobby, so I feel like AI could improve that step a lot. And then in traditional hiring you have interviews, which I feel most companies do in a very "Oh, let's do the pub test, get a vibe check" way.

Let's go for a coffee. It's a very gut-feel way of approaching it, and I think it is so open to bias. In your experience, have you seen any ways that companies are trying to reduce or minimize the bias, from either the screening step or the interview step?

LUKAS: Yeah, so far nothing too elaborate. In the end it's definitely the case that a lot of bias goes in; as you describe, you see the CV and your eyes drift to that section. Probably the first thing you noticed was the fact that he's a soccer player, and yeah, that sticks with you; it primes your opinion.

And at that moment you would need to travel back in time and not have seen this information at all. I think that's what AI might help us with: filtering out the important parts and shielding your view from what shouldn't really concern you as much as other topics. That's, of course, always a bit scary,

and it takes a lot of trust in the AI to actually feel comfortable. Not seeing the full picture is something that, to be honest, I would see myself struggling with, because I would always have this concern: am I not seeing something that I should be seeing? But the other thing is, once that bias is in, you need a check, right?

You need some external influence that confronts you with it and makes you reflect on it. I'm saying "external influence" because I think it could be either a person or the AI. If you have that and you reflect on it, then I feel comfortable saying that you will probably be able to form a more objective view.

In the past I always found it most helpful to get other colleagues into these interviews, ideally ones with different interests and different motives than you, especially people from other teams in your company: maybe someone from the marketing team if the role is focused on performance marketing or marketing data,

or maybe someone from HR if the goal is to do more HR projects with that candidate. Getting that person in and having that second view, at least from what I saw, already helped a lot: you get this different viewpoint, have the discussion afterwards, and realize, Ah, shit, I didn't think of this, and that's actually a very valid point.

Maybe you also didn't see some of the more positive attributes, right? Bias can go in both directions, and that other person might help open your eyes to that as well. But you can probably also try this out with ChatGPT or any language model, to get the kind of sparring session you would otherwise have with a colleague and challenge your own view and opinion of that profile and that CV a little bit.

TIM: We could get AI in on those conversations, because a lot of companies will do the interview online, so it could be recorded as a transcript. But they'll often then have, after the interview, either straight after or maybe at the end of the week, a kind of collaborative discussion about, say, the five candidates at a certain stage, with a lot of back and forth. That's a great area for it, because you're talking about the pros and cons of the candidates and trying to compare them across criteria, across different things.

I wonder if AI could almost listen to or read that transcript and give people feedback. A common one for us that I've seen a lot would be scope creep, or moving the goalposts on what you're actually looking for. It's so easy to forget what the criteria are,

and so you'll have feedback like, Oh yeah, they were really good, but they seemed a bit shy. Okay, maybe they were, but is this role actually meant to not be shy? Is that a criterion we're looking for? You get a lot of valid-sounding stuff that's perfectly reasonable but shouldn't really be considered.

And so I wonder if AI could almost hold us to account a little bit by being in on those conversations. There are so many companies now that would use those meeting recorders to take transcripts and do summaries like that. That could be a value add. What do you reckon?

LUKAS: I definitely think so. Of course, there are all kinds of discussions going on about bias in AI itself too, but I definitely think that for these purposes AI can be a great tool to bring objectivity in, to control your bias, and to recenter you on the original goalposts you set, because it's a computer; used correctly, it probably won't be primed by what it sees.

Yeah, that's definitely something that I think is going to be employed more and more often as we go forward.

TIM: Thinking about it now, I wonder if people would be more receptive to receiving a nudge from AI versus a human, because it depersonalizes it. It's, in a sense, a bit more objective; you're not starting a confrontation. Could we be more receptive to receiving feedback, as long as our trust in the AI gets to a point where we think it's authoritative? Because maybe it doesn't have that ego impact that it would if a human were telling you.

LUKAS: I definitely think so. I think this is actually a super interesting point, right? Because you don't have this interpersonal ego issue that comes in when you get actual critical, to-the-point feedback in a direction that might not be yours and that you might not be too comfortable with.

It can be taken to the extreme. I've heard of AIs that are used for some, I would say, soft psychological counseling and self-reflection at the end of the day, where you tell the AI, I did this and that and behaved like that, and you get some kind of sparring input on it. I've never tried it myself so far, but it's definitely something I'm curious to try out as well.

And I think it's maybe very valuable input because, of course, this kind of feedback and these kinds of pushes are oftentimes just what we need.

TIM: Yeah, I used ChatGPT recently for a Spanish lesson, where I asked it for feedback on what I was doing right and wrong, and I found it was pretty cutting. But it delivered the feedback in a friendly way, so I received and accepted it. So yeah, I wonder how it would go for a bit of cognitive behavioral therapy or hiring feedback.

I'll have to check that out. One other question for you: I hear both sides of this, but one side I hear is that hiring managers tend to favor soft skills. If I think of the people I've spoken to in the last three months, I'd say 80 percent would say, I now more heavily favor soft skills in a candidate.

I feel like technical skills are less important; maybe that's the development of AI making some of those tasks easier. Who knows? And they're almost a bit dismissive of the technical skills, almost implying that anyone can learn them and will just upskill magically, which I get,

and I feel like it's harder to change your personality and soft skills than your technical knowledge. But if I actually sat down and thought about how long it took me to be able to write SQL, even just that language, or to learn the art of data analysis or data wrangling, to know enough to not make horrible errors every day and miss glaring things that now I would always look for,

That is a craft and a trade in itself. Do you feel like sometimes hiring managers forget how long it took them to learn the actual trade of being a data person?

LUKAS: I think that's a very human and relatable thing. Once you work with a tool for years, you have your experience, and things start to feel easy, and at some point, you may start to project this onto how long it might take another person to actually onboard and to learn the ropes with certain technologies.

I think that this is a big factor, and I've seen myself prone to it oftentimes as well. It's definitely a factor for technical hiring managers with a technological background, who already bring that experience to the table.

It may also be a factor for non-technical managers who just don't have the background to understand the complexity of the technologies their employees need to deal with. And in my feeling it's been further heightened, or accelerated, in recent years by the scarcity in the market that we already discussed a little. In these situations it can become quite attractive, at least on a gut-feeling level, to not have to go through this super difficult, challenging, long-winded hiring process to find that one person who actually has the technical skills you're looking for, but instead to compromise a little on those, with the perspective that, yeah, I know this person is smart, is well spoken, has all the soft skills in place, so I'm confident they will learn the necessary skill set.

But from my experience that's also a fallacy you should be aware of as a hiring person, because in the end, to keep it short, it's more effort for everyone involved, and again you're most likely not left with the perfect fit for your position anymore.