Alooba Objective Hiring

By Alooba

Episode 91
Jan Beitner on Hiring Blind and Cutting Through the Noise in Data Leadership and AI Recruitment

Published on 2/6/2025
Host
Tim Freestone
Guest
Jan Beitner

In this episode of the Alooba Objective Hiring podcast, Tim interviews Jan Beitner, Data & AI Director at Inflexion.

In this episode of Alooba’s Objective Hiring Show, Tim interviews Jan, Director for Data and AI at a private equity fund in Europe. They discuss the challenges companies face when hiring data professionals, particularly the importance of having a clear job description and understanding the value a data role adds to the business. Jan emphasizes adaptability as a crucial skill for data hires and shares insights on the interview process, common hiring pitfalls, and how leveraging AI could change recruitment. The conversation provides valuable advice for shaping effective data teams and avoiding disastrous hiring decisions.

Transcript

TIM: We are live on the Objective Hiring Show with Jan. Jan, welcome to the show. Thank you so much for joining us.

JAN: Thanks for having me.

TIM: It is our pleasure. I had a chat with you about a month ago and we spoke about a few different things, and I'm pumped to unpack some of those in today's conversation. Where I'd love to start, though, is if we could get a little bit of an introduction about you. Who is Jan? What are you up to at the moment? I'd love to contextualize our conversation a little bit.

JAN: Yeah. So I'm Director for Data and AI at a private equity fund in Europe. I'm sitting in London, and we operate in the mid-market. I'm looking after a portfolio of 50-ish companies and driving value with data. And that involves a lot of hiring and building teams. Sometimes it's more on the actual data side, sometimes it's more towards the AI side.

TIM: And so you're working with the portfolio companies of the fund. How intricately involved do you get in hiring? Do you give them strategic advice, like "here's the first person you should hire", and then they're the ones who figure out the team? Or do you actually get involved in interviewing and the mechanics of hiring?

JAN: Yeah, I do get involved. I'm getting very much involved with the first type of hire, and in most cases that's a more senior hire who will then build the team up. Because most companies often haven't really done this before, they don't really know who to hire, how to hire, and how to interview. So I'd be very involved in writing that job description together with the team there and then doing an interview. When it gets to additional members of the team, I'm leaving that to the portfolio company.

TIM: And so you're there in that critical first step of them getting their first, I don't know, head of data analytics or director of data science, that kind of role. Is it?

JAN: Exactly. That's mostly the type of role that I'm hiring for.

TIM: Yeah, that's so interesting, because I've spoken to a few data leaders recently and we were discussing this very conundrum, which is: for a lot of companies, maybe those who aren't necessarily VC funded, how do they go about getting their first senior hire? It's a circular problem, because the person who would hire that person is almost the person themselves, if that makes sense. So having someone external like you as an expert to come in and help guide that process must be very valuable for them. And if nothing else, it's going to prevent them making pretty disastrous errors by making the wrong hire. I imagine that's a lot of the value you're adding to the equation, do you think?

JAN: Yes. Getting the wrong person in is quite disastrous, obviously. And often, for those companies, and we're talking about mid-market companies where we're writing check sizes between, say, 40 and 400 million pounds, there's no one really with a big history of doing these things. They have two problems. One, they have to first figure out: what does this person actually do? Because it's easy to say "I want to do something with data" or "I want to do something with AI", but what does it actually mean? And how does it really drive value? So I think the most crucial bit is actually writing the proper job description, and that scope for the person and what they're driving. And often, before we do the hiring, we'd have contractors or a consultancy already doing a piece of work that then convinces everyone to say, oh, we really need that person, because now we understand why X and Y work and how this person will drive value. Now, once you're very clear on who you need, I think there's an element of a technical interview that is required, so I would often be doing that. But I think the job is 80 percent done when you know what you need.

TIM: That's such a great summary, actually, because I feel like, in my experience, with hiring processes that go downhill quickly, a lot of it stems from that initial spec being off: the right questions not being asked, fuzziness in the thinking about what exactly you need. That then inevitably leads to what I would call the moving-the-goalposts problem, where suddenly you get halfway through the process and some other interviewer comes in with their own ideas about what you're looking for, and you're all over the place, on the wrong page. So it's interesting to hear you back that up: once you've got the clear scope, then as long as you stick to it, you should be right most of the time.

JAN: 100 percent, I super agree. That's really important. And particularly if you then move around reporting lines and so on, that's really bad. You need to know: who holds the pen on this? Why are you really doing it, and why does it really drive value for you? If you just think you want to do something with data, that's a really bad reason to hire someone.

TIM: And thinking about it now, actually, this process is helping the person who ultimately gets placed there, because they're not getting into a poisoned-chalice scenario where they've got a vague role and someone expects them to do almost magic because it's AI. You've laid a path for them to have a chance of being successful as well.

JAN: Absolutely right. And in some ways that is my role: making sure that those things are successful and actually drive that value and really work. If I'm getting the wrong person in and have the wrong objective for them, that's also going to fall back on me. So it's really important to know upfront what you want. Because I think there's a lot of value in those hires, and even very large companies are actually really bad with data. But in the mid-market, often people don't really know how to get started. And so "what should we do and why are we doing it" is so important, and it really helps people, once they've landed, to say: oh yeah, actually, I know why I'm here. It's not just, oh, the CEO went to a conference and heard we need more data people.

TIM: I'm laughing because that's such a common scenario. I imagine one of the challenges is that you have quite a large number of portfolio companies. Unlike hiring into your own business, where you really understand it in intricate detail, down to a cultural level, it must be hard or impossible to get that depth of knowledge over each of your portfolio companies. So is it a case of knowing enough about them to advise? Is it also a case of really needing them to fill in the gaps on the cultural elements while you focus more on the technical skills needed for the role? How do you think about that?

JAN: Yeah. So the moment we acquire a company, in the first hundred days we do a fairly in-depth assessment. Among other things, we build something called a value creation plan, and then we also do an assessment of the current data capability. Because historically, and I think also looking forward, if you don't have your data in good shape and actually know what's going on, that can drive underperformance; but likewise, having your data in really good shape, we know, drives higher valuations. So really, in the first hundred days, we do quite an in-depth assessment: fleshing out the value creation plan for the entire company, and what is the current position on the data? And then there's a question about what we want to achieve with the data function. Do we need to make any changes? And so then we have the discussion with the management about who really drives this, and at that moment you make a decision. So it's quite an in-depth analysis of every place. Of course, what I can't do is the cultural fit and the chemistry of people, right? Because I'm not the line manager of those hires. You need the person who's going to hire them, and typically those people sit under the CFO in most cases. That person really needs to have the rapport and the chemistry. So I'm doing more of the technical and the why questions, and that's where I'm supporting, rather than actually hiring them directly for myself. And that's a different dynamic, to be honest.

TIM: I love also in your approach that basically part of the solution to this is: get more data. You're doing that research initially, and the data you're getting back is then informing the whole process, which is a nice loop. One thing you mentioned in passing, which struck a chord with me again because of some conversations I've been having recently, is that most of these people would end up reporting to the CFO. And I know from some guests I've had on recently, they felt as though sometimes, depending on who you're reporting to, you could be set up for more likelihood of success or failure as an analytics leader. Some of them had bad experiences in particular reporting into the finance function; they felt like their role was maybe not what it could have been. Do you have any views on who the top-level analytics leader in a business should be reporting to, in an ideal world? What are your views on where it should sit?

JAN: I think the analytics leader, so when we're talking about KPIs and dashboards and so on, has to go to the CFO. It's a bit different when we're talking about data science and actually talking about product development or something very specific, but otherwise the analytics leader has to go to the CFO. Why is that? I think there are a couple of reasons. Number one is that you need to report to someone who's cross-cutting across the entire business, and effectively there are only two candidates, right? The CFO and the CEO, really, and potentially the COO, someone who does a lot of operations, but then you might already be losing the sales part, and that's a really important bit. And the CEO would have too many reports; this is not a direct C-level report. So it has to go to the CFO, and a good CFO knows what they can get from this. I think if you are going with a CFO who is more of a glorified accountant, and that does exist, then I think you have a problem. But generally, good CFOs see themselves as driving the business, and they sit on the most important data to drive the business: the financial data. Working together with the CFO and being plugged in with the FP&A function is success criterion number one. Now, if you're very far below the CFO in the reporting hierarchy, that will be a real issue, because then, of course, you become someone in the finance function only, and you don't become the cross-cutting person. But if you report directly to the CFO, you can really do the cross-cutting across all the different departments and help them.

TIM: And for data science, because you made that distinction, it might go into a CTO kind of role instead, typically, or...?

JAN: It very much depends on the product, right? If it's a tech company and you're supporting the product, then yes, it will go into the CTO. But if you are, say, an insurer, and we also have that case, then it would go into the pricing function, because the key thing there is to price things better. So it very much depends on what the real purpose of it is. And maybe if it's more on the marketing side of things, you could very much go into marketing. It very much depends on what the real purpose of that person is. And with data science, you typically have a very big, clear use case for why you're doing this, not just a general idea.

TIM: The lens you have I find really interesting, because you have this cross-sectional view across a similar set of problems for many different companies in many industries, which puts you in a pretty unique position to be able to compare and contrast. I'd love to hear any insights you have around the most commonly recurring problems these businesses have when it comes to data and data teams. If there's anything that just comes up again and again that we could share with the audience, it would help them avoid these kinds of common issues you've seen.

JAN: Yeah. Going back to the reporting lines, one thing that sometimes turns up is this data person reporting into IT. I think if you have a very strong, commercial Chief Information Officer, you're going to be okay, but in most cases you're not going to be okay. And so you've really got to change that, because basically people get siloed away from the business. They're not plugged into the business, and people then start questioning the value of the function itself, because it doesn't really drive value when you're not plugged into the business. You are seen as that back office doing something very fancy, really interesting, but not really additive. So you become cost, right? And I think that's one thing that really happens from time to time. I think the other bit that can happen, and this is not necessarily your direct fault as a data leader, is that sometimes the finance function or the FP&A function is actually more of the issue, or you don't have a revenue function, or no one does pricing, and then you're getting responsibilities for things that you shouldn't have, like financial planning and budgeting. That happens if no one on the finance side really knows how to classify different financial things, say a down-sell. If you think about churn, for example, defining churn is quite difficult. You can lose a complete customer, that's easy; but the customer could also be using your product less, maybe you have usage-based pricing; or maybe the customer is dropping one of the products. So how do you define all those things? This is really important, and you need those leaders on the other side to help you define them. If that is missing, you have a big problem. And then I think the other bit that happens is that sometimes people reinvent the wheel. That's also something I see: basically people writing very custom pipelines to pull in data. That takes a lot of time, a lot of maintenance, a lot of focus, when you could just get it off the shelf. And again, you have a big team that is then not seen as driving commercial impact. So I think those are the things that most commonly go wrong. There's also something around bigger organizations, where you need to drive it across basically the entire business, and sometimes people become a bit more passive than proactive. So I would always encourage people to be very proactive and not fall into the worst-case scenario, which is a ticket-based system, right? Someone from the business submits a ticket: fix this dashboard. If you're in that dynamic, that's really not great. You need to become proactive and actually drive: why do you need it? What are you changing? And not be on the receiving end of things people should be able to do themselves. So lots of things go wrong, to be honest. It's not easy to get right, and it's very easy to get wrong.
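
To make Jan's churn point concrete, here is a minimal sketch, not from the episode, of the three definitions he distinguishes: full churn, a usage-based down-sell, and a product drop. All field names and the 20 percent down-sell threshold are illustrative assumptions.

```python
# Hypothetical sketch: three competing definitions of "churn".
from dataclasses import dataclass

@dataclass
class CustomerPeriod:
    customer_id: str
    revenue_prev: float      # revenue in the previous period
    revenue_curr: float      # revenue in the current period
    products_prev: set[str]  # products held in the previous period
    products_curr: set[str]  # products held in the current period

def full_churn(c: CustomerPeriod) -> bool:
    """The easy case: the customer is gone entirely."""
    return c.revenue_prev > 0 and c.revenue_curr == 0

def down_sell(c: CustomerPeriod, threshold: float = 0.2) -> bool:
    """Usage-based pricing: the customer spends materially less."""
    if c.revenue_prev == 0:
        return False
    return (c.revenue_prev - c.revenue_curr) / c.revenue_prev >= threshold

def product_drop(c: CustomerPeriod) -> bool:
    """The customer kept paying but dropped one of the products."""
    return bool(c.products_prev - c.products_curr)

c = CustomerPeriod("acme", 1000.0, 700.0, {"core", "addon"}, {"core"})
print(full_churn(c), down_sell(c), product_drop(c))  # False True True
```

The same customer can count as churned under one definition and retained under another, which is exactly why, as Jan says, finance and data leaders need to agree on the definitions together.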

TIM: You've mentioned one of those being overinvestment in customized solutions. At a company I worked at previously, we had an acronym, NBH, "not built here". It's almost this mentality that we were going to make everything ourselves, no matter what it was. It was a company full of software engineers, and it was like a bunch of nails to a bunch of hammers: we were going to code something, make something, even if it was, say, a new email system we didn't really need, because there's Gmail for free and it's amazing, but we're going to make it. I wonder if that mentality is especially dangerous at the moment, because AI is developing so quickly that there's going to be, I feel, this cohort of people who identify as "I am a coder, I am a statistician, I'm a data scientist", where suddenly the day-to-day things they do are going to drastically change. If, for example, in a year, instead of ever writing code from scratch, you prompt an LLM to do it and it does it with 100 percent accuracy, hypothetically, then associating yourself with the act of coding in your very identity is going to be, I think, a big problem. Is this something we're going to have to overthrow? Are we going to have to be particularly adaptable? Are we going to have to think about what exactly we're doing and who we are? Have you thought about this kind of challenge?

JAN: We talked about this reinventing the wheel; if you're already doing that a lot, it's never great, right? I like the Amazon motto saying build to differentiate, and I think that's always true, even now. So if you're doing that right now, I think you're fine. But if it's indeed boilerplate code that already exists 10,000 times out there, I think there's a risk. I think the big LLM providers are all trying to build two things. One is the ability to basically control software very generically. You've seen Operator, and what Anthropic has also built: just a program that can click and do some things.

TIM: Agentic. This would be so-called agentic AI.

JAN: Agentic, but even more in the sense that it just takes your computer screen and can do very generic things. "Agentic" alone might also mean you have to program it very specifically for a use case, but I think they're trying to do this: generally do the things that you and me can do on a computer. And I think the other use case they're going to go for is the software engineering side, right? The proper software engineering side. And if you're doing something that is done very often, say a website redesign, I would be worried about that one, because there's so much code out there on exactly this problem. It's not a very difficult problem, right? If you get a tiny bit wrong, it's not a big issue in most cases. So if you're in that niche, then you're also probably not super differentiated in what you're doing, and I would think about whether you can move into something that's more differentiated.

TIM: It's also a sense, excuse me, of adaptability. In fact, when you're consulting with the portfolio businesses and helping them decide who to hire, are you explicitly looking for an adaptable kind of person? I ask because this has come up in so many conversations I've had with people recently, where there's almost a feeling that the tech is changing so quickly that it would be almost pointless to say "let's make sure we've got someone with this particular set of tool experience", because the tools are just changing. So we need someone who's very adaptable and can learn a lot of new things quickly, especially if they're coming from another industry. Is that something you would specifically look for?

JAN: Yes, I think it's very important. With almost everyone you hire, you always have to think about the fact that they're probably going to replicate exactly what they've done in the past, if they had success with it, because it's just natural, right? You and me, we did something, and we're like, oh, it worked, let's do it again. And I think it's really important to look out for and think about who can actually go beyond that and say: maybe I'm not doing exactly the same thing again; maybe I'm going to do it differently this time. So I think being adaptable and wanting to learn is one of the most important things in this field. But this has always been true, right? If you look back five years, or ten, or think about what software engineering or data was 20 years ago: completely different, right? Anyone remember Hadoop? Not many people, I'm sure, and that's actually not a very old technology. It's moving very fast; it always has been, and I think this maybe just increases the pace a tiny bit. But it's not a complete game changer. If you haven't reinvented yourself over the years and you've been in software engineering for the last 15 years, then you've got a real problem. Docker didn't even exist 20 years ago.

TIM: Yes, the rate of change has already been so fast, and it feels like it's getting faster. I'm not sure if there's any particular way to measure this, but that's a lot of people's sense. What about when it comes to hiring itself? Do you think that's going to change almost as quickly as anything else? Will humans even be doing hiring in two years' time? Could it be completely automated by AI? What's your macro view on the use of AI in hiring?

JAN: I think it's a difficult question. Sometimes it's quite surprising, because for the positions I'm hiring for you wouldn't expect it, but I like to give people small assignments. And this is not an assignment in terms of "solve this exact problem"; it's more "what would you do in the first 100 days, and why?" It's more about how you would think about building a team and all those kinds of things. And literally someone sent me a message saying, oh yeah, I don't want to do this in PowerPoint with five slides, I want to send you a Word file, and it was definitely written by ChatGPT. Obviously we didn't hire that person, because what ChatGPT produced on this one was also not very good, actually. If it had been good, I think there would have been a legit reason, but okay. I think you need to hire in person to some extent; it's very difficult not to do that. There's a risk that otherwise you're just going to be talking, more or less, to the AI. I think we're not there yet, but it's also not very far off. There's quite a big risk around how you do skill assessment and so on. So I don't have an exact answer for how it's going to change, but I think there are changes coming down the line; I think that's obvious. How, that's a good question. We actually have a couple of portfolio companies that are recruiting companies, and if you think about differentiation, in many cases their value is not necessarily the recruiting itself; there's other value: the economic risk they take on, the process, the training of people. And that's really important, because it insulates them against certain changes in the future.

TIM: I was a bit dubious of AI for this use case for a while, but over maybe the last five or six months I've found myself increasingly bullish, at least on the state of the AI being good enough to solve a lot of the steps in the hiring process. That is, to be fair, distinct from companies adopting it, it not being ground down by privacy regulation, and not being ground down by recruitment legislation in some countries. But I feel like we're going to see a new wave of SaaS recruitment products that are almost AI-native emerge in the next few months, and it will be interesting to see, once any sizeable companies are actually using those, how they get on. Because at the moment, from what I've seen, it's just your big applicant tracking systems adding an AI feature here and there. It's adding a bit of AI dust to a system that's fundamentally built for humans to click shit. Whereas I feel like we're in a new world where there shouldn't be so many humans clicking stuff in what's just a glorified SQL database wrapper. I feel like we're maybe emerging into the next era, but we'll see. I feel like one of the big issues with hiring, let me throw this at you, is: what's the success metric? I don't really have a way to model it. If someone stays in a company for 10 years, tick or cross? Someone gets promoted, tick or cross? What happens if someone leaves and gets promoted: is that good or bad? We don't really have a data set that's as obvious as the equivalent in sports, number of goals scored, number of assists made, that you're optimizing for. We don't have that for traditional roles. Is that part of the challenge, do you think, of maybe automating the whole thing?

JAN: It's definitely one of the issues. I think another issue is probably what we talked about earlier on: knowing why you're actually hiring and what exactly you're hiring for. Sometimes this happens in the conversations, right? Some candidates where you say, okay, maybe we have to re-pivot. It shouldn't happen, but it sometimes happens. And if you don't really have that exact idea of what you want, I think that's very difficult. AI is not necessarily going to solve that for you, because it would first have to diagnose what is actually wrong in your company, what you actually want to change, where you actually see the added value. So this piece of what exactly you want, you still have to figure out. But generally I'm very excited for AI to do a lot more screening, particularly the very basic stuff that is completely obvious from CVs or very short conversations, where there's zero bias involved anyway. I think that's something that can also get past the regulators. When it gets to actual hiring decisions and actually doing interviews, I think that will be very difficult to do just with AI. And then you've got the same problem again: what if you were interviewing another AI on the other side, right?

TIM: Yeah, that is going to be the challenge. And I feel like it will probably be implemented from the top of the funnel down, just because the top of the funnel has the most candidates; it's the earliest in the process, the biggest cost. A no-brainer, I think, personally, for making it less biased. Because at the moment, when you look at some of the stats around CV screening, when organizations have done these experiments where they basically tried to see what kind of biases there are against people from different backgrounds based on their name, the results are pretty catastrophic. One in Australia from a couple of years ago had tens of thousands of CVs in three buckets. The buckets were the same as each other except for the names on the CVs: the first bucket had an Anglo-Saxon first and last name, the second bucket an Anglo first name and Chinese last name, and the third bucket a Chinese first and last name. They then applied to thousands of different roles around the country, different industries, different seniorities, and controlled for many variables in the experiment. And then they measured the rate at which those buckets of CVs got a callback or not. For the first bucket it was something like a 12 percent callback rate; for the third bucket, a 4 percent callback rate. And they'd done a pretty good job in their experiment design, so I don't think they missed any obvious factors that would explain that other than the name, which is shocking. So I hear a lot of fear mongering, or just legitimate fear, around AI and some legislation in countries like America preventing automated hiring decisions. But I think if the base case is already so flawed, it can't be that much worse than that, surely. And at least with an AI system, in theory, it could be auditable in a very clear way: it tracks everything that is done, the decisions that are made and why. We don't have any of that at the moment with human hiring. So I feel like there's a lot of upside.
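
As a rough illustration of the kind of experiment Tim describes, here is a minimal sketch that computes callback rates per name bucket and a two-proportion z-test. Only the 12 percent and 4 percent rates come from the episode; the application counts, the middle bucket's rate, and the bucket names are made-up assumptions.

```python
# Hypothetical CV-name-bias experiment: identical CVs, three name buckets.
from math import sqrt, erf

buckets = {
    "anglo_first_anglo_last": (120, 1000),     # (callbacks, applications)
    "anglo_first_chinese_last": (80, 1000),
    "chinese_first_chinese_last": (40, 1000),
}

for name, (k, n) in buckets.items():
    print(f"{name}: {k / n:.1%} callback rate")

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided z-test for a difference in callback rates."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                  # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # via normal CDF
    return z, p_value

z, p = two_proportion_z(120, 1000, 40, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")  # a 12% vs 4% gap is far from noise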

JAN: I think there's a lot of upside, and clear criteria, right? What people are afraid of is this machine that might make a decision that no one can really understand. But as you're saying, at the top of the funnel the criteria are very clear, right? And the existing systems, or the old systems, just search for keywords and stuff; that's really bad, because you have to describe things exactly as they're written. A large language model will be able to judge much better: did you actually say that or not, even if you used different words? So basically, at the top of the funnel, this is just hard selection and preselection on criteria that are very transparent, and that will in most cases be carried out by HR and maybe a first screening by the hiring manager, but that's it. And I think this is low-hanging fruit that we should get. Doing completely automated hiring is probably really difficult for lots of reasons, but I think the top of the funnel alone would already be worth a lot.

TIM: Or use recruiters.

JAN: Exactly.

TIM: So you outsource the pain and misery to them. I'm sure it costs you a pretty penny, though, to do that. So I feel like, in theory, AI could do it a bit more affordably. But we'll see. If I think about now, especially with a lot of the people I've spoken to working in very large companies where the talent acquisition function and the hiring team are completely disconnected, they will often report to me that the talent managers they get assigned, almost on a job-by-job basis, and have no relationship with, aren't doing a great job at the screening either. Because, to be fair to them, what on earth does the average talent acquisition person know about the very specifics of a data scientist versus a data engineer role? To them, it's just a bunch of data words. Trying to put myself in their shoes, it'd be like me trying to hire a lawyer and differentiate between a, I don't know, solicitor and a barrister and a whatever; I'd have no idea. I'd just have some words on a CV, and I'd probably try to guess based on what my mate told me, or now I'd ask ChatGPT or something, but really I'd have no clue. So surely, again, I feel like a large language model could do better than an untrained person trying to select CVs based on a job ad and maybe one conversation they've had with a hiring manager.

JAN: Yeah, but it's a general problem, right, if you think about any type of process automation or doing things with AI: you just don't have those things properly codified. If the talent team had a thousand-page document that exactly described all the different roles in detail and what exactly you needed, and that would probably have to change every year as technology progresses and your organization changes, but if you had that, you wouldn't have that problem, in some ways. Currently, how this works is probably that it's in people's heads; it's talking to your colleague about it. That knowledge is not organized and formalized, and that's a big barrier to AI being implemented, because you need to codify it. And it's the same thing not only for hiring but for any type of automation you want to do. So this is, I think, the biggest barrier we currently have, and people see that they need to fix it in their companies.

TIM: Yes. And as you say, without having defined what you're looking for and why, it doesn't matter if it's an AI trying to search for people or screen people, or a human doing it; you still have the same problem. I wonder if we can maybe automate away enough of the bullshit in hiring that it opens up enough time to then actually collate these data and have the conversations we need to have to figure out exactly what we need to hire. And then whatever is doing that hiring, whether it's an AI or a human or a combination, maybe there's a better chance of getting it right if we've actually synthesized some of these data and put them in the right system. For example, later in the funnel, at the interview stage, I can imagine an AI interview assistant that's taking notes, summarizing, and maybe pre-filling your evaluation criteria: "I reckon the candidate scored a seven out of ten on this question, and these are my reasons. What do you reckon?" And then you as an interviewer can do a last sanity check or something. I feel like that would save a bunch of time and open up extra time to do other things, maybe. So even if it's not taking care of the entire decision, if it's assisting the human, it could help a lot as well.

JAN: I think the interviews would get a lot more structured if you had that, particularly if you had an AI that would say: here's a list of all the questions you'd typically want to ask for this exact position, and you haven't asked these ones yet. Because if you interview a lot of different candidates, you have to be very clear that you ask people the same questions and make sure that you can differentiate candidates on those questions. Otherwise, what's the point of having that conversation? It's just going to be complete gut feel. You need to be able to compare. And I think an AI will be a good reminder for a lot of people to actually just follow good interviewing practices.
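
As one hypothetical shape for the assistant Jan sketches, here is a minimal Python example of a fixed per-role question set with a per-candidate scorecard and an "unasked questions" reminder. All names and questions are illustrative, not from the episode.

```python
# Hypothetical structured-interview scorecard: same questions per role,
# scored per candidate, with a reminder of what hasn't been asked yet.
from dataclasses import dataclass, field

QUESTIONS = [
    "Which libraries do you reach for, and why that one over its rivals?",
    "What would your first 100 days in this role look like?",
    "What about this job would you enjoy, and what wouldn't you?",
]

@dataclass
class Scorecard:
    candidate: str
    scores: dict[str, int] = field(default_factory=dict)  # question -> 1..10
    notes: dict[str, str] = field(default_factory=dict)

    def record(self, question: str, score: int, note: str) -> None:
        assert question in QUESTIONS, "stick to the agreed question set"
        self.scores[question] = score
        self.notes[question] = note

    def unasked(self) -> list[str]:
        """The reminder an AI assistant could surface mid-interview."""
        return [q for q in QUESTIONS if q not in self.scores]

card = Scorecard("candidate_a")
card.record(QUESTIONS[0], 7, "clear trade-off reasoning, named alternatives")
print(card.unasked())  # two questions still to cover
```

Fixing the question set up front is what makes candidate A comparable to candidate B, which is the whole point Jan makes about avoiding pure gut feel.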

TIM: Now, speaking of that, it's probably something we neglect a little bit: just figuring out what to ask the candidates. Is there almost a bit of an art, a bit of a science, to crafting a good interview question?

JAN: Yeah, I think so. Now I'm mostly interviewing more senior candidates, but I used to interview a lot of people when I worked for the Boston Consulting Group, probably over a hundred people for the same position, and those were already pre-screened. What I tried to do there is: you have your set of questions, you evolve them, and you think very hard about which ones differentiate and which ones don't. And I found that the fluffier the question, the less it differentiates. Also, you must not tell the candidate in any way what response you actually expect. There's a fine line here, obviously, because if you don't give any context, they can't respond properly; but if you give too much, you basically give away the answer. The questions I found the most useful are very concrete questions, first on tech, and also asking for opinions. I haven't come across anyone who's good who doesn't have strong opinions on certain tech. For quite junior hiring, for example, I'd ask: what are all the packages you're using, the libraries? Which one's your favorite, and tell me why; and tell me how it differentiates against another one. At that moment you see whether people have really thought about it. So that's the first kind I really like to use. And the other bit is asking about motivation. Why do people want that job? What would they like about the job? What would be the things that are not interesting? Because you find out a lot from what people say and what they don't say. If someone is not talking at all about how they're going to work together with other people, they're probably not crazy collaborative. So you've got to ask quite open questions, but also very concrete ones at the same time. And the motivational part is really important in my view. I've found lots of the fluffy questions very useless.

TIM: What's, in your head, an example of a fluffy question?

JAN: I think a fluffy question is just: oh yeah, what do you think about this general space of data? It's too wide a question, so you let people wander off wherever they want, and everyone's going to talk about something completely different. So how do you want to say candidate A did something better than candidate B? You can't, right? And I think a lot of the CV-specific questions can also be not very useful. Why did you choose that program over that program? If you think about education, honestly, people just got in. There weren't necessarily that many considerations people really had, and so they just make stuff up on the fly, I think.

TIM: And I imagine these learnings emerged after you'd already done some of these interviews, because then it almost becomes clear what a fluffy question is: if everyone's answering in a different way, and you're like, all of them are a 7 out of 10, so it doesn't help me at all. So this must have been an iterative process.

JAN: Yes, it was very iterative, right? You have to adjust your questions, but that can only happen if you do the interview for the same position again and again. Now I'm interviewing for the same position, more the head of data or the head of data science, again and again, and you then develop a different set of questions that really help you differentiate. Because I think after 30 minutes, in most cases, you know if the candidate is good or not.

TIM: Yeah, it reminds me also of doing technical interviews, and it's not really until you've asked the same question a bunch of times that you even get a sense of what a good, great, bad, or okay answer is. I remember when I've done batches of hiring and I've had a question from the last time I was hiring, six months ago, and I knew how much a candidate should be scoring on it. Then I've had this new batch of candidates, and after a few of them didn't reach that level, I'd be sitting there wondering: has something changed about this question? I'm expecting better. And then the next candidate would come in and bang, knock it out. Okay, cool, that's the level we're after; I wasn't losing my mind. But it's hard to know what that level is until you've asked it. Funnily enough, I was thinking ChatGPT might be okay at generating answers of a variety of qualities, like if you've got a fresh question you've never asked anyone. I've not tried it for that use case, but it might be interesting to generate some data.

JAN: Yeah, to be honest, I think if you started recording all those conversations, there's probably a lot of data you could get out of it. Because, as we talked about, it's not only about what the exact question is; it's about how much it opens the space for answers and how much it lets you differentiate people. And I think even an AI should be able to give you some kind of idea of whether you're asking the right questions, or whether they're not differentiating, particularly if you do a larger volume of interviews.

TIM: Is there anything else, thinking back to that experience? Because it's, again, quite a unique scenario to be interviewing for the same type of role again and again with a very high volume. Anything else that sticks out as a particularly interesting learning from interviewing those candidates? Anything that differentiated successful from unsuccessful candidates?

JAN: I think successful candidates, first, were explaining much more why they were doing things, and were taking that step back on the issues and saying: okay, let's think about what the bigger picture is here. Even if they had to make it up on the go, they would say: okay, these are the three things I would maybe be looking for in a package; this one does better on those dimensions, and so on, instead of trying to jump straight into the answer. I think that's what good candidates definitely do: taking that step back and explaining why they're doing different things. Because there's also a risk that they've thought about this problem for a really long time and they just tell me exactly the answer, but that's it, and then I don't understand why they gave me that answer. And to be honest, communication is so important that this would already be a problem. I think the second part is communication more widely. I always try to score people on communication generally: whether they're being concise, whether they're actually answering the question versus not answering the question, which still happens to me. So I think there are a couple of those dimensions you can look at where you can differentiate candidates quite easily.

TIM: Yeah, maybe they've watched too many politicians or something, deliberately evading any question they get asked. Yeah, that infuriates me, I've got to tell you, if I'm interviewing a candidate and they just don't answer what I've asked them. I can think of a few people I've worked with like that, who are kind of master deflectors, and I'll be sitting there 30 seconds later, when they've somehow pivoted the conversation somewhere else, like: hang on, you didn't answer my question at all. It's almost like a magic trick, with the ones who are really good at it. It's almost a manipulative way of communicating.

JAN: It can also be really useful, right? In a work environment, you might need that to some extent. But in an interview, I think you still have to give people the feeling that you answered the question, even if you didn't.

TIM: And I guess at some level, maybe in some roles, if you're a salesperson, there's maybe some sense in which being a bit less direct is a bit more normal: repositioning everything, always talking in terms of the benefits to the person you're speaking to. There's always some massaging of the truth in the answer you have to give. So maybe it's a bit role-dependent. But I have to say, in general, if I ask a candidate a question and they give me an answer to some other thing, that's going to grate on me pretty quickly.

JAN: I completely see that; yeah, you don't want that to happen. The other thing I'm just now thinking of, and maybe it didn't apply in that case, but it applies a lot in what I'm doing now, is that one really important piece of context when you're getting a candidate in is: who are the other people who are actually going to work there, and how are they going to work together? Because people are going to work in a team, and so they need to be complementary. So you've got to differentiate your questions a little bit and think about how other people in this company actually work, who already exists, and how they see the role. It has to work as a team; it can't just be you running the show completely alone. That's not the reality, and it matters much more for senior hires than for juniors, of course.

TIM: Yeah, so you can't think of them in isolation; you've got to think of them within the context of the team. It's a team sport, as they say. That makes complete sense. Jan, if you could ask our next guest one question about hiring, what would it be?

JAN: Oh, that's a really difficult one to think about. But I think it's the question you asked about: how does skill assessment change in the age of AI? I think that's a really intriguing one. I haven't figured it out; I would love for someone to figure it out and tell me how to do it differently, right? But I think that's one of the key questions on my mind.

TIM: Yeah, that is a good one, and certainly one I'd be interested in hearing the answer to as well. We'll level that at a future guest in the not-too-distant future. Jan, it's been a great conversation today; I've really enjoyed it. We've covered a lot of different ground and you've provided some great insights to the audience. So thank you so much for joining us.

JAN: Thanks so much, Tim, for having me.