Alooba Objective Hiring

By Alooba

Episode 93
Andrew Eichenbaum on AI, Human Connection and Imposter Syndrome in Data Science Hiring

Published on 2/10/2025
Host
Tim Freestone
Guest
Andrew Eichenbaum

In this episode of the Alooba Objective Hiring podcast, Tim interviews Andrew Eichenbaum, Sr. Director of Data Science at BioMADE.

In this episode of Alooba’s Objective Hiring Show, Tim interviews Andrew Eichenbaum, Senior Director of Data Science at BioMADE, and delves into the complexities of hiring data scientists. He offers insights into the challenges of hiring in the tech industry, including addressing imposter syndrome and the balance of logical and emotional reasons behind hiring decisions. Andrew emphasizes the importance of clear communication and having diverse teams. He also shares his thoughts on the impact of AI in hiring, including the potential and limitations of using LLMs and automated systems in the recruitment process. The discussion explores the role of intuition and structured evaluations in hiring, and the future possibility of AI avatars conducting initial interviews. This conversation provides valuable perspectives for both hiring managers and job seekers in the data science field.

Transcript

TIM: We are live on the Objective Hiring Show with Andrew. Andrew, thank you so much for joining us today.

ANDREW: Yep. Thank you for having me.

TIM: It is our pleasure. And I'd love to kick off by getting a bit of an introduction about yourself. Who is Andrew? Who are we speaking to today?

ANDREW: All right. First of all, thank you for having me on the show. It's really great to be here and talk about hiring; I haven't done this in a while, since before the pandemic, so it's good to know things have finally returned. All that being said, I'm Andrew Eichenbaum, senior director of data science here at BioMADE. There's a lot that goes on behind that, so let me give you a quick overview of what all that means. BioMADE is a nonprofit in the US; it is one of the MII institutes. These are a set of nonprofits predominantly funded by a U.S. government institution with the goal of promoting public-private partnerships and developing a certain area of science for applied uses. In the long run, what that means is we're here for bioindustrial manufacturing, where we are trying to genetically modify microbes, whether it be yeast or algae, so that as they're living their best life, they spit out things that are interesting to us. In the US, for a really long time, we have had ethanol in our gas tanks from corn; that's exactly this sort of process, and it's been going on for a long time. But there are other areas like beauty care, coatings, foods, and a whole range of possibilities, and we're here to help develop that and increase the US's ability to produce things in new and novel ways. This is the part where I always transition to myself. My background is actually in physics. I did my PhD in physics many years ago out at Stanford, then was sucked into Silicon Valley, worked at six startups over about 12 years, and was part of two successful teams. More recently, I was at larger organizations like Autodesk, with a sizable data team out there, and have been here at BioMADE for about the past two years.

TIM: Has your hard science, your physics background, helped you in this particular business now?

ANDREW: Yes, I've been dealing with data systems for most of my career. In physics, I came out of what's called the subatomic physics realm, and we were dealing with petabytes of data a day back in the late 90s. Part of my time as a graduate student was actually spent babysitting the clusters and learning how to manage all that, and that started me down the path of what today is thought of as big data. It's a little different when you're managing a cluster of 300-odd Pentium II 400 MHz machines versus the multiprocessor systems you can just rent from Amazon these days.

TIM: It's amazing to think of processing that volume of data back then; I didn't even know that was possible. But it was possible, just with a lot more difficulty, by the sounds of it.

ANDREW: High energy physics very much pushes the ends of the spectrum. The experiments out at CERN, which is a high-energy, subatomic physics facility that straddles the France-Switzerland border, actually manage about the same amount of data a day that Google does, to within an order of magnitude. And so it's very interesting how you get a group of people who are not CS people (there are CS people there, but mostly science people) who learn how to do the same sorts of things that our top-end computer scientists do on a daily basis.

TIM: And thinking now about hiring, because that's ultimately what we're focused on, I'd like to share a thought with you, which has always struck me as a little bit unusual. And that is that in hiring people, even analytics and data science leaders, who in their day job measure everything and extol the virtues of making data-driven decisions for product, for marketing, for sales, for whatever, often end up taking a very intuitive or gut-feel-based approach when hiring people. Why is that? Is that in any way ironic, do you think, or is it just based on the lack of data available to us to make decisions about people? What are your thoughts on that?

ANDREW: There are a number of pieces here that we can unpack, and I would start by treating this like any other data project that came across my desk. I'd ask: Why are you hiring? What are you hiring for? What are you hoping to accomplish with this hire? And what data do you have sitting in front of you? As much as that might sound like a give-me, why do people hire? This is truly important, so that we can define the question in a way that data might be able to help us answer. You deal with this on a daily basis, with your clients coming in and asking for help hiring these positions. What do they tell you about why they need to hire, or what they're hiring for?

TIM: Yeah, if I just think about it now, sometimes the answers are a bit more emotional, sometimes a bit more logical; it depends on how much I dig into their answer. They might say, Oh, we've got a position to fill, we've got a vacancy. That's a superficial, almost level-one answer. I find that the more you dig, the more it comes back to something about them and some headache they have personally: they're overworked, they need this bit of work done because someone else needs it, because they're getting pressured. There's almost an emotional element to filling the role. Sometimes it steers more logically: we've started up this new product, we now need analytics on it, this product is trying to make us money from this KPI, which is part of the company's vision. Sometimes it's this nice logical pattern, but I find it's rarely explained that way. Maybe it would be if we got into a lot of detail, but typically I find a combination of logical and almost emotional reasons.

ANDREW: And that's what hiring is, at least to me. There's a mixture of the logical: what goal are you trying to achieve by bringing this person on? What are you trying to have this person do, whether in the short term, the long term, or anywhere in between? But then there's the goal of just needing that person. There's that sort of, I need this person to take something off my shoulders, or I need this person to do Y or Z, pick something. It's a very internalized gut feel for why you need somebody else to help you, beyond the logical side of saying, I need them to do this set of things coming into the day. And that leads to hiring decisions with a bit of emotion and a bit of logic. But if we treat this like a data question, we can get deeper into that logic side and do things like set up rubrics for scoring, so we're fair with everybody, or have a consistent set of interviewers who know the topics they're covering. This is very much the way I organize my team when we're interviewing. I create a job spec with HR, float it through my team for recommendations or suggestions, we come to an agreement, and we post it. Then, usually, I'll take first-round interviews and pass candidates off to a team member, and if all of those go well, we bring them in-house. But throughout the whole process, it's: do they fit with the needs that we see in the next 6 to 12 months? What do they do beyond that, can they grow beyond that? And do they fit with the team? That whole question of fitting with the team very much falls down to a gut feeling, because it's hard to quantify, hard to understand. It's very much like a piece of music. Do you like this music? Yes or no? Why? That's a much deeper question. So how would you go about it? I've got my own ideas, but I'd love to hear how you would think about quantifying that sort of fit. How does that work when you're working with a whole range of possible customers?

TIM: Yeah. If I were doing this myself for our own hiring and trying to quantify fit, and this is a hypothetical because I can't say we've tried to measure this ourselves, I would think about gathering the personality traits of each person, using the Big Five personality model from psychology, because that would be my understanding of the correct way to categorize people's personalities. Then I'd do a comparison and think, Oh, this person coming in is an extreme extrovert, and we've got a team of introverts: how is that going to balance out? I'd start to go down that route. I'd also be thinking about good points of difference, where you get a portfolio of traits that seems good, but being wary that they may be so different that you could have a clash. And I don't know where the right line to draw is there. You could think of marrying someone who's on the opposite end of the spectrum from you; that might be too much. If you want to go out all the time and they want to stay in all the time, it could be a disaster. But maybe if you're just a little bit apart, you can almost pull each other in across the spectrum. So maybe there's something to be said for not having a chasm but still having some differences. Beyond the Big Five personality model, I'd get them to meet each other as well; maybe there's something to be said just for having some interactions between the various team members and the interviewee, to see how they interact with each other. But then I'd also realize that hiring is so inaccurate, no matter how good a job we do of it. Even the world's best sports teams have just spent 100 million pounds on a player, got them in on day one, and realized, oh my God, we've made a dreadful mistake. So I probably wouldn't overanalyze it either. What do you think?
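[Tim's hypothetical comparison could be sketched as a toy calculation. Everything here is illustrative: the trait names come from the Big Five model, but the `trait_gap` helper, the 0-to-1 scores, and the example numbers are all invented for the sketch; this is not a validated assessment instrument.]

```python
# Hypothetical sketch of Tim's idea: compare a candidate's Big Five profile
# to the team's average, trait by trait. Scores are made up (0.0-1.0 scale).
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def trait_gap(candidate, team):
    """Absolute distance between the candidate and the team mean, per trait."""
    gaps = {}
    for trait in BIG_FIVE:
        team_mean = sum(member[trait] for member in team) / len(team)
        gaps[trait] = abs(candidate[trait] - team_mean)
    return gaps

team = [
    {"openness": 0.7, "conscientiousness": 0.8, "extraversion": 0.2,
     "agreeableness": 0.6, "neuroticism": 0.4},
    {"openness": 0.6, "conscientiousness": 0.9, "extraversion": 0.3,
     "agreeableness": 0.7, "neuroticism": 0.3},
]
candidate = {"openness": 0.8, "conscientiousness": 0.7, "extraversion": 0.9,
             "agreeableness": 0.6, "neuroticism": 0.2}

gaps = trait_gap(candidate, team)
# A large gap on one trait (here extraversion: candidate 0.9 vs. team mean
# 0.25) flags a point of difference worth discussing, not a pass/fail score.
```

In the spirit of Tim's caveat about the "chasm", a gap like this would be a conversation starter rather than a cutoff.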

ANDREW: So you've actually touched on a number of points that I consider. I hire for data teams, whether it be data scientists, data engineers, analysts, or somewhere in between. I think of a triangle where the data scientist is at one point, the data engineer is at another, and the analyst is at a third, and everybody falls somewhere in between, with their own strengths and weaknesses. But making sure that your team has a diverse set of backgrounds is super important to me. As much as I like physicists on my team, I do want to make sure I have that computer scientist. I want to make sure I have some other engineers, and maybe someone from another realm of STEM, or even beyond. I've had people come from, say, oceanography whom I super wanted to hire. They had this great path of realizing they could keep adding data, and they became that sort of data scientist. They were actually studying plankton: all right, given the plankton, how do the whales affect that? How do the tides? They kept moving on and kept adding, and so they went from oceanography to a data science role over the course of grad school into the work world. But making sure you have that diversity, that diversity of thought, is day one. Or I shouldn't call it day one, but step one of where you go. I completely agree with you that hiring itself is imperfect. We look at people's resumes; we guess, or I shouldn't say we guess, we assume they're truthful. We take them at their word, so that if it's a good match to the set of skills we've marked down, we bring them to the first set of interviews and see how that matches.
And we watch for any red or yellow flags that come up. The real piece of it is saying, all right, they've met all those needs and they haven't put forward any red or yellow flags, so we pass them to the next level. But you're not actually answering: okay, are they a good fit for the team? And this is something I don't have an answer for beyond a gut feel. For me, just like you said, there's so much gut feel in whether somebody is going to fit well and do well with the team that you have in place. I wish there was something better that I could say besides having standards, and I'll reinforce the standards approach: for a position, we try to keep the interviewers at each stage the same, with each interviewer group, especially once candidates get in-house, covering separate but slightly overlapping areas, so that you get a good overall view of the skills of the person coming in versus the expectations that you have. And I'm a huge proponent of communication during the interviews, especially in-house. I like pairs, because if somebody on my team throws up a huge red flag, most likely that person is not going to the next level, no matter when it happens. So if you have people going in pairwise, you've got that offset: one says, Oh, I took that comment one way, and the other goes, I interpreted it completely differently; okay, let's continue on. That discussion allows people later in the interview cycle, especially in-house, to delve into it, see what they can get from the person's response, and really see if it's a red flag, a yellow flag, or just a misunderstanding. It's worked pretty well so far, but I can say it's not perfect, and I'm sure great candidates have been left behind because of it. Maybe they weren't having a good day.
It wasn't their best day, but they came in anyway because they felt obligated to, and they didn't pause. I would say that's something for candidates: if you're really out of it and you're not going to show your best on that day, don't come in. We understand life happens. Just communicate it as far in advance as possible, reschedule, apologize, and move on. That's what would happen once you're on the team anyway.

TIM: I think one of the fundamental issues with hiring is the small sample size problem. Even elaborate hiring processes might have, I don't know, five one-hour interviews. That's still not even one full working day of data to judge someone, so we have to extrapolate a lot from a very tiny sample. I can think of some hiring processes I've been involved with where a candidate said one sentence in the first interview that almost got them a label for the rest of the hiring process as being uncoachable. I can remember the exact example. They were asked to rank their SQL skills: how strong do you think your SQL skills are? And I think they'd said 10, which I wouldn't say if it were me, because I feel like that's going to raise a lack-of-humility red flag, but they said it anyway, and I don't think anyone dug in on it. It was just an answer they gave, and they moved on. But that little label of overconfidence or lack of coachability then carried with them through every stage of the interview process, even though it was one sentence mentioned once. There's such a small sample size issue. And because you can't have an infinitely long hiring process, it's a competitive process, you can't have nine interviews or three tests, you just have to take your best guess from the data you have.

ANDREW: I would say that was also a failure of the interviewer. If I heard somebody say they were a 10 out of 10 on anything, it's: so tell me why you're a 10. I've been doing SQL for the past 20 years, I commit regularly to the MySQL and Postgres repositories, I've been fixing bugs, I've been optimizing the engine. Okay, maybe they really are a 10 out of 10. Unless they said they were a 10 out of 10 on everything; all right, maybe not that. But there's a real issue in tech, especially with data scientists, around imposter syndrome, where people feel like they just aren't any good. And these are the cream of the crop when it comes to data science. These are really smart people, but they just don't feel like it. I'm not trying to put anybody down, but saying, oh yeah, I'm going to go consider this data set of a couple of terabytes and figure out if I can do something with it, that's not something everybody off the street can do. So I think there's a delicate balance. I see it especially when I coach my team members: they need to be positive about themselves and put themselves out there in the right light. But saying, oh, by the way, I did everything at this company and nobody else did anything, that's also not acceptable. Keep it to the things that you know you did and contributed to, and the value that came out of that. But don't underplay the things that you've done.

TIM: Is there something to be said, then, for us almost confusing confidence with competence in an interview? Because you could be a very jittery genius who just can't quite get your point across, and you could easily be cut down at that first interview, particularly, I'd say, the HR interview; I feel like they're going to be slightly more likely to confuse the two. What do you think?

ANDREW: That's on the interviewer. If the person is always going to be jittery, that could be a problem for the position. If they're supposed to be in frontline sales, that's a no-go. But if you've got a large enough data team, and they're in a back corner, not interacting with stakeholders much, just with the team, the team is going to get to know them. They'll get over their jitters. They'll have good conversations. It's the job of the hiring manager to explain to their HR partner and the rest of the team what's being looked for, to understand these concerns, and to make sure they're accounted for in the interview process. I've been on the opposite side, where at the beginning of the interview I am nervous. But you relax as you settle into it, because it's something you've done before; you understand the types of questions coming, you can get into a dialogue, and you become more comfortable. It's part of the interviewer's process to find out what the core of that potential hire is, to really see whether, two to four weeks in, they're going to start getting things done for the team and being an asset for the team and the organization in whatever way they're being hired for. If you're just going to say they were jittery in this, that, or the other thing, then all right. On the opposite side, I've had interviews where people go, Oh, I'm great at this. All right, here's a whiteboard, here's an example, go do it. And they just completely failed. If you've got 10 years of experience and you say you've done this repeatedly, you should be able to do it. It doesn't have to be perfect, but I should see the level of experience and competence that you're claiming on your resume.

TIM: Is part of the challenge with hiring also, then, that even the metrics we do have, let's say interview performance, test score, and CV rating, if you can quantify it that way, are not that correlated with the job? Because in an interview, you could easily just be very nervous if you're an introvert; you're meeting a stranger for the first time, and that's typically quite uncomfortable, even when, say, people get you to do live coding. I can remember in my last job, if someone came over and asked me to chop out some SQL in front of them, I think I'd lose the ability to use my fingers and be a whale trying to use a keyboard. But I could do it perfectly well, just not while someone was staring at me. So is that another challenge in hiring? Do you think we've got to find a way to almost calm the candidate down, to give them a chance to be their authentic self?

ANDREW: Yes, very much so. For in-house interviews with more junior candidates, I like to go first, or at least be in the first panel, so I can go over things with them, let them get some of the jitters out, and be a face they've seen before; don't start with deep probing questions at the beginning, as much as possible. With more senior people, I expect they're able to handle that, so I'll go towards the end, so others can see what they're like, because we expect them to handle themselves in one-on-one settings. Everybody has their favorite programming questions, and I run mine by almost all my data people. It's great, because with a good programming question, the most senior people I've had run through it give the 10-second answer, and when I say, Okay, now code it yourself, they code through it in two minutes. It doesn't have to be perfect; that's what IDEs are for. But I've had more junior people take 15 or 20 minutes, and I have to coach them through certain points, because the question is meant to have those sticking points. And it's whether they take the feedback, what their experience level is, how they talk through it, what assumptions they're making, and whether they can express that. A good question gives you an idea of their entire approach to an area of the work you expect them to be doing, whether it's a SQL question, a Python question, an algorithmic question, or an architecture question. It has to have all those pieces: enough rope for them to hang themselves, but also enough to see whether they reel themselves in quickly, ask the right questions, understand what's going on, and head towards the correct answer.

TIM: What do you think of candidates using an LLM in the interview itself, either online or in person? If we're going to move to this new world, is it fair game to use them, or would you rather just see their pure, un-cyber self?

ANDREW: If their use of an LLM improves the outcome of the interview, I'd say the interviewer has done a poor job with the interview. When people ask me about the interviews that I or my team do, I specifically say we will never ask you anything you can answer with Google in 30 seconds. What's the point? The point is to understand, to try to get to know the person in as broad a sense as possible in the shortest period of time. And I know that sounds horrible, in that we're trying to judge a relationship where they'll probably spend as much time with us as they will with their significant others or family, and, as you said, make that decision in less than a day. But there currently isn't anything better, or at least I've not experienced anything better, whether I'm on the hiring side or the candidate side. If a candidate said, I'm going to use an LLM as the basis for this, I'd want to know what their prompts are. All right, go for it. That hasn't happened to me yet; I've never had anybody open up and say, I want to Google this, or I want to put this into ChatGPT, see the response, and go from there. But there are people on my team who use it for various aspects of their day-to-day lives. I wouldn't take offense to it, but it still should not really affect the answers to the questions when they're asked properly.

TIM: I feel like we're in this weird in-between spot with LLMs, where they've come along and advanced very quickly, and we're using them but in some scenarios almost avoiding them. We don't quite know what to do with them, especially in the hiring process. I think we don't quite know what to do with the fact that the CV has probably been AI-generated, and then we're trying to compare the CV against the JD, and they look perfect. People are maybe doing skills tests with an LLM. What does that performance even mean if they haven't done it themselves? But if they've used a tool to make themselves more efficient, isn't that better? I feel like we're in a weird in-between spot where we haven't quite gotten to the next equilibrium, which might be that in two years' time nobody writes code from scratch anymore; it's just an LLM prompt, and no one would ever write SQL or Python from scratch. Then it would be clearer how to evaluate in that scenario. What do you think?

ANDREW: I can't believe we'll ever get to that point. Well, I can't say that. I can imagine, in the far future, when, let's say, we hit real sentience for a computer, that then, yes, it can write code. I'm sure with the things that are coming out now, for a junior developer task you can say: write me a routine; here are the inputs, here are the outputs; make it run in log(n) time or less within a given memory footprint; and have it spit something out. You could train an LLM to go through code bases, try to understand all that, and spit it back out, and if I can say something like that, I'm sure somebody is trying to do it. So can that happen? Yes. But does it replace the data architect who really understands the data model and puts the table structure together in your relational database? Does it replace the system architect who sees what's trying to happen: all the ways the product manager has said people need to access the system, the security concerns, the data model from whoever's building it, here's the model, here's how we're going to host it, putting all of those together into a working system? That's something we're not anywhere near with current systems, because there's a huge diversity in answers, all based off of best practices. You could ask an LLM: if I'm going to host on Amazon a web service that expects, whatever, 10,000 people an hour for a shopping website, how should I set it up? I'm sure it could spit out some general best practices, but if you're doing something new and different, you're just going to get those generalities again. And to bring that back to interviewing: if all the person is doing is spitting out generalities when you're asking detailed questions, or asking those next-level questions, they're not a fit.
Because either A, they're not listening, or B, they don't have the ability to answer those questions.

TIM: Okay. And so what we're basically saying is, in the current scenario, we've got these LLMs. Some candidates might be trying to use them in an interview. Some interviewers might be open to having them used. But we would want to craft interview questions and interview styles such that an LLM can't answer as well as a human. So it has to be a more cutting question. Is it about being more personalized to the individual, so they have to talk through their own experience, or solve an unseen problem that an LLM can't just solve?

ANDREW: It's about setting up a question that can be answered multiple ways, but where the assumptions you make about the question define the way you're going to answer it. So I will give you my standard interview question. I've given it out before; I don't think it's caught on, but I've been running it now for 16 years, and I've gotten all sorts of answers. Imagine you have a slot machine. Everybody's used to a slot machine: you've got a set of rollers, and they have all sorts of various symbols on them. What if that slot machine could be represented as a list of lists, a standard programming piece? The outer list contains each roller, and each inner list has that roller's symbols. I normally limit the symbols to characters, A through Z, to make it nice and simple. And I say: write me a routine that returns all of the distinct in-order combinations, effectively left to right, from that list of lists. I'll put an example up on a board so everybody gets it, and I say, go. I don't expect anything, I guess, is the best way to put it. I want to see what the person's thought process is, because they have to start asking questions. One of the first things that pops up is that the list of lists can be huge, anywhere from zero to thousands of rollers long, and each roller could have one to millions of entries. If the person doesn't ask about those conditions to begin with, their answer is going to be completely different. Even if you solve it correctly, it won't take care of all of those cases; if you don't ask the initial questions, you're not going to solve for a case that works well. And that's what I'm looking for. It's a simple, general question that anybody who's done data programming can get hold of and understand very quickly.

Even at just a collegiate level, the range of answers, and the interactions you need to have to answer the question in a great way, if not just satisfactorily, is wide and vast. Those are the sorts of questions you really need to ask people to understand how they think about a problem, how they approach it, and how they interact with you. Part of that is the hiring manager or interviewer asking good questions that let you understand the thought pattern of your hire. I'll go back to the piece I mentioned earlier: data science is a mixture of art and science. There's a bit of intuition, much like hiring, and we need to see how much of a spark that person has in that intuition area, and then how they build upon it with good practices and questions.
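[A sketch of one satisfactory answer to Andrew's puzzle. The function name, the generator approach, and the decision to sort each roller are choices made for this sketch, not anything specified in the episode; the point of the exercise is precisely that the conditions you ask about shape the solution.]

```python
from itertools import product

def slot_combinations(rollers):
    """Yield every distinct left-to-right combination, one symbol per roller.

    De-duplicating each roller up front keeps the output distinct, and
    yielding lazily matters because the stated conditions allow thousands
    of rollers with up to millions of symbols each: exactly the clarifying
    questions the puzzle is designed to provoke.
    """
    deduped = [sorted(set(roller)) for roller in rollers]
    for combo in product(*deduped):
        yield "".join(combo)

# Two rollers, with a duplicate 'A' on the second:
print(list(slot_combinations([["A", "B"], ["A", "A", "C"]])))
# → ['AA', 'AC', 'BA', 'BC']
```

Materializing the full list, as the example does, is fine for a whiteboard case but would be the wrong call at the scales Andrew describes, which is where the "did you ask about the conditions?" test bites.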

TIM: Is part of the quality of that question, then, the fact that it's got some ambiguity in it naturally, so that it's going to force candidates to ask questions just like they would for any other real problem, which is never perfectly defined?

ANDREW: Exactly. And there is only one answer. Like a good math problem, there is one answer; let's not get into weird cases. But there are a lot of ways to get to that answer. And it's a good way to see the diversity of talent coming across, and different thought patterns and ways of answering that maybe your other team members didn't think about, because that person will then add to the overall depth of your team in the way they understand and respond to questions that come to the team.

TIM: Now, have you ever had someone ask, What's a slot machine? Was that ever a clarifying question? Because that was my first thing: as soon as you said slot machine, I'm like, hang on, what's that again? A poker machine is what we call it in Australia, and apparently a fruit machine in Britain.

ANDREW: Yes, I have had to clarify a couple of times. But as soon as you start drawing it up on the whiteboard, people go, Ah, okay, it is that, but you're right. Sorry for the confusion with the international audience.

TIM: So based on what we've discussed so far, I'm imagining that you would personally be skeptical that recruitment could be entirely automated with AI or some similar tool over the next two to three years.

ANDREW: That would be correct, to put it simply; there just isn't the understanding yet. There are useful tools and systems that make the hiring process more efficient. Across the board, they help your HR person, they help your recruiter, and hell, they help people find the job in the first place, whether that's through a digital system or a word-of-mouth digital board. So they help make those matches. But answering the question of whether somebody is a good fit for the team? I think that's still a very long way off.

TIM: Yes. I'm just thinking back to some of the hiring processes I was involved with years ago as a candidate, like the last job I had, which I got about 10 years ago. If I think back now, I had a couple of interviews with the co-founders of the business, one of whom was not really involved in the company anymore. So really, I met one person in the company, who was then making that evaluation of me, and maybe vice versa. I worked with them a little bit once I joined but wasn't really in their team; they were running the business. So in that particular scenario there was a lack of data involved in making that decision, and there wasn't really any consideration of whether I would fit into a team. I was creating a new team anyway, so I can see some scenarios where that's less relevant. What I was also thinking about recently was, for AI versus human recruitment, do we need a good, honest accounting, on a whiteboard like the one you've got in the background there, of the pros and cons at each step? Because I feel like we have a lot of fear around AI and a lot of skepticism, which is justified, but I also think there are so many flaws in the way hiring is currently done that if you were to sit down and think about each of the steps, maybe there'd be a net aggregate improvement in using AI instead of humans. If we were being realistic, what do you think?

ANDREW: To go back to your point about people optimizing their CVs for a job posting: hiring has changed in the past, call it, 10 years. You need to get your CV past the CMS before it even gets seen by a human these days. And keyword optimization, layout, and format, all of those things the CMS systems try to grade a candidate on, are something you have to consider. Now, whether that resume is then a true reflection of how you feel about yourself, or just a match of your abilities to that job, that's a good question. In my last couple of job searches, I have not gone down that path. Granted, my last one was two years ago, but CMSs were still a big deal, and I did not edit my resume per position. Did that cause me not to be picked up by certain positions, even for that first-round interview? Probably. But I feel like my resume and my LinkedIn profile were a good representation of what I had done over the 20 or so years of my career, and they told the story of what I could do and what I'm looking to do. And I'm very happy with the position that it got me. So to ask how we trade off these pieces, we have to ask how we trade off pieces of the process. How much do we rely on those CMS systems to rank and order the candidates, or even knock candidates out because they only have six of the eight keywords we're looking for? I can say for myself, I believe hiring, especially for a data team, is one of the most important things you can do, because that defines your team. It's all human capital in developing these new and novel systems. So the HR team going through all of those resumes is hours of work, but it allows you to find those people, even the ones that have been pushed down by the CMS, who could still be good. They just don't represent themselves in the way the CMS expects to see it. So is this a CMS problem? Is this an HR problem? Is it an overall process problem?
Probably a little bit of each. But it's the best thing we've got for ourselves right now, and we just have to work with it until we can come up with something better.
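The "six of the eight keywords" filter Andrew describes can be sketched in a few lines. This is an illustrative toy, not the logic of any real CMS or ATS product; the function name, the sample resume text, and the 0.75 cutoff are all my own assumptions. It shows why this style of screening is so crude: a candidate whose wording doesn't literally contain the keywords gets pushed down regardless of what they actually did.

```python
def keyword_screen(resume_text, required_keywords, cutoff=0.75):
    """Score a resume by the fraction of required keywords it contains
    and pass/fail it against a cutoff (0.75 is six of eight keywords).

    Deliberately naive, like the filters described above: plain
    substring matching, no synonyms, no context, no sense of what
    the candidate accomplished.
    """
    text = resume_text.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    score = hits / len(required_keywords)
    return score, score >= cutoff

# Hypothetical posting requiring eight keywords; the resume hits four.
resume = "Built ETL pipelines in Python and SQL; led A/B testing."
required = ["python", "sql", "etl", "spark", "airflow",
            "a/b testing", "dashboards", "tensorflow"]
print(keyword_screen(resume, required))
# (0.5, False) -- a plausibly strong candidate, knocked out on keywords
```

A candidate who wrote "data warehousing" instead of "ETL" would score even lower, which is exactly the "pushed down by the CMS" failure mode the conversation is pointing at.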

TIM: I feel like one of the issues for candidates is, as you say, needing to optimize their profile for very different audiences: the ATS (or the CMS, as you call it), a recruiter, and then a hiring manager. Even those two people would be interviewing you and looking at your CV in very different ways. And so that's part of the difficulty. Also, because we've had this, let's say, human-based reviewing method for so long, that drives our need to format the CV in a particular way and keep it incredibly succinct. Trying to fit your entire history onto two pages is impossible; you have to get rid of a lot of stuff and just be very simple. Maybe if we had an AI-based screening method, there'd be no limitation on how much data it could ingest. Maybe you could have a 10-page resume. And then maybe, just by that fact alone, the screening process might be a bit more accurate, because there's more data to deal with. What do you think?

ANDREW: To start with a clarification: not all resumes are two pages. That's a more Western approach to how you present yourself before you hit the director or executive level, and once you have these large organizations putting together CVs, four or five pages is perfectly acceptable as long as it's well put together. But you do get resumes from other areas of the world that list a lot more. You'll get seven or eight pages of very small print talking about everything they've done. And it's not specifically to bypass the CMS or ATS system; it's cultural. It's the way they've been taught to create a resume. There's nothing wrong with it; it's just a different representation. So do the CMS systems manage that any better? I'd have to say no. I've looked at various parsers and the way they pull out pieces, and I'd say that overall it hasn't brought any candidates up higher in the ranking than a well-put-together two-pager would have. But it's cultural, and you have to work with it. Especially within the U.S., I see all different types of resumes, from the one- to two-pager to the seven- to eight-pager, color or black and white, simple and direct or flowery. And what you expect goes with the type of person you're hiring. I hire mainly for data, but if I'm hiring a visual designer, your resume should be more on point. There should be something special and memorable about it, whether it's the use of color or nonstandard but easily accessible and understandable patterns. You're a visual designer; this is your first calling card. Whereas if you simply use the straight bulleted points I would expect for a data scientist's resume, the text, the initial two-liner, the various history, and everything else that goes there, sure, that's fine; it goes with the position.

TIM: Candidates who at the moment might be applying through LinkedIn or job sites will often be facing incredibly stiff competition. Sometimes they're looking at a LinkedIn posting that might show a thousand applicants in a couple of days for a data scientist position. I've noticed this especially in the US. I imagine a lot of candidates will be starting to think, well, that's a lot of competition. Should I backchannel it? Should I try to get in touch with a hiring manager directly, or leverage my network to get a role? Is there a good way to do that? Is there a bad way to do that? What do you think?

ANDREW: So there are good ways, and there are bad ways. If you try to link up with me on LinkedIn and the first thing out of your mouth is a sales pitch, or "tell me about the open positions on your team," I will leave you on read, effectively, from then on out. But internal references are huge. If somebody I've worked with and respect, or somebody on my team, says, "Oh, I know we have this position up for a data scientist. I know this person over at my former company who's thinking about leaving. I've worked with them; they've been great," I will trust them, and I will immediately set up an interview, because I know my team and I trust my team. If they're saying it's somebody they want to work with again, I bring them in, because that bypasses the whole system, and thoughts about team fit and competence have already been removed from my head. I know I would never suggest somebody who's going to do a crappy job come work with me; if I'm not going to work with them again, I'm not going to suggest them at my new place of employment. For me it's that simple, but I'm sure there are more layers at other sorts of positions.

TIM: Yeah, if I were looking for a job, I would certainly try to leverage my current network. And I would be reluctant to start bombarding hiring managers on LinkedIn with a pitch slap, because that is not going to go down well at all. I'm reminded of a book I read years ago, one of the first self-help books I read, a book about networking called Dig Your Well Before You're Thirsty. It's a long-run play. You can't think of it transactionally, because that's never going to resonate, I don't think. Andrew, if you could ask our next guest one question, what question would that be?

ANDREW: This is a question that came about from billboards in San Francisco. If you've ever been in San Francisco, it's the one place where, along the major roadways, there are huge billboards, and they're all tech-related, from Snowflake to Databricks to whatever the latest startup is. And one of them was about a new system using avatars for interviewing. So digital avatars, not pretending to be human but friendly-looking, using some sort of conversational backend and recording to gather the information. So I guess instead of an HR interview, it's an avatar interview. And it got me thinking: how would you feel about being interviewed by an avatar? I know I had a very visceral reaction: if your team doesn't feel like my time is valuable enough to talk to me, why would I be talking with you? But I wanted to make sure that wasn't just a general feeling of mine, so I actually asked a couple of my team here at BioMADE, and when I did, it was literally written on their faces: what are they doing? For them, too, it would be the last interview they had with that company; they would play with it and then ignore the company from then on. It would be interesting to see whether this is the general view, or whether people in different realms feel perfectly happy being interviewed by an avatar, with the more standard questions rather than deep-diving, probing ones.

TIM: That's a great question, which I'll level at our next guest. And I wonder if it's the sort of thing that would be weird to begin with, but then once it's normalized, you just expect it like anything else.

ANDREW: I'd go back to my original question: why are you hiring, and what are you hiring for? If you're looking for somebody to join your team and spend that much time with you, you have to show from the start that there's interest. I'm not saying it's dating, but there are definitely corollaries. Especially in these times, when ghosting is so prevalent and so little effort is being shown: oh look, you get an interview, please send us times, okay, great, we've selected that, and then you get on and it's not even a real person. How does that make you feel as a candidate? How does it make you feel about that organization, especially if it's, say, a large multinational? Would you ever go back to them for a different position if they reached out to you later in your career? Would you recommend them to anybody else? Or would you say, no, this was a ridiculously bad experience, and I would never go back to them in any way, shape, or form? Those are parts of the question. People have to realize that interviews are two-way streets: the candidates are evaluating you just as much as you are evaluating them.

TIM: We will see what happens in the next couple of years, because I assume these products will be rolled out, and we'll start to get candidate feedback on them and see what the perception is. I'm personally interested to see how it plays out. And yes, I will level that question at our next guest and see what they say. Andrew, it's been a really great conversation today. We've covered a lot of ground, and I'm sure the audience feels a little bit wiser having listened to your insights and thoughts. So thank you so much for sharing them with us today.

ANDREW: Thank you, Tim. I enjoyed our conversation just as much.