In this episode of Alooba's Objective Hiring podcast, Tim interviews Shaun McGirr, Chief AI & Data Officer at DevOn, about the challenges and biases in traditional hiring practices. They explore the need for a scientific approach to hiring, emphasizing data-driven decisions over subjective judgments like gut feel and informal interviews. Shaun highlights the role of competition and regulatory pressure in driving more objective hiring practices, and the significant gap in many talent acquisition teams' understanding of the scientific method. The conversation touches on the potential of AI and metrics to improve the hiring process, while cautioning against the dehumanization of candidates. The episode concludes with reflections on the slow adoption of innovative hiring solutions in larger enterprises and the psychological barriers to embracing new technologies.
TIM: Shaun, welcome to the Objective Hiring Podcast. Great to have you!
SHAUN: Thank you, that's quite a spicy title even for the whole podcast, I have to say.
TIM: We're laying it out there, and I'm going to dive straight in because this is a theme that I love. I'll tell you straight off the bat that I feel the way a lot of hiring is traditionally done can be quite unfair and a little rife with bias; sometimes the best candidate doesn't get the job. And when reading the academic literature on what predicts job performance, I noticed something striking: things like unstructured interviews (your gut feel, vibe check, and pub test), years of experience, and age have very little predictive power, whereas things like boring old IQ tests, job skills tests, and structured interviews do predict on-the-job performance. Yet companies tend to favor the former, not the latter. Do you have any thoughts on why that might be the case?
SHAUN: I guess the first reason to ignore academic literature is that you haven't read it, so you don't know it exists; maybe you simply don't know that people study these very important things, like what leads to more effective hiring. My gut feeling, though, is more that people are time-pressured; they have comfortable grooves. Once upon a time, they were hired based on a water-cooler conversation, and they think they can do that at scale. Obviously, you do want people you can work with and chat with, but as a scientist by background myself, I'd ask: what's the simplest explanation? People do the same thing tomorrow that they did yesterday, and if their hiring was never very objective, they're just going to continue that way unless there's some kind of really strong push, probably from outside themselves, to do something different. That's my initial take.
TIM: Yeah, and I wonder whether that outside influence would be competition, like if another company in their space is nailing it using some other method, then would they almost be forced to catch up, potentially?
SHAUN: You'd hope so. In the data space there is a supposed war for talent, but as someone who's been on both sides of it, and then seen other people be on both sides of it, there are also a lot of talented people who fail to get hired into jobs I know they would be good for. When we think about what changes deep behavior, like how people hire, it is regulatory change, significant competitive pressure, or the company making a move to do things a bit differently. I've been on the positive side of that a couple of jobs ago, where the job hierarchies, the career progressions, and therefore the hiring methodology for the data roles were taken directly from the software roles, half of which was fine and half of which was not that great. Over a period of time we were able to lobby for restructuring the career progressions, which naturally led to more objective hiring.
TIM: I want to throw one hypothesis at you. You mentioned you're from a science background; we're data geeks. We think of things from a data perspective and a maybe rational lens: let's measure this. What's the solution when we don't know what's happening? Let's find a metric, measure it, and try to see what's happening. The vast bulk of people who work in talent acquisition don't come from scientific, engineering, technical, or data backgrounds, so it's not an obvious lens for them to use to solve problems. Do you feel like there's almost a deficit of the scientific-method thought process in talent teams?
SHAUN: I think that's fair, and an implication could be that sometimes, as I've seen, talent teams and whole organizations fall very hard in love with some maybe pseudo-objective approaches. We've all done personality tests, right? Where you answer a hundred questions, and then you either get the job or you don't, and you don't know why. I don't want to call psychometric testing pseudoscience; I think it does have a place in some roles. But for people who don't have a scientific background, the hardest thing in teaching them that kind of thinking is getting them to realize that the point of science is doubt: doubting data, doubting yourself, doubting decisions. We don't prove hypotheses; we seek to disprove them. So if what you said about the backgrounds of most people in talent is true, and it probably is objectively true, then it probably does happen that when they feel under pressure to adopt objective measures, they go out and look at the ones that are sold in their language in their part of the market, and they maybe believe claims they don't have the tools or experience to interrogate. One of the worst implications I've seen is people using generative AI to write performance descriptions based on numerical ratings: we give the numerical rating first, and then we make up the description. That's not data-driven decision-making; it's decision-driven data-making, which used to be a kind of joke five years ago, before generative AI. Now you can literally generate any data you want to support any decision you've already made. So things can go really wrong if people seek the wrong objective measures, thinking they're doing a good and safe thing, reducing bias and increasing the diversity of the hiring pool, when many of the tools and approaches out there are just as likely to reinforce the things we don't want.
TIM: One thing I've noticed working for years at the intersection of talent and data is that, from my perspective, a lot of the endemic issues in hiring can be solved by introducing metrics, measuring things consistently and objectively, and scoring candidates along a predetermined rubric in the most objective way you can. Even if it's partly subjective, it's better than no data at all. And the reaction I often get from talent or HR people is, 'Oh, that almost seems dehumanizing. We're reducing people to a number. This is technology; I want to run a human process. I'm more about the people.' Shaun, what do you think about that reaction?
SHAUN: It's a fair reaction; it's a common reaction. Everyone making a decision about anything is using a model to do it, right? It's just that some of those models are biological. When we see something move in the forest and decide whether to stay put and fight or to run, that decision is out of our hands, and that holds all the way through to complex organizational decisions. There is no model-free, judgment-free decision; there is no human decision without bias, without fault. In the end, if you're hiring for one role and you have more than one applicant, you're making a binary decision about each of those candidates, with a number of factors feeding into it. The value of trying to codify some of that with data is that you can interrogate your own biases, play back how the decision was made, and try to correct things you might be doing wrong. So even if you want to use a lot of human judgment, which is always required in any decision-making process, wouldn't it be better to have data available to you? I don't think anyone has forgotten, hopefully, the last 10 years of headlines about predictive hiring algorithms, although every time I talk to people at an enterprise level, they say, 'HR and talent acquisition take a long time; let's automate that.' So maybe that is what drives the natural reaction to want to keep the human element. But if we keep the human element in those talent acquisition decisions, we have to be honest with ourselves that we are using data in some way to inform them; we just don't know it, we can't recover it after the fact, and we can't compare one decision-making process to another. So I'm absolutely against cookie-cutter formulas and scorings being the deciding factor, but I do think, especially in data land, if you're hiring a data analyst who needs to be a certain amount good at SQL, that can and should be tested, ideally without humans watching you do it. I'm not a big fan of live coding; just as a personality thing, it freaks me out, and I remember having to do those exercises; if there'd been another way to do it, that would still have given valuable information. I also think it makes sense to break down the parts of the decision. If you have two great candidates who have passed every objective measure, and you need to make a decision that's going to be more subjective and more human, fine. No one, and no regulator, is going to put you in trouble for that, but you want to be making that end decision out of the best possible set of candidates. You don't want to be at that point when actually one of the people is much less good at the core things they must be able to do in the job, right? So if we think about the whole funnel and disaggregate the decision-making process, the mix of human judgment, machine judgment, data we can put in a database, and data that lives out there is going to be different at different stages. Once you do that, you can get a better overall process quite easily, I'd imagine.
TIM: I wonder whether, because that's so subjective and vague, you could get very different opinions from different people who've interviewed the same candidate along those criteria. Would it not be at least a step in the right direction to call it out explicitly? So you might have these 10 criteria you're looking for, and you've weighted them: SQL is worth 20, their statistics knowledge is worth 10, their communication is worth 20, and there's this remaining 20 for, I don't know, vibe check, likability, cultural fit, call it whatever you like. But at least it's on paper, and you're going to score people against it. No matter how subjectively you score them, at least it's written down; it's still another number. What do you think of that approach?
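To make Tim's suggestion concrete, a minimal sketch of the kind of predetermined, weighted rubric he describes might look like this. The criteria names, weights, and rating scale are hypothetical illustrations, not a standard discussed in the episode:

```python
# Hypothetical weighted rubric, after Tim's example; weights are illustrative only.
WEIGHTS = {
    "sql": 20,
    "statistics": 10,
    "communication": 20,
    "culture_fit": 20,  # the explicit "vibe check" bucket, on paper like everything else
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (each 0.0-1.0) into one comparable number."""
    return sum(weight * ratings.get(criterion, 0.0) for criterion, weight in WEIGHTS.items())

# Two interviewers scoring the same candidate can now be compared on identical terms:
print(weighted_score({"sql": 0.9, "statistics": 0.6, "communication": 0.8, "culture_fit": 0.5}))
```

Even when the individual ratings stay subjective, writing them down against fixed weights makes the disagreement between interviewers visible and discussable, which is the point Tim is making.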
SHAUN: I think that trying to shove all of that into one number feels like a bit of a road to ruin. In that situation, I would rather have the people with opinions about who we should go with make their argument about those intangible things, but still in terms of the things we have decided matter. That overall decision about who is the person, you know, if it truly is a dead heat... it's never actually a dead heat, right? You just haven't talked about it enough. So codifying that and putting it in a box, I think people will find ways to work around it or potentially abuse it. But I would agree completely with the base of your suggestion, which is that we need to have a good idea of who we are looking for. How many jobs get posted on a job portal without people actually knowing what they're looking for? If they never decided what they're looking for and what they want, how could they describe criteria? How could they weight them like that? So I think the drive to objectivity about the things that are pretty objective should also bring some rigor to everything else, I would say.
TIM: So the root cause is actually earlier in the process, in that the thinking about what you actually need to hire hasn't been done in the first place; therefore, how objective could the rest of the process be anyway?
SHAUN: Yeah, if I look at ads today, and I've listened to the experiences of friends and former colleagues, the progress, at least in data land, has been less than I would have hoped over the last five to ten years. You still see ads where the required number of years of experience in a given technology is more than that technology has been around for. There's also a bit of an arms race, probably on both sides, with people using ChatGPT to pad their CVs or write a nice cover letter, but also to write job descriptions without the expertise and judgment required. When I was hiring, I had a very good idea of what I was looking for based on who I already had in the team. The fact that I came from that background, I think that's cheating, right, relative to many other people. But I would say to the people hiring data people out there, given everything we've said about the disadvantages talent teams have hiring in this space, even compared to other technical roles: get involved. Don't complain about the quality of the candidates in the funnel if you've done nothing to coach the talent team on what to look for. And likewise, the talent team shouldn't accept their own stakeholders saying, 'Hire me these personas, and I'm not going to tell you what I'm looking for.' That's a hell of a mess to clean up later at final decision time and then have to explain in some other way.
TIM: about the fact that the way most companies do hiring is this weird split between the hiring team and talent acquisition, where often for larger companies, in particular, the talent team will be the ones who have written or reposted a job ad; they interact with their ATS applicant tracking system. They might do the initial phone call screen, the CV screen, and they just hospital pass it across sometimes to the hiring manager, and then their team does all the interviews and ultimately makes the decision. Maybe talent is its providing feedback to candidates like it's this weird melange of responsibilities. Is that part of the problem in itself? That it's not just fully the technical team doing all of it, and they're the experts, and it's not fully the HR team doing it because they can't because they're not experts in the things they're hiring for? It's just fundamentally flawed the way it's set up.
SHAUN: I think I've mostly been lucky with what you've described, in that I've had people who, whether they had the title or not, saw themselves as business partners more than funnel-fillers. Obviously, the discussion and final decisions were left up to me, my team, and my stakeholders. But for most data roles, where the requirements are a bit fluffy and fuzzy, people often don't know what they're looking for until they start seeing candidates, and then everything we've already discussed is potentially going to ruin decision-making towards the end of the process. The technical experts need help and coaching as well. I've certainly asked for and benefited from that from talent partners in the past, just as I have given frank feedback along the lines of 'I understand why this person passed your screen; now we know we should also ask about this.' Or, for more senior roles, it's hard to understand what people really want. They may apply to a job you're advertising with a sexy job title, but then they turn up on the call and actually want your boss's job, not to work for you. These are things that talent acquisition professionals, if they're listening and watching, can spot a mile away and save a lot of time. So on balance I've been pretty lucky: I have felt comfortable imposing my part of the requirements and have generally worked with people who actually partnered with me to find and fill some pretty tricky roles. People who think these are separate jobs are just not going to have a fun time towards the end of that process. There's an obvious reason for data people to take guidance from the talent experts on their part of the process. Likewise, talent teams, by investing a little bit, are not going to have to keep re-sourcing for positions, right? The core KPI they have is how long it takes to fill a position, and if you throw a bunch of bad candidates over the fence, you tie up the experts, they don't like it, they make a no decision, and you are left with really old job postings that you have to keep sourcing for when you'd rather move on. So there are reasons for both sides to move towards each other, and these are not volume roles, right? This is not staffing a huge event. These are roles that deserve a little bit of extra time to work together.
TIM: One other related thing I feel is a bit of an irony: when I hear about the hiring approach of the vast bulk of data leaders I speak to, imagining a spectrum from pure intuitive gut feel to pure numbers, I'd say they're 80 percent of the way towards the intuitive side. Yet in their day-to-day jobs they're running data teams, making data-driven decisions for product, marketing, sales, and ops, and evangelizing the use of data. But then they put on a different cap: 'My hiring cap is on now. We'll vibe it, let's go to the pub for a beer, and I'll just intuit my amazing gut-feel decision.' I don't understand that irony. Is it because it's a decision about people? Is that the difference? Because it's about people, we defer to our guts, or we don't have better data? What is it, do you think?
SHAUN: I think the universal thing here is people are often not so good at the thing that they preach other people should do.
TIM: Some projection going on
SHAUN: Yeah, basically. If you go to any company trying to sell you anything and ask how they themselves do the thing they're helping you improve, you will often find them a little bit wanting. I've noticed that everywhere in my career. If we zoom into your question, it's a bit of an expert-overconfidence thing, right? So one potential explanation is: 'I've already seen all the data I need, and my neural network has already been trained and optimized to make the very best decisions.' We already talked about another explanation: not really knowing exactly what they need to make a difference, especially in this economic cycle and the sort of sub-cycle of data teams never having enough budget or FTE to do the things the business says it wants. Everyone in the team is doing double duty, wearing multiple caps anyway, and then starting to mix this data together gets really challenging, and there aren't enough hybrid people out there to begin with. Probably the most contentious explanation I could reach for, though, is that many data leaders, even if they started as practitioners, are a bit blunt in their use of data. And I'll say something even more spicy: many data leaders do not come from a predictive background. They come from reporting, data warehousing, data platforms, data engineering, management information, everything up to but not beyond 'let's re-manufacture the data we've collected to make some predictions that can be wrong.' Predictions are wrong by definition, because they're predictions; if a prediction can't be wrong, it's not really a prediction. So if I put all those things together, there is a bit of self-selection: many data people are perfectionists. I don't think that's super contentious. They're very analytical, they love a bit of data, and they do all those things well to help their organization improve, but maybe they have been away from the tools for quite a long time and haven't felt that direct pressure to know what to look for. And especially over the past few years, with AI becoming very popular and many data leaders trying to find their place in a world where AI has taken some oxygen from the room, frankly, the data we have about the performance of our existing teams on the things we might want to measure more objectively is pretty poor. So most data leaders probably sense deep down that this is pretty shaky ground even with the data we have about our existing team, and if we project that forward there's quite a lot of risk, so it's safer to rely on expert judgment. That would be a very charitable soup to make out of the ingredients I put forward.
TIM: What about if I throw another one at you? I wonder whether there are just a lot of metrics that, at least hypothetically, could be tracked given enough time and effort, given we have the data; metrics that conceivably could exist but that most companies just don't track at all. A simple example I know one of our clients, Agoda, does really well: in their marketing team they hire at scale, interviewing thousands of people a year, so they've got enough data that this actually makes sense. They have interviewer analytics: a leaderboard that tracks KPIs, like whether you are doing enough interviews, because interviews should be shared around the team. A very simple metric. Another one is conversion rate: are you passing the right number of candidates through relative to the next gate? Imagine if you let everyone through and then all the people you let through get rejected; you've just been an ineffective filter. So even some of what seem like pretty basic metrics of a hiring funnel, which they do measure, give them immediate insights that other people are completely unaware of because they don't measure them at all.
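As a rough illustration of the interviewer analytics Tim describes, a minimal sketch of per-interviewer funnel metrics could look like the following. The record format, field names, and numbers are hypothetical, not Agoda's actual system:

```python
from collections import defaultdict

# Hypothetical interview records:
# (interviewer, did the candidate pass this stage?, did they then pass the NEXT gate?)
interviews = [
    ("alice", True, True),
    ("alice", True, False),
    ("bob", True, False),
    ("bob", False, None),
]

counts = defaultdict(lambda: {"held": 0, "passed": 0, "confirmed": 0})
for interviewer, passed, next_gate in interviews:
    c = counts[interviewer]
    c["held"] += 1
    if passed:
        c["passed"] += 1
        if next_gate:
            c["confirmed"] += 1

for name, c in counts.items():
    pass_rate = c["passed"] / c["held"]
    # How often this interviewer's "yes" survives the next gate; a very low
    # number suggests they are an ineffective filter, as Tim describes.
    conversion = c["confirmed"] / c["passed"] if c["passed"] else 0.0
    print(f"{name}: {c['held']} interviews, pass rate {pass_rate:.0%}, next-gate conversion {conversion:.0%}")
```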
SHAUN: That's basic, right? And it's relatively safe, because you're not measuring the candidates; you're measuring the employees. So if you were struggling, especially at scale, and wanting to bring some objectivity, I think a very small number of those metrics could take you a very long way in improving your process, simply by making visible how the current decisions are being made, or at least who's making which level of decision. I have to say that in the companies I've worked in, I haven't seen an ATS or system that presents those to you cleanly. There's always, and I don't want to be like the people I was just talking about, some kind of data quality issue that means you can't trust the dashboard. But certainly at that level of scale and rigor it makes complete sense, and those are objective things to measure, right? If you have joined up the funnel, they are easy, and I can see that working. My gut tells me, and you mentioned Agoda, you mentioned the marketing team, that much of our whole conversation today might be very boring to domain teams that do hire at scale. So another irony might be that it is potentially much easier to convince a non-data team or a business team of this data-driven approach than data teams themselves, because not many data teams and data leaders hire at even the scale you're talking about.
TIM: Yeah, the scale point is fair, because if you're hiring thousands of people a year, the ROI on drastically improving any bit of that funnel with some simple metrics is obviously there, whereas if you're hiring a few people a year, you can probably get by with a good-enough process. You don't really need to automate everything or have perfect structures.
SHAUN: The big challenge is when you're somewhere in the middle, building a relatively large team from scratch, for example, or replacing a relatively large number of people. Then you have an interesting decision to make: how much do you invest in thinking about it like a funnel, versus how much do you invest in using your prior historical data, versus how much do you invest in just getting better data on the candidates themselves, even without fully reengineering your process, by bringing to the committee decision, or whatever it is, some more data that comes from a different place for a different reason? The other thing that strikes me about marketing is that it is used to, and forced into, making decisions from very imperfect data. Marketing can't wait until the data is perfect before deciding how to change their marketing spend; their use of data is in the firing line, under the competitive pressure we talked about earlier, and they don't have time to be perfectionists about how they use any of their data. So it's pretty natural that they would take the same approach to the hiring funnel as well.
TIM: Speaking of competitive pressure: I love sports. Everything in my head is an analogy to sports or war; those are the only two analogies I have. There have been a lot of changes in sports recruitment. Moneyball is the famous movie and movement, more than 20 years ago now, in baseball; football, or soccer, has caught up in the last five or 10 years. Liverpool and Brighton are some of the EPL teams doing very data-driven recruitment, and it seems to have broken through now, such that other teams have gone, 'Oh my God, Brighton spent 1 million pounds on nine players, and they're all amazing. What the hell are they doing?' They're now competing at the top level. So it's caused that competitive change. But is there maybe some reason why that feedback mechanism won't really work in business? Maybe it's not as immediate: you don't have a points table; you don't have a relegation dogfight. It's just not as obvious.
SHAUN: It's really obvious between companies that compete directly, but when you go from the company overall to the data team, I don't think any data team is genuinely competing against any other data team, unless it's working for a company that sells data as a product or uses data to build a completely proprietary product of some kind, up to and including AI. Where they do compete is for talent. Other than that, the competitive signal through to the data team is very weak, in a way that it's stronger in software product development, right? If your software product development organization is weaker or slower and you are competing against someone else's software product, in the long run you will lose. That's the argument at the company I work for, DevOn. We provide consulting to help people improve their software teams, which sometimes includes their data teams, and we also provide whole teams, and our goal is that our customers improve their software product delivery to reach a state of high performance. A lot of our customers are in incredibly competitive markets; they are software-as-a-service companies whose customers have plenty of options. It's not an easy sell, but it's easier than the data version of that argument, which I can't even quite articulate myself in such a clean way, and if I can't draw that link, it's probably hard for most people. So that competitive pressure would have to come from within, right? It would have to be that the data leader, or their team, has a drive for excellence; it's not going to come from outside the walls of the company, except when data specifically is what the company sells, when it is the product or a crucial part of the product. Think about flight-tracking applications or accommodation booking: if the product serves an incredibly competitive market, the importance of data in it is obvious. But in my experience, those data people think of themselves as product people who specialize in data, not as data people, and that's how they attach to the business driver, which in turn attaches them directly to competitive pressure.
TIM: Right, and so now we're talking about such a subset of a subset that generalizing from a football team competing on recruitment to businesses is not really realistic, at least in the short term anyway.
SHAUN: Football's different, unfortunately. It'd be great if there were more of that competitive pressure, but football also has relatively clear success metrics.
TIM: We could have a lot of debate over that, actually: even once people come in and get hired, how are they being measured?
SHAUN: That's the other thing we've skirted around until now. A few of my answers have, you know, been predicated on collecting data about your current team that I don't recall anyone actually collecting. Most people looking to use data to improve decision-making are overlooking some kind of data that's right there; they're often investing a ton in other stuff while not using what they have in front of them, right? So one obvious thing: if you're starting to use more objective measures, do validate your current team against the measures, and vice versa. Why would you only apply them to potential newcomers? You'd be doing yourself a disservice if you didn't test them on your current team and get their feedback, and then use that to help you design the profile of the person you're actually trying to hire.
TIM: Absolutely. What about taking a step back for a second now for a bit of what they call navel-gazing, but looking into the future? AI is changing everything; data roles are changing. Are all roles changing, really? What do you imagine data analysts, data engineers, and data scientists will be doing in, I don't know, five years' time, if you can think that far into the future? Will some of these roles even exist? Will data engineering just be purely automated? What do you think is going to happen?
SHAUN: Five years is tricky. If you say 10 or 20, that horizon allows for much bigger changes, but it's a less interesting prediction, so let's go with five years. Some people in five years are going to be doing those jobs exactly as they do them today and did them five years ago, because a lot of people like the craft of hand-building a data pipeline, optimizing it, and tinkering with it, maybe for accuracy reasons, maybe for cost, maybe for laughs, I don't know. But data land, that's my catch-all phrase for all the roles you listed, is yet to go through the process software development went through, which created a lot of ugly things and some pretty awesome things along the way. Because data land is late to that party, and because a significant amount of data work is exploratory, we don't know what to build until we go out and start asking questions of the business, asking questions of the data, comparing the two, and saying, 'Oh, there's no way you can answer that question with the data we have, but here are the other questions you could ask, or we could go and get this other data.' That's a very human process, right? So I'm very skeptical about a lot of the question-answering stuff being automated. Even if AI can generate all the possible questions and all the possible answers, how do we know which are the meaningful questions to ask and answer? I don't see any AI company putting their hand up to be responsible for bad decisions taken on poorly recommended questions, even if the answers are perfect for the wrongly selected questions. So I think data engineering will look a little more like software development in five years; I'd like it to look even more like software development. There are a couple of reasonable impediments and some hard stuff, but I see a lot more traction of AI in software development as a craft than I do in data land. I think a lot of data people are a bit behind, maybe heads in the sand about how AI is going to change their day-to-day work, and are therefore missing some great opportunities for AI to help them do those human things at greater scale, with greater efficiency and accuracy: to help the analyst think of possibilities and ask themselves, the business, and the data questions they haven't thought of. So data engineering will look a bit more like software development. Data science, I think, will still be very exploratory and will hopefully take advantage of AI to make that exploration more robust and more valuable, helping people find an edge. The data analyst job will probably change the least. Ultimately, a lot of people with the job title of data analyst are hired as a company grows, when someone says, 'Oh, we need someone to make these reports for us,' and we're pretty far off that person knowing how to hire an AI to do that job instead. If anyone listening has a job that is literally writing boilerplate SQL, I would be a little bit afraid, because AI can do that. But most people who really care about the data work they do for their business are in it because they want to create business value, despite all the nasty things I've said about some of them today. They got into it for the right reasons, and those good reasons will be the last thing to be automated, in my view.
TIM: One other lens I was thinking about was maybe just not underestimating the inertia of massive enterprises.
SHAUN: Yeah.
TIM: systems
SHAUN: Yeah, and that inertia at the enterprise legacy-system level is a lot of what data teams end up having to cope with, because the job wasn't done in software land, in IT operations, in finance operations, in everything else. A lot of what lands on a data team's plate, be it in a domain or a central data team, is finishing jobs that other people didn't finish but that still need to be done: human data integration. So it'd be really smart for data teams with a lot of that work to lean into how AI can first take it from half their job to 10 percent of their job, and then to think about what they're going to do with the time they get back.
TIM: Yeah, and I guess part of this is definitely psychological, isn't it? Because if you've worked in a particular type of role for long enough... a software engineer is a perfect example. Any software engineer in the world is used to doing it in a pretty consistent way: I have a ticket that's maybe a bit vague; I need to unpack it a little, and then I'll start writing some code locally; it'll go onto dev; it'll be code-reviewed; it goes through that process. But we might be at a stage where several of those steps are completely automated. Maybe you should never be writing code from scratch; maybe you should be interacting with an LLM to write the code for you. And something as simple as that might be too profound a shift for many people to proactively take on themselves.
SHAUN: I'm seeing that in software land: the people most ready for that shift are the most senior people, who have written all the code they ever want to write in their lives and aren't learning or gaining anything by writing more, but who have the expertise to know what they want, how to clarify it, and how to quality-control it. When a data product is well enough defined to be a software product, the same thing should apply. It's just that with not many data products do we know what they're supposed to be when we start building them; we need to build them with code before we know what we're building, otherwise by the time we need to productionize them and turn them into something more like software, it's too late and too messy: we've given it to the stakeholder, and now they want it and won't accept change. All the more reason, I think, for data people to really take seriously the ability of AI to make some of those annoying, repetitive things much easier. Even if people aren't comfortable right now generating SQL, optimizing SQL, or making SQL more readable and more understandable to their teammates in the right company-approved AI tool, consider this: if you discover that lots of people are joining similar tables in different ways, which has been known to happen, it's pretty trivial to feed all of that into a large language model and ask what the consensus is on how we should make the code we write, which is not quite software, more readable and maintainable by each other. And if AI helps you simply agree as humans on some code standards, great: it hasn't automated anything; it's actually made possible something that was never going to happen otherwise, which is usually an easier sell than 'it's coming to write all your SQL queries.' 'I just speak into a microphone and an LLM generates SQL that gets people business insights'? I think that yields only the most trivial insights on only the most boring pre-prepared data, and I just don't think anyone actually works that way. We've had that for 10 years, right? A CEO has been able to ask a question and get a single number back for quite a long time; it's just not an actual use case for real human beings. The tough thing is: we, as a data team, have inherited these 5 enterprise data sources, and the people who can tell us how they were designed have, by and large, left the company without leaving behind any documentation. Meanwhile, we need to help people make decisions, maybe even for some of the same people who create that data in those systems, and they're not willing to wait for us to rediscover how it was all set up. If that's the core responsibility of a central data team, why would you turn down anything that can help you sort through the mess? And you really shouldn't be worried about the automation of that human work, because it is such messy work.
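As a sketch of the 'feed your team's SQL into a large language model and ask for consensus standards' idea Shaun describes, something like the following would do it. This assumes the OpenAI Python client; the directory path, model name, and prompt wording are all placeholders, not anything specified in the episode:

```python
# Gather the team's SQL and ask an LLM to propose consensus style standards.
from pathlib import Path
from openai import OpenAI

# Hypothetical location of the team's saved queries.
queries = [p.read_text() for p in Path("analytics/queries").glob("*.sql")]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You review SQL style across a data team."},
        {"role": "user", "content": (
            "These queries join similar tables in different ways. "
            "Summarize the consensus patterns and propose a short, "
            "human-readable style guide the team could agree on:\n\n"
            + "\n\n---\n\n".join(queries)
        )},
    ],
)
# The output is a draft standard for humans to debate and adopt, not automation.
print(response.choices[0].message.content)
```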
TIM: yeah and it's I feel like when technology is changing so quickly and if you work maybe not on the bleeding edge but somewhere near like you're up to date and you read a lot and you use a lot and you're using the latest version of chat GPT and you're checking out Claude probably it's worth appreciating that a lot of people aren't and the dysfunction of massive enterprises and huge government bureaucracies is staggering beyond belief, so don't underestimate how much work there is out there at the moment because of just that level of dysfunction that's apparent.
SHAUN: AI will bring a pretty deep transformation to lots of things, but it's going to be pretty slow until it's pretty quick. Many people are still looking to it for an immediate massive payoff, and they're disappointed when they get a nice 10 or 20 percent bump in one part of their job. I've seen some studies recently comparing the diffusion of innovations like electricity, the steam engine, and the internet; they take decades. Even if AI is going to be much faster than that, the magic AI we have now, which ChatGPT made popular, is only two years old and only good for a certain set of things. All the rest of the AI we need hasn't really been invented at that scale and still needs to be hand-built per use case, right? So there is no shortage of work to do, but I just wouldn't want people to think that in five years a data analyst who knows nothing about how to use a large language model will be in a good position in the job market.
TIM: I'm reminded of an anecdote I'd love to share, again to paint a picture of the level of dysfunction that can exist in larger organizations in particular, with problems that are nowhere near solved. Someone I know joined one of the big four banks in Melbourne a couple of years ago, literally only a couple of years ago, and was getting a tour of one of the office floors. They walked past a desk with no monitors on it, just a desktop machine underneath with an umbrella over it, and he asked, 'What the hell is this about?' The answer: 'Oh yeah, that machine is running some VBA code in an Excel spreadsheet that someone wrote 20 years ago. They've left, and the banking system or whatever is running off that machine, and the umbrella is there in case there's ever a fire and the sprinklers go off. This machine must live; otherwise, we're all doomed.' So that's out there. There are still a lot of problems to be solved.
SHAUN: Yeah, and the amount of work to safely untangle that thing is huge. You probably only want to do it with the help of AI, but most of what you'll have to do to turn off that machine, or to recover when it turns itself off, is convincing humans to trust you and convincing humans to tell you what they want, and I think some people believe AI is going to do that part of the job.
TIM: Shaun, one quick final question: is there anyone you'd describe as almost a hiring hero, someone you've learned a lot from in how they approach hiring, or who you think does hiring really well?
SHAUN: I won't name them, but I can definitely think of people who hired me in a way where they tested objective things about me but also really took the time to get to know me, which is not a very objective-sounding thing to do but is very important as a candidate, especially if you're going to be, say, the first person in a role at a company, or the first leader of that kind: those higher-stakes roles. It really matters that when you get to the right stage of the process there is a human connection and a human process, so both sides feel like they've had some kind of meeting of the minds. I don't know how much that contradicts what we said earlier; I don't think it does. I think it's a better way to state the value of the human work that happens in talent acquisition: making people comfortable enough to make a decision. Because after you extend an offer to someone, they still need to accept it, right? If we think about the final part of the funnel, there are ways you behave in the process that make a candidate feel like this is a place they want to work. And the flip side of that is getting back to people with real feedback in a timely manner, and the only way to do that at scale is to be data-driven. So I would say to the people who think they want it to be a purely human process: I hope you've got an ever-increasing budget, and I know you don't. Maybe look at the data you have about the process you're running, to give those hiring managers more time to get to know the people they're thinking about hiring. Everyone will be better off, but it needs to start with people making a small sacrifice away from what can be a very comfortable status quo.
TIM: Perfect, that's a great place to end. Shaun, thank you so much for joining us today; I really appreciate your insights.
SHAUN: Thanks for having me.