Alooba Objective Hiring

By Alooba

Episode 51
Jeremy Adamson on Empathy Over Expertise: Rethinking Tech Team Success and Ethical AI in Hiring

Published on 12/17/2024
Host
Tim Freestone
Guest
Jeremy Adamson

In this episode of the Alooba Objective Hiring podcast, Tim interviews Jeremy Adamson, Author & AI Strategy Leader

In this episode of Alooba's Objective Hiring Show, Tim interviews Jeremy, author of two books, 'Minding the Machines' and 'Geeks with Empathy,' about the importance of empathy and authenticity in technology teams. Highlighting that the most successful teams are not necessarily the ones with the best tech but those that empathize with their stakeholders, Jeremy shares his hiring philosophy, emphasizing humanity over mere technical skills. The conversation also delves into the pitfalls of conventional and AI-driven hiring processes, ethical AI, and the broader implications for human dignity and society. This episode offers a rich dialogue full of insights into the delicate balance between people skills and technical expertise in the evolving tech landscape.

Transcript

TIM: Jeremy, welcome to the Objective Hiring podcast. Thank you so much for joining us.

JEREMY: No, thank you for having me. I'm excited to be here.

TIM: I'm excited to have you, particularly because you have authored two books in our space, which in itself is a pretty amazing achievement. Maybe we could start with a brief overview of these two books: what their themes are and what prompted you to write them.

JEREMY: Yeah, of course. My first book was Minding the Machines, and that's more of a textbook and a framework for building and leading high-performance data science and analytics teams. I released that several years ago and then followed it up with Geeks with Empathy, which came out coming up on a year ago now. The theme for that is that I was seeing that the most successful technology teams weren't really the ones with the best tech, the best data, or the most PhDs and credentials; it was the ones that could empathize with their stakeholders and people in their organization. So it's a bit of a proposal, a desire, and a proposition that we start to develop our empathy as much as we do our other technical skill sets.

TIM: And so in your own hiring, then, is that something that you would typically look for in your candidates, and if so, how do you evaluate it? Is that something you're covering in an interview, or...?

JEREMY: It absolutely is. As I'm scanning resumes and meeting people, I find it so important to get to know them as people and to look for signs of humanity in their applications. Actually, when you and I were discussing the podcast, I mentioned this: quite often I'll get 300 resumes, and for some reason every person in this field has been given the same MS Word format, so 300 of these resumes look exactly the same. They're very explicit about what they did in every role, so it's really hard to pull out humanity from those. I tend to look at whether they've worked for any NGOs or have any extracurriculars mentioned on there. Then, when we get into the interview, even before I start talking about technical skills and what have you, I try to feel them out as a person. At the end of it, if they're successful, we're going to be in the trenches together, and you need to be authentic; you need to know each other. There needs to be that mutual trust, and if you go into it wearing a mask, like a lot of us do in interviews, you could end up in an unhappy situation on both sides.

TIM: Oh, this is really interesting. I'd love to delve into this more, because I feel like interviews are often flawed because both parties are almost reluctant to be fully transparent and honest. The earlier in the hiring process both parties can just say, "This is the job; here are all the details you need to know: everything about the salary, who you're working with, what the metrics for success are" — if all that could be upfront, amazing. And if the candidate can say, "This is exactly who I am," not hiding behind a resume that's only 50% true, and not, as you say, putting on that mask in interviews and not really being authentic — I feel like the sooner we can do away with that, the better. How do you unlock that authenticity in them? Is it about making them comfortable? How do you approach that?

JEREMY: It's really hard. When you're in an interview, like you mentioned, both sides put on a bit of a persona, especially if you don't have a lot of confidence as an interviewer. I think most people reflect back on interviews where they've been on the other side of the table and then try to imitate them, because they think that's the way an interview is supposed to be, so they use very formal language, a lot of five-dollar words, and they sit up a little bit straighter. And on the other side of the table, your objective quite often is to get the offer, and then you can decide if you want it or not. It's not to understand the company; it's not to understand who you'd be working with. When you think about it, it's really quite a strange little dance, and I see it like dating. If your goal is just to get married, you're going to act like the person your boyfriend or girlfriend wants you to be, and you might end up getting married, but it's certainly not going to be a happy marriage. There's no trust; there's no authenticity, and you're going to have to play make-believe for the rest of your life. The way I try to get around it — with mixed success, I admit — is I try to start every interview with that. I tell them in all honesty: this is as much you figuring out if you want to work with us, so if you have any reservations, bring them up; you're not going to offend us. We need to go into this really knowing each other, so let's take off these fake personalities and have an actual conversation, not just go through the silly theatrics. If you can't get through that, you never know what you're getting into, and I've passed on several applicants in the past just because I always felt like they had that persona and I never got to know them. If you're going to be in the trenches with them, if you're going to put them in front of stakeholders, you need to know them. You need to know how they're going to behave, and you need to know that you can provide the right environment for them too. It's not all one-sided; it needs to be a good union for both sides.

TIM: Is it the case, then, that some of those candidates might be reluctant to show their true selves, maybe with good reason? Maybe in the past their true self hasn't been received well, or maybe they're working in an especially corporate environment where there's this level of vanity even once you're working there, and if they were really as honest as they are with their mates, then maybe they feel like, "Oh, this might not last very long." I don't know.

JEREMY: Yeah, it's a tough balance for sure. I'm not saying people should show up in a hoodie; you've got to treat the other party with the respect and the gravity the setting deserves. It is a professional setting. But at the same time, for myself, when I go into interviews — and I know it's supposedly not something you should bring up — I'll mention my kids. I'll mention how important family is to me. I'll mention that I like doing speaking engagements like this, so I need some flexibility on time — what's important to me, where I want to go in my career. If you wait until your first performance evaluation to bring that up, and then they're like, "This isn't what I signed up for," there are hard feelings all around, and you're never going to end up in a position where you feel valued, where there's good alignment. So it's worth, I think, the risk of a few rejections to land in a place where there is that alignment.

TIM: And so it sounds like candidates and hiring managers and anyone else involved in hiring just need to have a bit more acceptance of the discomfort of having an actually honest conversation. As you say, you might be five minutes into an interview when some information is shared that makes one side or the other realize: I shouldn't be here, or I shouldn't have this person here, because this is clearly not the right job. Should we just accept that? Shouldn't it be fine and normal? The sooner we can find that out, the better. As you say, it's better to find that out in interview one than in performance review one after three months.

JEREMY: Yeah, absolutely, and that's the most respectful thing to do. It respects everybody's time. And you need to disconnect your value as a human being, which is completely separate, from your ability to perform the job or work in the environment they're hiring for. Those are two completely separate things, and if you have the personal confidence to take that news, and the confidence to deliver it as well — "I don't think this is the right fit for me. I appreciate your time, and I had a great time getting to know you, but I think I should bow out here" — that's not at all rude; that's the most respectful thing you could do.

TIM: And you make a really interesting point there about maybe not associating one-for-one with our jobs — moving away from the "Hey, Tim, how are you? What do you do?" "Oh, I'm an X" framing. In your case, being a father and a husband and so on is maybe more important to your identity than the particular job you have at the moment. So maybe we need to reframe that as well.

JEREMY: I think so, yeah. Work plays a big part in people's lives, and if I can generalize, I think a lot of younger people entering the workforce tend to devalue work a little bit. They think of it as a bit of a necessary evil so that they can pursue things outside, and I think if you go into it with that mindset, you're never going to feel the sense of accomplishment or reward that comes from a job well done. So I think the creative application of our technical capabilities is very important for a sense of self-worth and self-actualization. I'm not saying we should completely get away from it, but it is a very small part of who we are. We're part of a community; we're part of a country; we have religious and political views. We have all of these other pieces of our personality and our networks, and they don't disappear when we go to work. We may tamp them down a little bit to go with the flow, but they're as much a part of our personality as whether we like AWS or Azure.

TIM: And would these be things that you would delve into in an interview? Because I feel like we're getting into gray territory there, aren't we, between understanding more about the person and finding things that we don't like about them — or inevitably, at some point, not liking something about them — if we start getting into some of those areas. What are your thoughts on that?

JEREMY: Yeah, at least in Canada, it's illegal to ask a lot of those questions. You can't go into it and say, "How many kids do you have? Do you have plans to get pregnant in the next year?" So I certainly wouldn't encourage anybody to do that. But when you're on the interviewee end, rather, if you volunteer that stuff just to share more about yourself, the other party certainly shouldn't use it as a way to exclude you. Anything that doesn't affect your availability or your ability to do the job, I would certainly encourage people to share, to get a sense of your humanity across.

TIM: You keep using this phrase "humanity," which I don't think I've ever heard in the hiring context, which is quite interesting. So when you're looking for the humanity of someone, what does that mean? Does it mean you just want to understand who they are as a person? Is that, in effect, what you're searching for?

JEREMY: It is partly that, because I want to be able to get along with them and trust them, and we need that camaraderie within the team. But at the same time, I think one of the overarching themes for data science and analytics over the next several years is a closer integration with the business. For a lot of our history, we have really enjoyed that persona of the intelligent outsider: "I don't need to know the business; my scrum master is going to tell me what to do on Monday. I'm great at my job; I have all of these credentials. The industry doesn't matter; I'm going to keep my head down and do my thing, and it's up to the product owner to figure this out." Especially in legacy organizations, that just doesn't fly anymore, and I think those types of people don't enjoy that way of working as much as they think they do. They like it because there's no psychological risk; it reaffirms whatever social abilities they might feel they have. But they never get that deeper sense of accomplishment that comes from helping a person rather than delivering your 15 story points for the week. So those are the types of people I'm really trying to avoid hiring, because more and more we're getting integrated with the business, and very often stakeholders can't articulate what they need from a technical perspective because they don't have the vocabulary. They'll ask for a marketing churn model, for example, and a data scientist can hear that, go into the back room for a month, and come back with something that's completely useless. But if they care about the individual, if they want to do a good job and create something that creates value for the business, they're going to take the time to sit down, talk it through, understand any operational constraints and what the person actually needs, and work with them to turn that into a project that makes sense. That needs to come intuitively; it needs to be part of their character, I think. And when people are yellow — the parachute colors, if I can use those — when people are yellow, they're social and communicative, and I find that comes a lot easier. That presents itself in resumes as working for NGOs, as talking about their extracurriculars and little-league coaching and all of those things that translate better to a more collaborative work environment rather than preferring to be a back-room function.

TIM: Sorry, you mentioned the yellow of the parachute, which I am not across. What is that one about?

JEREMY: Yeah. So that's one of the many dozens of personality assessments. Yellow is the more communicative, social type of people; then there's green, which is analytical; blue, which is orderliness; and I believe red is aggression and dominance. But I try to find people who are more on the yellow side these days.

TIM: You've painted a really interesting picture of some of these, let's say, geeks without empathy, and as you were describing these caricatures you've come across in your time, I could immediately think of two people you've just described to an absolute tee. Of course I won't name them, but I found that they almost wore it as a badge of honor how little they understood the business they were working in, despite having worked there for a decade and being one of the earliest employees. They were just obsessed with their particular, relatively small bit of the tech stack, and even within that they didn't know the other bits of the stack. They were so microscopically focused on being an unbelievable genius in one specific thing. I do wonder why that is. For the person I'm thinking of, it was probably a fear of failure in delving into other areas — a lack of comfort with being crap at something, which you inevitably will be when you do something new. They actually had a history of only ever being amazing at things — they were almost world-class at three or four things in their life — but maybe they weren't comfortable being mediocre. Is that part of what you've seen? Is there another element? Because I feel like you've almost done a psychological assessment of these people, and I'd be interested to hear more of your insights.

JEREMY: It's funny you mention that badge-of-honor term. It is fear of failure, I think, and in my experience — again, I don't want to generalize — it seems to be older people who fall into this: older millennials and young Gen Xers, and I'm part of that cohort as well. When I was younger, being a geek wasn't cool. If you knew Star Wars trivia, you weren't winning pitchers at trivia night; you were getting wedgies in the bathroom. Being a geek wasn't cool back then, and I think a lot of us developed a sort of defensiveness out of that: now we're making good coin, and we're the bosses — you used to pick on me, but now I'm the smart guy. I think there's trauma behind that, to be honest; there's deeper stuff that needs to be unpacked. We need to be okay with failing and admitting we don't know stuff. It's perfectly okay; we can't grow unless we fail. For that person you brought up, to go for coffee and admit they don't understand parts of the business — "I've been here 10 years; I'd like to know more about what you do" — that's an awful lot of vulnerability for someone who probably was bullied to some degree when they were young. So it's difficult to make that leap, I think, and I would put that to their leaders to try to encourage them and guide them down that path. It's a difficult one, but it's going to be much more rewarding than spending another decade in your microscopic area of expertise.

TIM: I was listening to a podcast recently from a neuroscientist talking about a bit of the brain that actually grows when we voluntarily do things that are good for us but that we don't want to do. For example, for me, going to the gym wouldn't really qualify, because I like going to the gym; I do it all the time, so that's fine. But if I were to say tomorrow, when it's 33 degrees, "I'm going to run a half marathon at lunchtime around Sydney," that would be awful. I don't want to do it. It's probably healthy for me to go on a nice long run, but I really don't want to, and apparently if we voluntarily subject ourselves to these little bits of short-run pain for longer-term gain, there's a bit of the brain that actually lights up and grows. So, yeah, I wonder what the geeks without empathy could do with a dose of that growth mindset to help get them out of the rut it sounds like they've found themselves in.

JEREMY: That's a good point. Yeah, it gets easier over time — and I'm a little resentful that you brought up that it's 33 degrees there when it's about minus 25 here. But no, absolutely. For me, even public speaking and doing things like podcasts would have absolutely terrified me in my youth. It doesn't come naturally to most people, but over time, as you get accustomed to it, you grow to like it; you get that dopamine hit. I think anybody, if they're intentional about it, can break out of that rut, and whatever disposition they have can be trained away.

TIM: The other thing I remember now from this research is that as you voluntarily expose yourself to these new and difficult but good things, it's not just that you get better at that one thing; you gain an almost generalized skill of dealing with difficult stuff. So in the public speaking case, maybe your ability and willingness to engage in that then gives you the general skill to go off and say, "I've never played tennis before, but I'll give it a crack," because now you have this bit of your brain that allows you to overcome hurdles. It's like an obstacle-jumping bit of your brain, or something; that's the way I've heard it described. And I guess we all need a bit of that.

JEREMY: I think so.

TIM: Wonderful. I'd love to change gears a little bit now and talk about probably everyone's favorite topic — certainly the most commonly used two letters in the history of the world — that is, AI, and discuss it in a hiring context, because I personally feel like there are going to be enormous changes in the way hiring's done over the next few years, and I think AI is going to play a big part in that. But in the meantime, have you dabbled with AI as a hiring manager? Have you seen candidates use it in the hiring process so far? I'd love to hear your thoughts.

JEREMY: I'm not in my current role — I actually try to avoid it as much as I can now — but I have used it in past roles, and it was usually invisible; I didn't realize it was happening. HR would be scanning resumes, looking for keywords and experience matches against the job description I had given them, but most of the systems were very rules-based and explicit. If I was hiring a senior data scientist with 10 years of experience, they would ignore anything with tangential titles like decision support, machine learning, and whatnot, and that was my main frustration. So I started asking HR to just give me everything and let me do the screening, which is why I end up with 300 resumes. I'm shooting myself in the foot a little bit, but I'm always concerned that it's going to screen out the best candidate. I'm not a Luddite; I'm okay using automation for a high-level screening — if they're looking for contract work and I'm looking for full-time, or their immigration status wouldn't allow them to work. But I really hate the idea of trusting an algorithm to say whether a person is up to snuff; that just doesn't sit well with me, so that's why I try to do it myself.

TIM: Yeah, and I think you're right to be dubious, certainly of the tools that have been out over the last several years. I personally have converted from skeptic to techno-optimist. In the last three to six months, largely based on my own use of ChatGPT and similar tools for various things, I have been increasingly impressed that they seem to have chipped away at and solved a lot of the initial problems with hallucinations, lack of memory, and those kinds of things. And I feel like maybe it's now inevitable that we'll have some kind of AI screening with a large language model in those early stages, because those older models definitely couldn't even deal with the synonym challenge — like, I was looking for Python, and the CV said pandas, so it filtered them out because it didn't literally say Python. I feel like ChatGPT has nailed that kind of thing pretty well now, so in theory an application calling some of these LLMs could do a good job — not that I've used one myself yet, I should say. I also feel like there's a lot of upside in making it more objective, in the sense that there's so much on a CV that you don't really need to know. You don't need to know someone's date of birth, or their photo, or their religion, which is quite common in some markets, or arguably their hobbies. Where do you see this going in the next few years? Can you see yourself being comfortable with this step at some point, and if so, what guardrails would we need in place, do you think?

JEREMY: I can see myself getting comfortable with using it to make sure they have comparable technical backgrounds, but — I'm trying to find my words here — when I put on my geek hat, it makes sense: it makes a difficult process much more efficient using fancy probabilistic models, and it'll present the candidates with the greatest likelihood of success given whatever training data it has. I can get behind that from a process-driven perspective. But when I put on my human hat and peel away the first layer of that, it doesn't feel ethical, because it doesn't look at the totality of the person. I know what's happening in the background: it's breaking them down into a series of coefficients based on what it sees on the resume and feeding them through some sort of model — getting away from Gen AI, say a generalized linear model, which is what some of the ones I've seen in the past used — and then it says whether or not you're good enough to be considered for a job. That just feels gross to me, and I don't think it would be completely fair even if there were transparency. You could ask for a breakdown from HR, and if they said, "You had long tenure at your last few jobs, so we gave you 3 points, and it took you 5 years to get your undergrad, so we took away a point," and so on and so forth in some diagram, and in the end they say your life was worth 40 points but we needed 42 before we were going to give you an interview — I probably sound like an extremist, but that concept just feels gross to me. I would rather, ethically, just take the pain of the extra time to go through it myself, even if it's going to be a little more subjective and take a little more time. Attaching points to someone's life isn't something I want to get involved in, if that makes sense.

TIM: Yeah, you've painted a bleak Orwellian future there that we're maybe not too far away from inhabiting. Let me play devil's advocate on this: the current way we're doing it, with humans, is itself riddled with bias. I'll give you a particular example. There have been really interesting studies in Australia — in lots of markets, actually — where researchers take thousands of different CVs. One study from Sydney University took, I don't know, 10,000 CVs and split them into three groups. The CVs in the three groups were basically very similar; the only difference was the names on them. The first set had an English first and last name; the second set had an English first name and a Chinese last name; the third set had a Chinese first name and surname. They then applied en masse to thousands of jobs around Sydney and Melbourne — senior jobs, junior jobs, different industries, different domains, a very wide variety — and measured the rate at which they got a callback: either a literal phone call back, an email back, or some kind of "yes, we want to interview you." The first group had a 12 percent callback rate; the last group had a 4 percent callback rate. So, long story short, if you applied for a job in Australia with a Chinese name, you'd have a one-third chance of a callback compared to a white name. There already seems to be something endemic in the system — unconscious bias if you're being friendly, racism if you're being more honest. So, can we really make it any worse than it currently is? I guess that's where I'm going with the AI option instead. What do you reckon?

JEREMY: That's a really good point, and I think my perhaps mystical perspective on it assumes best intentions on all sides and that everybody's acting ethically — but like you say, some of that may be unconscious. To be honest, I don't know that there's one perfect way around it. My concern is that when we start trusting the machines to make these decisions for us, we lose a very important piece of what we do as leaders, and we hand it over quite blindly to some training data, to someone who developed this model. I've seen some places where they remove the names of the universities and the names of the applicants; I think there are ways to mitigate the unconscious-bias piece without going full-fledged down this path, because once you go down that path, it's really difficult to walk your way back. And I don't think we should replace the few hours it would take us to make these hiring decisions with an "objective" process which, if I'm being cynical, is a lot about trying to give up the weight of a poor decision. I've seen that quite often too: hiring managers want to be recognized if the person succeeds, but they want to have excuses if they don't, and if you hand this over to a machine, then your hands are clean — "I had nothing to do with this." I think that's a part of it, and saving time is a part of it, and I don't think those are the right motivations for going down this path. If a person is trying to get around unconscious bias, there are probably other ways to do it without handing it all over to OpenAI.

TIM: So our AI gods are maybe not quite ready for the keys to the castle yet, you're basically saying?

JEREMY: I think so, yeah. I read a fun thought experiment in an article the other day. It said: if OpenAI could guess with 100 percent certainty who you were going to vote for, based on everything you do on social media, on reading all of your emails, and so on, why bother voting? Why not just let it tally up the votes based on who you would vote for and decide for you? It makes perfect sense from a technocratic perspective — why waste your time voting if it already knows? But if you hand that over, you lose something more than the efficiency you gain by not invigilating a voting process, and this feels like a similar thing. It's not something we should give up to the robots.

TIM: Yeah, I'm personally in two minds on this. A big part of me sees the current flaws as not marginal flaws. It's not just that it's a lengthy, time-consuming process that's otherwise accurate; I feel like it's already inaccurate, unfair, and slow — not just slow. Maybe that's one way to think about it: if it were currently a perfectly accurate process, where humans were doing something like diagnosing a medical illness with 99% accuracy, then that's the benchmark we'd have to beat — a human doing this thing, just faster, or at the same level of accuracy, or whatever. But I feel like the current problem is that we don't have that. It's hit and miss; it's very subjective. Let me give you another example I just remembered. We ran an interesting experiment a few years ago to investigate consistency, or inconsistency, in CV selections. We had about 500 real CVs from our own applications. We got 10 different recruiters, all working independently of each other, to take these 500 CVs and produce a shortlist for our open job — that was their task: shortlist CVs like any recruiter would. They didn't know they were part of an experiment, so it was maybe slightly unethical, to be honest, but they got paid, so I'm sure they're okay. Long story short, they all chose different candidates. It was like flipping a random coin. Behind the scenes we also had all these candidates' test scores, which we didn't share with the recruiters, and not only were their selections almost random, but they weren't even positively correlated with how strong the candidates' skills actually were — which I guess says something about the CV more than about the people doing the screening. But when I saw that, I just thought, "Oh god, this process is so flawed." I reckon you could apply for a job three times and probably get selected once and rejected twice, because the person got up on a different side of the bed that day, or a different person was doing the screening, or whatever. It's just so inconsistent and unsystematic. I feel like we could at least improve that.

JEREMY: It is tough, and I don't get it right either. A lot of people I've ended up going with just weren't a good fit, despite best intentions on both sides, and I don't know what the fix is. Again, when I put on my geek hat, I'm 100 percent with you: this makes perfect sense. It just feels icky. At the same time, I worked in the public sector for a number of years and was part of their version of completely objective hiring: a panel interview where you have the same 5 questions, you're not allowed to deviate, you're not allowed to guide the conversation — and that usually doesn't end in good places either. You get very transactional, tactical people who are good at answering questions in the STAR format, but when it comes to execution, when it comes to creativity, they tend not to be able to deliver. I don't know that there's one right way out of this; it's a very complicated question.

TIM: I'm glad you brought that up, actually; let's unpack it a little more, because we would often talk up these interviews where at least you've got a scorecard, you know what you're looking for, and you ask the candidates the same questions, so you're making more of an apples-to-apples comparison. But then I've heard some people go through these experiences where it's been taken to the nth degree and it just hasn't worked, so I'd love to hear more of your experiences there and unpack in a bit more detail why you felt it was ultimately ineffective.

JEREMY: I was actually coached, going into these, that you're not allowed to make any facial expressions that would give the candidate an indication that they're on the right path or the wrong path, and you're not allowed to ask any follow-up questions. If the candidate misinterprets the question, you can't clarify. You need to give everybody the exact same experience; otherwise, they felt, bias would come through — particular people would be advantaged, and whatnot. From our perspective as the interviewers, you're sitting there with colleagues who, five minutes before, you were having coffee and gossiping and chatting with, and now you've got your suit on and you're reading these things in the most even tone possible, because if you have any inflection in your voice, HR is going to be mad at you. And from the applicant's perspective, they see three robots with completely android faces, and if they're looking to make a difference, if they're looking for a place where they can last, when they look at these three people they're going to think: Is this my future? Is this the type of place for me? If you try to dehumanize it all the way, I think you lose something too, and you drive away the candidates who want to get in there and shake things up and make a difference — and especially in technology, that's the attitude you need, not people who are just going to do what they're told and stay away from the limelight.

TIM: So it's not only that the information-gathering exercise is flawed because you can't dig deeper — you're forcing the candidate down these narrow paths, they can't deviate, and you're missing out on a learning opportunity — but there's also the negative signal it sends to the candidates themselves, who then go, "Oh my God, why would I want to work with these three robots? They've got about as much personality as a bag of crisps — and not even salt and vinegar, plain crisps," and so they opt out of the process. I'd never thought of that aspect: that it's so bad it's deterring the candidates.

JEREMY: It was for me. I actually ran into one several years after who I had interviewed, and he remembered me. I didn't remember him, and we had a good laugh about it that both of us felt like we were completely different people in that strange 45-minute meeting.

TIM: And so that's clearly influenced your current view of a better way to do hiring, which is, as you say, unpacking and trying to find the humanity of the person and having a real, authentic conversation. Basically, it sounds like the polar opposite of that approach — is that fair?

JEREMY: Yeah, I think you're right; that influenced me quite a bit. The other thing that came out of it is that, in the interest of objectivity, if the role requires, say, a six out of ten in Python and you're evaluating a candidate who is a ten out of ten in Python, it's very easy to put that person at the top of the pile, because you think: we're hiring for Python; they're very good at Python; why not go with this person? And, of course, what happens is they land in the role, they get bored and disinterested, and they're not building their skills, so it often leads to a misalignment there as well. I would rather hire someone who's a 5 out of 10 who's interested and eager and wants to grow, if they have those social abilities as well.

TIM: The geeks with empathy again — that's what you're looking for. That's really interesting. You've given me lots of different perspectives already, which I will meditate over and think on further. We've touched a little already on ethical AI and your concerns, justifiably, about the path we could go down in hiring. I believe you expand on some of your general thoughts on ethical AI in Minding the Machines; if so, could you unpack those a little more?

JEREMY: Yeah, of course. I think the landscape has changed a little since Minding the Machines came out. When that was written, it was a bit easier; there wasn't a lot of guidance, and ethical AI basically meant you don't put protected categories into your model — you can't do retention bonuses that would favor a particular demographic, for example. But now it's quite a different landscape. In the EU you've got the AI Act, and the GDPR talks about what you can and can't do and the different risk categories, but in Canada it's a bit of a strange situation. We have an awful lot of ambiguity here. We've got something called the Digital Charter Implementation Act, which has been in our House for several years now, and it talks about what the penalties are going to be for being offside, but it doesn't say what is considered offside. So I often use the analogy with clients that it's like building an apartment building when you have no idea what the building codes are, but a couple of years from now an inspector is coming, and they're going to punish you if you're offside. It's really hard to do anything but guess in an environment like that. Personally, I've always had the same philosophy, and I think the regulatory ambiguity supports it: everything a person does in this space should elevate human dignity in some way. Technology is here to serve people, and if we invert that, I think we end up in bad places. That's always been my personal objective function: if this project is successful, is it going to help people, or is it going to hurt people? I think if we look at everything through that lens, we can be pretty confident we'll end up on the right side of the law, we won't have any negative PR, and as individuals we can have clean consciences.

TIM: As soon as you started talking about that, I had the thought: at what point, instead of us using the LLM, is the LLM going to be asking us to do stuff? When is the first person going to be the LLM's bitch in some kind of organization? What do you think of that? Especially if the next version of AI coming out is meant to have an IQ of 300 or something — probably smarter than any human — should they be the ones running the show? Should we be answering to them? What are your thoughts?

JEREMY: It depends on your view of the purpose of life and humanity, and that's a pretty vast philosophical question. If you want the most productive capitalistic society, certainly, we can hand the keys over and just do what we're told. But back to my point: technology is there to improve our quality of life. We developed it, and we didn't do it with the intention of creating our overlords. There's a meme that was floating around — it's cutesy, but I think there's an awful lot of truth and terrifying prescience behind it — that says: AI was supposed to free me up to do poetry and art by doing my taxes, but instead it's doing poetry and art so I can spend more time at work. And that's really a terrifying thing. Humans need opportunities for creative expression; they need to feel like they're productive members of society, and if we hand all of that over, I don't think we end up in a utopia where we don't have to work and can just satisfy our pleasures all day. I think it takes something very fundamental away from us. Again, back to the voting analogy and the hiring analogy: we need to really interrogate these opportunities and not just go down the path because five minutes ago it became possible. This is a very deep, impactful question, and we need to give it the time it deserves to discuss and meditate over.

TIM: And so you would look, I'm guessing, dimly upon people who think the AI should now maybe have rights — that maybe they should have an AI union, that we shouldn't be enslaving these AIs and paying them one cent an hour to do all our analysis for us. You'd be getting worried if people started coming up with that kind of rhetoric?

JEREMY: I am, yeah. Hopefully it's not in my lifetime and I can leave my kids to figure that one out, but that's not the world I want. I'm very pro intelligent automation that serves humans — getting away from having to copy and paste things in Excel; that's awesome, I can get all the way behind that. But deciding who to hire, what markets to go into, when you should euthanize your grandparents — these types of big, scary decisions are deep stuff, and they shouldn't be up to probabilistic text-prediction engines. These are deep human questions that we should retain for ourselves.

TIM: Speaking of that, just briefly: I feel like one set of products, and one segment of society, that has not been scrutinized anywhere near enough is dating apps, given that 90-something percent of people below the age of, whatever, 40 would now meet their partners through dating apps — of which there's basically a duopoly; I think there are only two big ones. It basically comes down to a couple of data science nerds sitting somewhere in San Francisco who've created these matching algorithms, literally playing God: deciding who gets matched, who gets shown, who doesn't. Is that something we should be really digging into, that there should be some kind of inquiry on? 'Cause I feel like this is next-level Orwellian shit.

JEREMY: It is, yeah. I hadn't given that one very much thought, but now that you mention it, yeah, that is scary. I'm past the dating phase myself, but it is a struggle: meeting people, going for coffee, getting to know each other, moving up to that next step. There's a lot to it, though; that's part of the human experience. To hand that over to a couple of nerds in a basement who present you with the partner you have the highest probability of matching with — why bother dating? Just send the contract over. You can get married; they can move in; make sure you're aligned on the kids' names and how many you want, and then you can just get started. Yeah, much more efficient — but again, what kind of life is that?

TIM: Yeah, so I guess we're talking here about trade-offs, aren't we? In hiring there's maybe a speed-versus-bias trade-off, fairness versus accuracy, human versus not human. A lot of trade-offs, but also a lot of scary paths we could go down. And you mentioned your kids — yeah, we might be giving the ultimate hospital pass to our kids to sort this mess out in 20 years. Who knows? To end on a, let's say, slightly more positive note, I'm wondering, Jeremy: if you could ask our next guest one question, what would it be?

JEREMY: I think the most interesting thing for me — and we've talked about it a lot — is that balance between people skills and technical skills. I've clearly got a very strong bias towards people skills, and I often wonder if I'm an outlier, so I'd love to hear follow-up episodes that focus on that. Convince me.

TIM: Possibly.

JEREMY: Maybe.

TIM: Jeremy, this has been a really fascinating, wide-ranging discussion. I've learned a lot and heard some things I've never heard before, which is amazing, so it's been an absolute pleasure having you on the show. Thank you so much for joining us and sharing all your wisdom with our audience.

JEREMY: Oh, thank you. I had a lot of fun, and yeah, there were some very interesting side roads we took there, but I appreciate the opportunity.