Alooba Objective Hiring

By Alooba

Episode 131
Raman Thakral discusses From Data Analytics to AI Integration in Tech and Hiring

Published on 3/21/2025
Host
Tim Freestone
Guest
Raman Thakral

In this episode of the Alooba Objective Hiring podcast, Tim interviews Raman Thakral, SVP Data Analytics - WNS

In this episode of the Objective Hiring Show, Alooba's founder, Tim Freestone, is joined by Raman Thakral, a veteran in the field of data analytics. They discuss the rapid evolution of technology and its implications for both data analytics and hiring practices. Raman shares his extensive experience in the industry, including his transition from strategic planning to data analytics and consulting. The conversation explores the current hype around large language models, the integration of AI tools like ChatGPT and Perplexity into daily and professional lives, and the potential pitfalls and advantages of using AI in hiring. They also delve into the philosophical aspects of AI's advancement and the balance between technical skills and growth mindset in future hiring practices.

Transcript

TIM: We are live on the Objective Hiring show again. Today we're joined by Raman. Raman, welcome to the show. Thank you so much for joining us.

RAMAN: Thank you very much, Tim. Looking forward to our conversation today. I'm excited about it.

TIM: Yeah. And so are we, and thank you so much for joining us. And where I'd love to start is just to hear a little bit more about yourself because I think it'll help our audience to start to understand who they're listening to today.

RAMAN: Sure. Thanks, Tim. About myself: I've been working in the industry for roughly 20 years. I've had a three-phased career, if you will. I started off in the strategic planning and M&A sort of teams; that was almost the first six or seven years of my career. And post that, data analytics happened. Since 2012, I've been in the field of data and analytics. I started working as part of corporate teams, being a hands-on engineer and an analyst, and have now moved to a third-party setup where I'm part of consulting firms and have played varied roles, from delivering large-scale projects and complex data analytics programs to P&L management, operations management, and a bit of sales as well. In these last 15 years or so in data analytics, I would want to believe that I've seen the whole nine yards of this industry.

TIM: Yeah, you've seen a lot, and a lot's changed throughout that time. And you've also seen a few hype cycles. So I'm wondering, from your perspective, do you feel like the current hype around large language models is fair enough? Is it going to deliver the value that we think it is? Or does it feel like it's a bit overhyped? What's your view?

RAMAN: To start with, Tim: is there value? Of course there is. Will they deliver value? Yes, of course. But I think the question to be asked is how much and when, right? What has been good for the industry, though, is that large language models, and the potential efficiency and effectiveness they can bring into the system, have got everybody excited. Which, in a sense, has made everybody work on putting, for lack of a better term, their house in order. And putting the house in order means a lot of these programs around cloud migration, getting data in place, and getting data governance and quality right. A lot of effort is now going into these fundamentals of data and getting value out of data. The core of it still remains how much value you can derive from your data and external data, and that requires a lot of foundation as far as data is concerned. The good thing is that the wheel has started moving on that data journey. At the same time, a lot of these cloud-based products and platforms, even the big giants, the likes of AWS, Azure, etc., are betting on agentic AI at a fast scale. Long answer, but the short of it is yes, I'm very excited about it personally. But it will have to go through its own hurdles and checkpoints before we start seeing something practical.

TIM: And I'm interested, what about in your day-to-day life? Do you find yourself using Claude or ChatGPT for any particular things around the house or with the family?

RAMAN: Interestingly, Tim, the whole prompt engineering term popped up about a year and a half ago. Curious as I am, I did a couple of courses, and frankly, those have been helpful. Parts of it have been overhyped, but for the ease of regular communication, I'd not say creative communication, but run-of-the-mill emails, managing calendars, and getting initial drafts ready for quite a few things, to the extent of getting the initial draft for things like project plans, it cuts a lot out. It's always difficult to write the first word on a blank paper. But if you have something written, even a rough initial draft, that gets your mind going. So from that perspective, I've been using ChatGPT fairly actively, and now even Copilot, which is integrated into most of our offices with Microsoft. Interestingly, whenever you're in a Teams meeting, it's automatically making notes, prompting for the topics you should perhaps be probing a little more, and if you missed parts of the meeting, it summarizes them for you. Some of these functionalities, of course, add a lot of efficiency and bring a lot of transparency in terms of communication.

TIM: And what about like regular life? Like outside of work, do you find yourself using the tools much?

RAMAN: Frankly, now I understand why they used to say that Google will take a hit. Perplexity has taken at least 50 percent of my bandwidth where Google was. At times, you're seeking an answer rather than 20 links to figure it out. There are, of course, other cases when you want to see those 20 links and then conclude from them. But for a quick answer, to the extent of: is there data analytics hiring happening at a certain event today? Perplexity will give me a very pinpointed answer, rather than having me figure it out by going through the links and looking for it myself. So yes, particularly for the search functions, some of the internal thought process going into my personal LinkedIn posts, summarizing documents, sometimes summarizing news articles with those plugins you get on Chrome these days. I would say yes, it's very much embedded in my day-to-day life.

TIM: I haven't used Perplexity much myself. I should. I was looking through our Google Analytics data just this morning, actually, and I noticed quite a large amount of traffic now coming from Perplexity to our site. I should thank them for that and use the tool a bit more myself. One thing I've found really helpful with ChatGPT is as a language tutor. I'm trying to learn a bit of Italian at the moment, a bit of Russian. And it's an amazing live, free tutor where you can just have a conversation with it in multiple languages, get it to coach you, give feedback, tell you what your most common mistakes are, and give you little quizzes. It's really impressive. And I like how they've improved the voice such that it feels really quite empathetic. Oh, you're doing a great job. Keep going. I know it's not real, but it still feels better than if it were just written text.

RAMAN: Interesting that you say that, Tim. Incidentally, and I'm talking about broader AI and Gen AI combined, right? I had to reach out to one of the utility companies to sort out a matter at my end. And I was talking to a bot, which I couldn't figure out for a fair bit of time. That bot would do things like people like you and me would: all of those expressions, taking those few seconds to think and figure things out, which made it so much more real. Only about 90 seconds into the call could I figure out it was a bot and not actually a human. So you're right. It's the empathy, the humanizing of AI, that is happening at a scary fast pace.

TIM: It is. And I feel like we also don't quite know what to do with that yet, because I was reading an article just this morning comparing AI sales bots: those that deliberately sound very human and add in those ums and ahs like you mentioned, the filler words, versus others that start the conversation with, Hey, by the way, I'm an AI bot, and are openly transparent about the fact that you're speaking to a bot. And I don't, at this point, know which one is better. It's also funny with the filler word stuff, because that was one of the first bits of analysis I did on our transcripts of these podcasts; I was trying to find out how to speak better by removing filler words and those things that kind of make me human. So, in a way, I'm trying to make my speech more perfect, more like a robot's, while robots are trying to make their speech more like ours. Where are we going to end up? I don't know.

RAMAN: We're both trying to bridge the gap. But interestingly, Tim, we were just a minute ago talking about Perplexity, to your point, right? I've been a Perplexity user; I've been a ChatGPT user; but now people are talking about DeepSeek and others. You feel like you're becoming a dinosaur, not in years; you're becoming a dinosaur in a couple of weeks now.

TIM: It is staggering and definitely hard to keep up. The only thing I'm comforted by is the fact that, if you work at the cross-section of tech and data, you're maybe not at the cutting edge, but you're at least near the front compared to all of society. So maybe we're not slipping as far behind as we think we are. It's just that we're in the whole space where there's a lot of noise, whereas the average person on the street probably isn't really giving a shit about some of these things.

RAMAN: Yeah, and I'll give you a practical example of that. Not too long ago, Satya Nadella mentioned in one of his interviews or speeches that SaaS is dead or about to be dead, right? All of these software-as-a-service companies will soon be gone, and agentic AI will take over. That broad-brush statement was hard to digest at the time. But like I said, we are in this area. We've been enabling some of that agentic AI for some of our clients, and we've now seen a couple of implementations that have already happened, by the way, not ones that are about to or will happen. In one case we were enabling a Snowflake data migration and then eventually enabling some of these agents. It is so seamless, and so adaptive, because these agents are talking directly to the data rather than to preset rules or a preset model, which makes their visibility so wide compared to, say, a rule-based or a specific model-based AI. These are very powerful. Some of these cases have become practical and are up and running now; more will follow. Agentic AI is happening very fast in enterprise systems, especially for companies that have been regularly investing in their cloud infrastructure and keeping themselves up to date on whatever these big giants have to offer.

TIM: Yep. Things are moving so fast, and it's exciting, and it also poses a massive opportunity for entrepreneurship: using this new groundbreaking technology to create, I don't know, SaaS 2.0 or something. I agree that if your company is a relatively straightforward SaaS product, like one I use, for example, DocuSign, a contract-signing product that's worth, I don't know, billions of dollars, the actual technology itself is pretty simple. And now the ability to recreate it seems unbelievably simple. So you can imagine a new wave of competitors that have, from the ground up, built AI-written code with a hundredth of the number of engineers they might need, selling at a hundredth of the price. I assume that wave is about to come through. Do you agree with that? Or are you seeing something different?

RAMAN: On the cost of creating these products, I'll again give you a practical example, right? We work with a lot of financial services clients. Financial services is, as everybody knows, one of the most regulated industries in the wider market. And because they were regulated, they had to create explainable models, for regulators as well as for explainability internally, and they had to keep things relatively close-knit and secure. So a lot of this coding had to happen in SAS first. Now, moving to the cloud, a lot of it entails moving to a new set of languages with wider use cases and libraries. For example, a lot of banks had to move from SAS to Python. Two to three years ago, there were teams set up that would actually do that translation work, if you will: see what is written in the SAS, put it in Python, understand if the Python is fit for purpose and efficient, and so on, and then move it and embed that translation. All of that work is now getting done with half the effort, or less in a few cases. Of course, you still need that human in the loop. But we've gone from humans doing things to humans in the loop. So I completely agree with you: in pockets it has started happening, and if it starts to happen widely, a lot of things will be a lot cheaper, but a lot of work will also go away from humans. At times it even becomes a philosophical conversation about where it all leads. But I heard or read somewhere a very interesting hypothesis around it: it becomes a problem of plenty, right? You have everything available at a cheap price, and the only thing humans come in for is the creativity part of it, which will still be very indigenous to us. Sorry, I digressed, I think, from data analytics and went a bit philosophical on this.

TIM: No. That often happens on this show, so I'm all for that, the thinking, the big questions. What about AI in the hiring context, then? From what I've gleaned so far from speaking to people about this over the past few months: candidates have adopted it. Candidates are using it to optimize their resumes, sometimes to help with interviews, and I hear that people are using it to write job descriptions. But it doesn't really seem to be used yet, certainly at any level of scale, for the core elements of hiring, like candidate selection and candidate evaluation. What have you seen? Have you started to dabble at all with any kind of AI hiring tools? I'd love to get your thoughts on that.

RAMAN: Tim, of course, you would know way more than I do, right? I speak of it as a user rather than a developer of such things, so correct me wherever I go wrong on this one. But from whatever I have seen in the industry, there were basic rule-based models created earlier, or in use for some time now, which find keywords across your resume. You'd have an OCR or document digitization tool, call it putting the CV into a structured format, which would then pick up the keywords. If you had, let's say, 5 or 7 out of those 10 keywords, your resume is shortlisted or not, and so on. I, personally, am not a huge fan of that, because it misses a lot of the context that's in a CV beyond those keywords, and I understand a lot of people have gamed that system as well, at least to get through that first hurdle. So that was one use case. There was another interesting use case I saw GSK using in their hiring process, where the first interview is taken by a bot: they'd ask you a question, you'd answer, they'd feed your answers into the models, and on that basis you'd proceed. Again, personally, not a huge fan. You miss out on a lot of context. It's a very good mechanism when you're looking for a very specific role and a hard set of skills. But if you're looking, broadly, for some of the other aspects we've talked about today, the likes of empathy and creativity, those, I personally believe, can be better judged by humans, at least for now, than by machines. I don't know if you concur; what are your thoughts on this one?
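
The keyword screen Raman describes can be sketched in a few lines. Everything here is illustrative rather than any real ATS's logic: the keyword list, the 5-of-10 threshold, and the naive substring match (which, as he notes, misses context and is easy to game) are all assumptions.

```python
def keyword_screen(resume_text: str, keywords: list[str], threshold: int = 5) -> bool:
    """Shortlist a resume if at least `threshold` keywords appear in it.

    Deliberately naive, like the tools described: a plain case-insensitive
    substring match with no understanding of context.
    """
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits >= threshold

# Hypothetical keyword list for a data engineering role.
KEYWORDS = ["python", "sql", "spark", "airflow", "snowflake",
            "dbt", "tableau", "aws", "etl", "machine learning"]

resume = ("Built ETL pipelines in Python and SQL on AWS, "
          "orchestrated with Airflow, loaded into Snowflake.")
keyword_screen(resume, KEYWORDS)  # 6 of 10 keywords hit, so this CV clears the bar
```

The obvious failure mode is visible right in the sketch: a candidate who pastes the keyword list into their CV passes, while a strong candidate who phrases things differently does not.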

TIM: Yeah, as I say, what I've seen so far is that these tools are rarely actually being used. In fact, if I think back to all the people I've spoken to in the last four or five months, where there is some experimentation happening on the interview evaluation side, it's actually not within talent or HR teams. It makes sense now, thinking about it: it's the engineering or data science teams, who are looking at it and going, Oh my God, I've got 500 resumes to read. I don't want to do that. Let's create a little pipeline into Claude and get it to do the evaluation of the CVs: tell me my top five CVs based on these criteria. They've created little skunkworks, unofficial products developed in-house, which to me is really interesting because it shows there's a clear demand for that. But I think it's going to take a while for these types of things to be used regularly, because there are so many barriers. There are all these different employment legislations across different countries, particularly around the use of automated decision tools, even down to the city level in some cases: New York City has its own legislation specifically about automated employment decision tools. So if you're a global business, you could be exposed to legislation at a city, state, or country level. So they're really reticent, I think, to suddenly automate anything. And that's going to take a while to come through, even though I think the technology itself, the quality of the large language models at doing some of these tasks, is already easily there. If you're comparing it to a human reviewing 500 CVs in an hour, how well are you going to review those? You're obviously going to be very biased. There are already a ton of really interesting experiments showing how biased humans are. So I think clearly there's an opportunity to make it better.
But it's just going to take a solid few years, I reckon, for the tools to actually be implemented. And the software layer to be built on top of the LLMs, like a new wave of HR tech that will come through, I imagine. That's what I think will happen. Also, actually, I did hear last week that Workday, the big HR tech company, had implemented something like a CV screening tool in their product and immediately been sued by some group of people. So even the providers themselves might be worried about adding rules that automatically are going to reject candidates. And so there's just, I feel like a lot of risk and worry at the moment in implementing things like that.
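
The skunkworks pipeline Tim describes (dump the criteria and the CVs into a prompt, ask the model for a ranked top five) might look roughly like the sketch below. The prompt wording, the function names, and the one-name-per-line reply format are all assumptions; the actual model call is left out since it depends entirely on the vendor's API.

```python
def build_prompt(criteria: list[str], cvs: dict[str, str]) -> str:
    """Assemble one prompt holding the hiring criteria and candidate summaries."""
    lines = [
        "Rank the candidates below against these criteria and return the top 5 names,",
        "one per line, best first.",
        "",
        "Criteria:",
    ]
    lines += [f"- {c}" for c in criteria]
    for name, summary in cvs.items():
        lines += ["", f"Candidate: {name}", summary]
    return "\n".join(lines)

def parse_shortlist(llm_reply: str, known_candidates: set[str], top_n: int = 5) -> list[str]:
    """Parse the model's reply back into a shortlist.

    Keep only lines that exactly match a known candidate name (guarding against
    the model inventing people), then cap the list at top_n.
    """
    picks = [ln.strip() for ln in llm_reply.splitlines() if ln.strip() in known_candidates]
    return picks[:top_n]

# Hypothetical usage: the reply string stands in for the model's response.
reply = "Ada\nGrace\nSome commentary the model added\nAlan"
parse_shortlist(reply, {"Ada", "Grace", "Alan"})  # ['Ada', 'Grace', 'Alan']
```

Even a sketch this small shows where the legal exposure Tim mentions comes in: the ranking step is an automated employment decision, so anything like this would need logging, human review, and bias auditing before real use.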

RAMAN: And eventually, Tim, I believe with a lot of these models you can't just leave it to the machine. There has to be a bit of explainability to it, because, again, coming from that financial services domain, if you're accepting or rejecting a credit card application, it should be backed by rules that can eventually be explained to the regulator if needed. And hiring is a similar sort of use case. If you're accepting or rejecting a candidate, you have to ensure your model hasn't developed an inherent bias that discriminates against a certain race, a certain gender, a certain demography, etc. Because a black box can learn anything in any way. And, in my personal opinion, the two have an inverse relationship: the more sophisticated the model is, the more sparse the explainability becomes, and therefore there has to be a balance between the two. A machine can truly make that sort of mistake of biasing against a certain set of audiences.

TIM: Yes, and I agree with you. We should have that transparency and explainability of decisions. My devil's advocate on this, though, is always: shouldn't we have that for the current human-based recruitment approach? And we don't. We know from interesting and clever studies (I'm not sure if you've read about any of these) that researchers would take identical resumes and apply to lots of jobs, where the only difference on the resume is the name. One I've seen in Australia tested a Chinese name versus an Anglo-Saxon name. They can apply en masse to thousands of jobs, measure the callback rate, and from that infer discrimination against a particular subpopulation. In Australia, if you apply to a job with a Chinese name, you have only one-third the chance of a callback compared to an Anglo-Saxon name. Which is a disgrace, I'm sure everyone would agree. Yet at the moment, I can't say to a company I applied to three years ago, Hey, why did you reject my CV? On what basis was that? Because it's just a human in an ATS hitting a reject button, or a bulk reject when they close the role, and there's no recording or transparency at all. So I think if we were to add any kind of automated tool, even if it's just rule-based logic, it will know what it's doing. So surely we could then have that transparency. That should be an easy outcome of having a systematic approach, as opposed to the current human approach, which is God knows what.

RAMAN: Makes sense. Makes sense. It's a fair challenge, right? You can have bias from a human as well; it's just that a human, if they were to explain at all, would at least have to give some sort of an answer. A machine can always say, Hey, I'm a black box. Figure it out for yourself.

TIM: Yes. If it's a true black box model, then we're stuck, aren't we? It'd be really interesting to see where this goes. I think there's so much upside, because recruitment is just so tediously manual: all the different steps of, Oh my God, post this job ad, read these resumes, schedule these interviews, follow up with a candidate, and all this kind of stuff. And I think it's actually the lack of technology in the process that is part of the reason why it's so bad. Every candidate will complain about getting ghosted, and in the same breath I hear a lot that technology is dehumanizing. Not really. At the moment, because the process is so manual, you don't have time to provide feedback. What could be more dehumanizing than that? So I think this is such a ripe opportunity for good technology to come in and improve things. It's just a question of what technology and how, I guess.

RAMAN: Help me understand, then, if I ask you this question, right? As I was telling you, one component of my role is about growth. And one of the tools we use, amongst many, is LinkedIn, because that's one platform that's common to your line of work as well as mine. They have something called Sales Navigator, in which you contextualize the kind of people you're looking for, why, what sort of services, and so on, and then it comes up with a certain set of recommendations and why. Does something like that happen currently in recruitment? Is there an equivalent tool on the recruitment side where, if you were to throw in a JD and contextualize the why and the who, it would give you your top 10 candidates and so on?

TIM: My guess is there must now be startups, like in the last 18 months, who have done this. And then within a company, there will be its own applicant tracking system with the history of every candidate who's applied to every single role in that company. Those tools would have some kind of matching between a job description and a set of candidates, but it's their candidates, as opposed to the universe of all potential candidates. But I'm sure that kind of tool is getting built now, because even just in the last few years, tools like Apollo have been released, which make it very cheap to get contact information for people. I remember even when I started at Alooba, if you wanted to be able to email people, it was a ZoomInfo license at 30,000 a year entry-level. Now you can get an Apollo license for free or for a thousand. So that's opened up a lot of opportunities. But still, it's going to be an interesting data challenge, I think. Even if we think about, let's say, CV matching to a job description, one big issue is: how truthful is a CV? I could put on my CV that I'm a rocket scientist and a model, but I'm neither of those things. You know what I mean? And this is one thing I've heard a lot in the last four months: people are saying, yeah, we're getting all these job applications, and they look amazing, maybe as good as they've ever looked, but then they get into that first interview and they're like, who the hell is this person? This is not the person on the resume. Is that something you've noticed yourself?

RAMAN: No, I agree. And Tim, I'll tell you a bit of a fun story here, right? We were talking earlier about ChatGPT and how we're using it extensively. I was using it fairly actively, and just as an experiment, I wrote ChatGPT a prompt saying, Hey, based on our past conversations, deduce some personality traits about me and tell me what my strengths are, what my weaknesses are, what areas I should be working on, and so on. That's roughly the prompt I used. And very interestingly, I could relate to a lot of the strengths and weaknesses that came out of it. Our data is so much out there. Like we were talking about: if those tools were to be built, then from those tools I truly would know whether you're actually a rocket scientist and a model or not, because you'd have a large interaction footprint across the web, and for some of these AI tools that would be easy to deduce.

TIM: Yes, I think you're right. I think that's what will happen. And it's perfect now because the large language models are very good at interpreting unstructured data. Maybe that's one of the key breakthroughs: if you were trying to do this five years ago, you'd think, I could find the person's LinkedIn posts and some random article they wrote and some citation they have, but how on earth do I make sense of all of that? But a large language model could.

RAMAN: Yeah. Makes sense.

TIM: I want to shift gears a little bit now and talk about the sources of candidates, because one outcome I think we've seen over the past few months is a bit of a slowdown. So there are more candidates looking for roles, but candidates also seem to be applying en masse with these kinds of haphazardly written resumes. A lot of companies are complaining about getting a lot of candidates applying for their jobs, many of whom maybe aren't the right candidate. There's almost a sense that there's a lot of spam out there. So I hear from some companies that are saying, okay, now I'm relying more on recommendations or referrals. Is that a channel that you look to if you're hiring? Will you try to get referrals? If so, how do you do that?

RAMAN: Tim, for us as an organization, referrals have always been, if you will, top of the pedestal as a way of assessing or shortlisting candidates, because we trust our employees to bring in the right set of people, knowing the organization: not just people who are technically fit, which I would want to believe is relatively easy to figure out, but culturally fit. Our employees understand our culture, challenges, opportunities, et cetera. So that has always been a channel. We are a firm that is 60,000 people strong, and about 10,000 to 12,000 of those are technocrats, so from that perspective it has always been a channel. But like you said, particularly in some of our delivery centers, which are in India, the Philippines, and South Africa, we were hoping that with the advent of these technologies our reliance on external consultants would go down. It has actually gone up, purely from a volume perspective and from the art of figuring out who's the right candidate. I won't have the exact numbers, but the percentages have relatively gone up, at least in the areas where we've been recruiting. I see a lot of these applicants, and I get some visibility into who is a direct recruit versus who came through an agency. That reliance has gone up because you want more surety, given that eventually you're representing those people for client work, and there's no room to be a laggard on that front. So the volume as well as the authenticity of the CVs has in fact created more inefficiency, if you will, in the system than there was. But like you were saying, it's a wave. I think we'll pass that hurdle when a machine is creating the CV and a machine is reading it, so both will be able to talk to each other to see what has been fabricated and what has not.
But at this point in time, it has created more volume and an additional set of work rather than taking work away, purely from a volume perspective.

TIM: Yeah, I feel like part of this is also maybe that candidates adopted these tools immediately because they can; they're just individuals, whereas companies are naturally going to be a bit slower to respond; they have lots of rules and regulations and buying committees and existing software and many reasons why it would take them a bit longer. So maybe we're in this weird in-between period where the technology has come along so quickly, candidates have adopted it, and it's an employer-driven market, a bit of a downturn, like a perfect storm. But then maybe a year into the future, the new technology will be used by the companies as well, and it'll all equalize to some normal state again, potentially.

RAMAN: Yeah. Again, I don't know if you've come across this app or website. I may be naming it wrong a little bit, but I think it's called Humanoid, and it does exactly this: if you created a CV out of ChatGPT, you put that ChatGPT text into it, and it will humanize your language and make it non-detectable. There are now tools that detect whether something was written via AI or by yourself, so this will make it undetectable by those AI readers, if you will. So a lot is happening.

TIM: Yes, I love it: using AI to humanize our texts. I'm trying to wrap my head around that one. One issue, I think, with moving to an extra reliance on referrals and recommendations is that you could argue it becomes slightly less fair. For all the issues we've seen with job boards, just getting inundated with applicants, that's the pro and the con of it. The job is open; anyone can apply in theory; anyone has a chance, which is great because then it's not just cronyism. But because of that, there's a lot of spam. People apply with absolutely no relevant skills at all. Anyone who's run job ads will know you get the most random people applying to the jobs, and you just think, why are you applying? So if we go back to referrals more, you could almost imagine you hire your mates, you hire people who look like you or are in the same style as you. So it might pose some challenges in terms of having that diversity of thought. How do you think about this?

RAMAN: I agree with you, Tim, that it has to be a healthy mix. I believe a lot of the time it happens organically that, eventually, there are only so many referrals. But the short answer, again, is I'd agree. Speaking now about my organization, or my previous organization: we are a very well-known brand name in India, and some of the other places like South Africa I mentioned. So even when we were putting out positions, we'd get interest from a certain stratum of demography, because for them the name was well known. But we wanted diversity, culturally as well as from a skill-set perspective. Like I said, the person sitting next to me should not always look exactly like me and talk exactly like me. For some of these markets, we've had to deliberately make the choice to hire externally, and hire with a certain agenda in mind, when we wanted to hit some of our diversity goals. And that diversity could be demographic diversity or skill-set diversity, et cetera. We've had to do it. For our delivery centers, the volume is so high that, eventually, the normal curve takes care of it. But for the smaller offices, the likes of the UK and the US, we've had to make such calls deliberately in the past. So I agree with you: you've got to maintain the balance, unless it's automatically taken care of by itself.

TIM: Are there any hiring practices, evaluation techniques, or anything that you see? That is commonly done so that you think, Oh, we should get rid of that. Let's throw that into the bin.

RAMAN: It's a tough one. I'll take maybe 10 seconds to think through this one, because, as would happen in a lot of the companies you also work with, for each of the levels the hiring mechanism and the steps may be completely different, say from a junior to a mid-level resource to some of the senior people you'd want to get on board.

TIM: Is there any step, even if you think back to some of the processes you've been in as a candidate or a hiring manager, that you look back on and go, Geez, that was pointless? As an example, I can remember early on in my career having to do mental math problems on the spot: multiply 375 by 43. And thinking about it now, God, that was a waste of time. Do I really need to be able to multiply two- and three-digit numbers in my head? Which I wasn't good at, but I bought a book and I tried, because I knew they wanted to test this. This was in, like, an M&A investment banking interview; they used to like that, but I'm sure they don't do it anymore. They also used to do stress interviews, where they were deliberately assholes to you to see how you would react. I'm not sure if you ever experienced that kind of stuff, but I wouldn't like to see those come back anytime soon, for example.

RAMAN: I agree with you a hundred percent on that one. In my early days I had to solve math puzzles and whatnot to get through an interview. At the junior level it still happens, but in a different shape and form, which is that you do a lot of guesstimates. And the idea there is more to see the thought process: whether a person is able to break down the problem into specific pieces. It's never about whether the answer is a hundred or a hundred thousand. So I still see merit, especially for some of the staff we hire at junior and mid levels, in seeing how they structure a problem and how they break it out into specific pieces. But I 100 percent agree with you: if it's still about what 2 plus 2 is, or, if I were to take a boat from here to there, how many miles and at what speed do I have to travel, then I agree that shouldn't be there.

TIM: A common one used to be, which I thought was silly: if you could be an animal, which animal would you choose to be? And somehow your answer to that dictates whether or not you're going to be a good software engineer. I don't really understand it. But I don't know.

RAMAN: I can't see how that makes sense. I guess it would have been a question for some of the marketing folks or something, but if

TIM: Maybe.

RAMAN: it was for a software engineer, I can't figure out why.

TIM: Yeah. Yeah. Maybe there's some defensibility if it's a creative question or a think-on-your-feet kind of question. Yeah. Potentially. Raman, if you could ask our next guest on the podcast any question about hiring at all, what would you choose to ask them?

RAMAN: So I think the elephant in the room is: if somebody is hiring, and especially if they're hiring for the next 5 to 10 years, what sort of core skill set are they looking for? Are they still looking for a hard set of skills, or are they going for softer skill sets at this stage, given that the landscape is evolving so fast and so rapidly?

TIM: That's a great question, and I will share that with our next guest. I'm going to guess what they're going to say, just because we've spoken about similar things with quite a few guests: a lot of people are talking about hiring candidates with a growth mindset, who are willing and able to learn as quickly as possible, because if technology is changing so quickly, you're going to have to learn something new every day, rather than focusing on the current toolsets. But what do you think?

RAMAN: No, that, I think, has in general always been true, right? Everybody would want that sort of candidate. It's just that the proportion is changing. Until about five to ten years ago, I would want 60 or 70 percent of the skill set I'm hiring for, and the other 30 percent was the nice-to-have: a growth mindset and all the good things you said, Tim. That proportion, I believe, is changing rapidly. In my personal opinion, if I'm hiring from mid to senior, that skill-set proportion would go down to 20 percent. Now I'm not too worried about the skill set, as long as the person is able to connect the dots. If I can speak English better than Python, I'm better off for it. Simplifying things and connecting the dots is becoming more and more important for me, because I know that in a third-party environment I may be working on technology one today, but six months down the line I'd have to be on technology two. Technologies were relatively more similar about five or seven years ago, but because that has changed, my proportions have changed significantly.

TIM: Yep, things are changing so quickly. It's certainly been a theme from our chat today. Raman, thank you so much for joining us and sharing your thoughts and insights with our audience today.

RAMAN: Thank you very much, Tim, for having me, and again for the lovely chat. I personally also learned a lot, especially discussing hiring and some of the topics on broader AI. It was a pleasure being here again.