In this episode of the Alooba Objective Hiring podcast, Tim interviews Dan Sarkar, Data-Driven Strategy Navigator
In this episode of the Objective Hiring Show, Alooba's founder, Tim Freestone, discusses the benefits and challenges of data-driven hiring with Dan Sarkar, Data-Driven Strategy Navigator. Dan Sarkar, with over 15 years of experience in data analytics and strategy, shares insights on how AI and data tools can improve hiring efficiency and reduce bias, but also highlights the importance of balancing these tools with human intuition. The conversation also explores the potential pitfalls of AI, such as algorithmic bias, and the experience gap in HR teams regarding data and AI tools. Dan emphasizes the need for careful evaluation and contextualization of data to make fair and transparent hiring decisions. The episode wraps up with a discussion on the future of hiring practices, particularly for analytics and data science roles, and Dan poses a thought-provoking question for the next guest about striking the balance between objective and intuitive hiring.
TIM: All right. We are live on the Objective Hiring Show. Today we're joined by Dan. Dan, thank you so much for joining us.
DAN SARKAR: Thank you for having me, Tim.
TIM: It's absolutely our pleasure to have you on the show. And where I'd love to start with each guest is just to hear a little bit about yourself. Like, who are we speaking to today? Who will I be listening to today?
DAN SARKAR: So I'm Dan Sarkar, and I am currently the head of data at Pinella. I have been working in data analytics, strategy, and data science for over a decade now — almost 15-plus years in the industry, working in different roles. Across various of my previous journeys I have been very ingrained in data analytics and building data strategy, applying different types of techniques, methods, and statistical modeling over the past several years.
TIM: And I'm sure a big part of your leadership role in analytics has been hiring good people to join you on the team and to help execute the strategy. So I'd love to start a discussion around the use of data in hiring itself, because I feel like traditional hiring is not necessarily data-driven; I'd say it's almost intuitive or emotionally driven a lot of the time. How do you think about using data in hiring versus intuition and gut feeling, and what's your general view of things?
DAN SARKAR: That's a great question. If you think through this, intuition-based hiring has always been, and is probably always going to be, a little bit biased to some degree, because humans are subjective in decision-making, and it's probable that some level of bias is introduced into the process. Versus if you look at data-informed or AI-driven analytics or AI-based tools, which are penetrating all sectors of industry currently, including HR analytics, you can tend to at least remove some of those biases, because they would be much more objective. Finding out the skills, as well as the candidate's abilities and personal traits, in an appropriate manner can help you reduce the bias. Having said that, I would also tell you that these AI-powered tools or data-driven tools that you use for screening or in the hiring process should be complementary, not a replacement for human intuition-based hiring. There are definite advantages: they could bring you better efficiency, help you find the right type of candidate, and figure out the gaps in your workforce, and of course they could improve inclusiveness. So there are a lot of benefits to applying data-driven hiring. However, there are also cons, and you have to understand those too; without understanding the cons, just pushing for AI-powered tools in the hiring process could lead to a different type of bias as well. So it's very important that you strike a balance between these two components: utilizing the different types of data-driven hiring processes, but not replacing the intuition, because that can help you understand the personal traits, the cultural fit, and the experience side of the candidates, which is important to consider here.
TIM: Those other aspects you've just mentioned — cultural fit, for example — tend to be a bit more subjective. Is that fair to say? And if so, is there potentially a danger that if too much of our hiring process is based on subjective factors, then we're going to lead ourselves astray and make those hiring errors, because we're not focused on the objective things that we can measure?
DAN SARKAR: I think that component is there, and that's why applying these AI-powered tools to the hiring process can improve your efficiency and find and eradicate some of those loopholes in the hiring process. So you are very correct. One aspect is, of course, the soft-skill component of it. That's another trait that intuition-based hiring can probably measure more accurately, perhaps even more accurately than AI-powered tools. But to that point, you also have to remember that AI-powered tools come with their own quirks and nuances, right? For example, there is, or there could be, algorithmic bias, or issues with fairness. And you have seen some of these things in the hiring process in the past. For example, a while back there was a tool that Amazon introduced for its hiring process, and Reuters reported that the tool had some issues, and they had to scrap it, right? So there are pros and cons on both sides, but striking that balance is really important if you want a fairer hiring process, I would say.
TIM: Yeah, I remember reading about that Amazon tool, which was a while ago now. That was about 10 years ago, if I'm not mistaken. Something like that.
DAN SARKAR: It happened in the past, and it could happen again in the future too, unless and until you are really careful. Part of it is what you have to think about — and this is where, coming from a data science and analytics background, I would say that numbers don't lie, but at the same time numbers don't always tell you the true story unless and until you really contextualize them and make sure to interpret them correctly. Otherwise that could also lead to a different level of bias, and so on. So that's why it is important that you strike a balance and understand these nuances when you are applying the tools, as well as when you are making your hiring decision and creating a structure for evaluation.
TIM: Part of the challenge is probably going to be that the parts of the organization responsible for implementing these tools — typically HR or talent acquisition — are arguably not the teams with the most knowledge around data, AI, machine learning, et cetera. Normally, people running talent teams come from a softer-skills background. It'd be very rare, for example, to see a data scientist heading up talent, or a software engineer, or any kind of engineer for that matter, or a scientist. So I feel like that's going to be a challenge, and maybe there's almost a skills gap — some extra skills or knowledge those teams might need to put themselves in a position to properly evaluate these tools. Would you agree with that?
DAN SARKAR: You are spot on. I think when you are bringing a new tool into your hiring process, which is extremely critical, you have to evaluate the tool in the proper context, in the proper way. What is your hiring objective in the organization? What type of roles are you trying to fill? What are some of the gaps you are seeing currently, and is the tool actually showing you the right things to implement? Because hiring is a tricky thing. And to that point, every company's hiring could be very different depending on what they're trying to fill for that role, right? So if you take a tool that has been designed for a certain company or with a certain use case in mind and apply it to a different context and a different company, the results could be similar, or they could be different. The other factor is that, yes, there is subjectivity in human hiring, but sometimes the output driven by the tools can also be misleading, so you have to really understand the nuances and strike that balance — the argument goes both ways. So yes, you do have to have the knowledge and skill when you are bringing these types of tools into your organization, but at the same time you have to evaluate them correctly based on your criteria and on what you are trying to achieve for your department.
TIM: Yes. One area we could drill down on a little bit might be the application screening step, because pretty much any company in the world would have job ads with candidates supplying a resume and maybe some application questions. And then typically what they've done is had someone in talent acquisition or HR manually screen through those resumes, compare them to the job ad, compare them to some words they've been given by the hiring manager about what to look for, and they'll take it from, I don't know, 500 applications down to a short list of 50, let's say. So maybe that's an interesting potential use case of AI to explore: that kind of automated screening step. I feel like, at least in theory, the use case for some kind of large language model screening is a bit of a no-brainer now. Of course, it could be done much quicker — in milliseconds, as soon as an application comes in, rather than waiting for someone in HR to come in on Monday and do the screen manually. So there's that dimension, but then there's the consistency that, again, it should be able to deliver. As opposed to some human reading a CV who's onto their 500th CV of the day and pretty bored of it by then, the AI doesn't get bored. The AI doesn't have all their biases around the people they like and the people they don't like. So you feed it a very clear set of instructions — here are exactly the 10 things I'm looking for, go and rank and score my resumes. And you could even imagine having multiple models doing this at the same time and taking an average of the models. It's got so much opportunity to make this way more transparent and measurable in a way that it currently isn't. So I don't quite get the pros-and-cons-on-both-sides argument for this. To me, it seems like a pretty one-sided affair. What do you think?
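[Show notes: a minimal sketch of the multi-model screening idea Tim describes here — the same explicit criteria go to several models, each model scores each resume, and the scores are averaged to rank a shortlist. The criteria, model names, and the score_with_model call are illustrative placeholders, not any real provider's SDK or Alooba's actual pipeline.]

```python
# Minimal sketch: score each resume with several models against the same
# explicit criteria, then average the scores and keep the top N.
from statistics import mean

# Hypothetical example criteria -- in practice these would be the "10 things
# I'm looking for" written out explicitly by the hiring manager.
CRITERIA = [
    "5+ years in data analytics or data science",
    "Hands-on SQL and Python",
    "Experience leading or mentoring analysts",
]

def score_with_model(model_name: str, resume_text: str, criteria: list[str]) -> float:
    """Hypothetical stand-in for an LLM call.

    A real pipeline would prompt `model_name` with the criteria and the resume
    and ask for a 0-100 score plus a short justification. Here it returns a
    dummy value so the sketch runs end to end.
    """
    return 50.0  # placeholder score

def screen_resume(resume_text: str, models: list[str]) -> float:
    """Average the scores from several models to smooth out single-model quirks."""
    return mean(score_with_model(m, resume_text, CRITERIA) for m in models)

def shortlist(resumes: dict[str, str], models: list[str], top_n: int = 50) -> list[str]:
    """Rank all applicants by their averaged score and keep the top N."""
    ranked = sorted(resumes, key=lambda name: screen_resume(resumes[name], models), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    applicants = {"candidate_001": "resume text here", "candidate_002": "resume text here"}
    print(shortlist(applicants, models=["model-a", "model-b"], top_n=1))
```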
DAN SARKAR: Yeah, so I agree with you that if you use these tools, they can drastically improve the screening efficiency and so on, because it definitely means someone doesn't have to go through the 500 resumes. If you can plug in the right type of keywords you are looking for in a specific role and ask the large language model to go through them, it would be able to tease that out correctly — unless and until the model hallucinates in some shape or form. So that's definitely the case; today's technology is there, so it would be able to do it. Now, one of the things is that a lot of the time, if you feed it the right instructions, then you probably get the right output, but at the same time you might miss some traits if there is no human interaction or no human review, because the model will be very model-driven, right? It's very robotic in nature. It'll be able to tease things out, and if you had all the intelligence built into the model, then it could be a different case. Large language models at this point, I'd point out, are doing a lot better than they were a couple of years back in this context, but at the same time we haven't reached the point where we can tease out the emotional intelligence part from a resume. We are not in the age of true AGI at this point; we are still working toward that and making progress in that direction. So I think you have to understand that context as well when you are utilizing these tools in the screening process. And this is coming from my data science background, right? I know how these models work; I have seen how these models get built into a product, and that's where all these nuances should probably be considered, to some degree, if you want to make the process really fair and transparent.
TIM: Let me ask you this then. Let's say you had a bet for your life, okay? Your life depended on this bet, and you could choose to have a typical resume-screening HR person do a screen of your resumes, or you could have, let's say, ChatGPT or some combination of large language models that you could prompt and tell what to look for. So you could brief the HR person, or you could prompt the LLM, and they need to screen, I don't know, a thousand resumes over a week or something, and the accuracy had to be as high as possible — define that however you like. And you had to choose either an HR generalist or a combination of models. Which one would you choose currently, and why?
DAN SARKAR: I would actually take both, if there is an opportunity to have both — human oversight together with the AI models would be my answer in this case. Part of it is because, as I said, these models and tools are great; they can probably make the process efficient, but at the same time they are trained on historic data, and past behavior is not always indicative of the future, right? You know this context really well. So some level of human in the loop will be good. But yes, if you are getting a large volume of resumes and you have to narrow it down, then probably the first phase could be this AI-based screen, with some sort of rubric that you apply. But at some point, some human oversight will probably be necessary too.
TIM: So then it's fair to say you would currently see AI as more of an efficiency, time-saving automation tool, as opposed to an accuracy-improving tool that actually makes the process better. That's a broad summary — is that fair to say?
DAN SARKAR: I would say, to a large extent, it is still the case. Of course, some people will argue that, okay, we are getting into the era of artificial general intelligence, where these machines become more intelligent and are able to do different things. Agentic AI is becoming more and more predominant, and hopefully that will also be the future for some of these tools in HR analytics that you are bringing up here as a use case. But it's still not there, and we still have miles to go before we can achieve that state.
TIM: We used to have an expert write a question about machine learning or marketing analytics or whatever, and then we would have that question reviewed by another expert in the same domain, like a peer-review system. As part of that, we'd developed this kind of comprehensive definition of what a good question is and all the types of traps to avoid. We built a content management system to do all kinds of data validation checks. So we had this elaborate process to write, review, and publish a question on our platform. The end outcome is pretty good — the question quality is good — but obviously very, very slow, very, very expensive, and quite unscalable, because if we wanted to suddenly go across, I don't know, a hundred domains that are far away from our own skillset, it's kind of hard to go and find experts when you yourself have no experience. You see what I mean? So we had that process, and we recently ran a project to test how well large language models are now doing this kind of work. We managed to build a fairly comprehensive pipeline where ChatGPT writes a question and Claude reviews it in this back-and-forth programmatic way, with all these levels of data validation. Of course, it took us a few weeks to iterate through the prompts to get a consistent output. But basically we've ultimately found a way to get the same quality of question produced at a 99.99% lower cost, and it's infinitely scalable in the sense that we can now create a question about any concept in the world, as long as it's a real concept. That process opened my eyes a lot to the state of AI, or large language models in particular, and how far they've come. I now view humans, in a lot of cases, more as a bottleneck in the process, and we should be redesigning to get out of the way of these large language models and just let them get on with their job. If I could recreate my company with all these AI models working 24 hours a day, where I'm not needed — to me that's a much more exciting approach rather than just human in the loop, where they're going to just chip away at the edges. I feel like there's a bigger breakthrough here than just a little bit of automation here and there.
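[Show notes: a rough sketch of the writer/reviewer loop Tim describes — one model drafts a question, a second model critiques it against a rubric, and the draft is revised until it passes or a retry limit is hit. The call_writer and call_reviewer functions are hypothetical placeholders standing in for real LLM calls; the rubric and retry logic are illustrative, not Alooba's actual pipeline.]

```python
# Sketch of a programmatic writer/reviewer loop with a retry limit.
RUBRIC = ("Unambiguous wording, one defensibly correct answer, "
          "plausible distractors, no trick phrasing.")

def call_writer(topic: str, feedback: str | None = None) -> str:
    """Hypothetical: ask the writer model for a draft question (or a revision)."""
    return f"Draft question about {topic}"  # placeholder output

def call_reviewer(draft: str) -> tuple[bool, str]:
    """Hypothetical: ask the reviewer model to judge the draft against RUBRIC."""
    return True, "Looks good."  # placeholder (approved, feedback)

def generate_question(topic: str, max_rounds: int = 3) -> str:
    draft = call_writer(topic)
    for _ in range(max_rounds):
        approved, feedback = call_reviewer(draft)
        if approved:
            return draft  # passed the automated peer review
        draft = call_writer(topic, feedback=feedback)  # revise using the critique
    raise RuntimeError("Reviewer never approved the draft; flag for human review.")

if __name__ == "__main__":
    print(generate_question("gradient boosting"))
```

In practice, the data-validation checks Tim mentions would sit around this loop (format checks, duplicate detection, and so on), with anything that fails repeatedly routed to a person.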
DAN SARKAR: Yeah, so I think you brought up a great point. Basically, in your approach, you are applying multiple large language models and then trying to cross-validate what Claude is doing against what ChatGPT is doing, and so on. So this is definitely a really good approach, and I think it will help you refine the process to some degree — unless both Claude and ChatGPT have been trained on the same data sets with the same types of biases. Some of these biases you can recognize, but some you may not even know about, because you just don't think about them — subjective biases, biases in the data, and so on. But to that point, I think applying multiple models is great in a sense, because these models keep coming out; every other day there is a new release. The latest one last week, I think, probably came from Alibaba, right? They created a large language model and published it. Before that it was Grok 3. Before that it was Claude 3.7, right? So you are consistently seeing the technology being improved. The people building these large language models are also adding new stuff, and they're testing it in a much more rigorous way than probably two years back, when ChatGPT first came onto the market. So definitely there is an evolution, and that's why these systems are going to be much more reliable. Now, to your point, the other aspect is that, yes, humans can be a bottleneck in some of these things, and that is an experience I have — not from the HR side of it, but from the other side. One of the challenges you will probably see in the coming era is the adoption of large language models in real cases, right? Because, yes, you can build a tool, but at the end of the day people have to start using it, and it needs to be adopted in the correct sense. So that's something we are going through in terms of technology and adoption, especially in this space. There are use cases on both sides of it. I'm not saying that what you felt is wrong, because I have seen that in my experience too — not in an HR context, but in other contexts. But at the same time, you still have to understand how these models were developed. What is the process? What type of data has been fed into them? What are some of the cases where they could really trip up? Those types of things make sure that you approach this in the correct way.
TIM: I'm interested, actually, in your day-to-day life. Have you found yourself, in the last couple of years, using ChatGPT or Claude in day-to-day tasks, or in your personal life?
DAN SARKAR: It's a great question. I have played around with a lot of these models — not to the same extent at work yet, because there are, of course, a lot of things where you still have to ensure what is allowed at work versus what is not. But I have played around with some of the open-source models, with ChatGPT, Claude, and different versions of them, and even some of the other large language models — for example, Meta's models, Google's Gemini, and Copilot. All of these I have played with extensively. When did it start? Two or three years back. It was fascinating, too, to ask ChatGPT different levels of questions and see how it would respond, what it was really accurate about and what it was not. So I have done some of those things.
TIM: Still not very good at counting the number of R's in strawberry, apparently.
DAN SARKAR: Yes, I can tell you that. When I saw that, essentially what I did was ask it, and the first time it got it wrong, right? So basically I instructed it to write a Python program and count the number of R's, and it did that. Then when I asked it again what the number of R's in "strawberry" is, it was able to say it correctly. Yes.
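[Show notes: the check Dan describes boils down to counting the letters in code rather than relying on the model's token-level guess — a one-liner in Python.]

```python
# Count the letter "r" in "strawberry" directly, instead of asking the model to eyeball it.
word = "strawberry"
print(word.count("r"))  # prints 3
```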
TIM: Let's think now, just a little bit bigger picture, about where you think hiring is going in the next couple of years, particularly for analytics and data science roles. Will we still be doing, I don't know, job ads, resume screens with HR, technical interviews, reference checks, offers? Are they still going to be the process in two years, or will they look fundamentally different from the way they're currently done? What do you think is going to happen?
DAN SARKAR: I think we are going to see, of course, more and more adoption of these AI-powered tools in the hiring process. So that's definitely one trend you are going to see. Now, whether that will fully replace all the processes you mentioned — the initial screenings, the multiple rounds of interviews and evaluations, and so on that you currently see — it may not, right? So that's one trend. The other trend you are going to see, perhaps, is on the candidate side: the knowledge candidates have today versus what it used to be, because now people have access to all these large language models and GPTs and so on. The traditional way people have learned versus how candidates learn now — there is a difference there too. The key here, I think, is that every company is probably different; as long as they can understand what their objective is and utilize these tools correctly in the hiring process, I think it's a win. But again, you have to understand that in the context of what these models are able to produce, and so on. In your case, of course, it seems like you have figured out quite a bit of the process, right? Understanding different types of models and applying them in the context of hiring — that's probably the key. Not all companies have actually reached that level of maturity; some companies have to go the extra mile, perhaps. And then some companies — like Google, who have already tried to revolutionize HR analytics in the past — probably know all these trends, and they are going to push for better and better tools and analytics in the hiring process in the future. Some of it could also replace a lot of the steps humans currently do in the process as we move forward.
TIM: I suspect those massive tech companies are where you'd see a lot of the adoption, because they have the volume: any investment they make in the process is worth it if you're hiring, I don't know, 50,000 people a year or something, and they've got thousands of data scientists and engineers who could help build these tools internally anyway at a good level. So that kind of makes sense — that's probably where the innovation would come from. I'm really interested in the analogy to sports recruitment. I don't know if you have, but have you ever seen the movie Moneyball?
DAN SARKAR: I have heard about it. I have not seen it.
TIM: Yeah. So it's probably about 25 years old now — at least the story, not the movie. And it was about the revolution of baseball recruitment away from a purely intuitive, gut-feeling-based approach to one based on metrics. It's just such a fascinating case study for what I think is going to happen now in hiring for data roles. Because back in the day, all the scouts would use these kinds of little indicators or signs that they thought gave them a clue about who a good player was — things like, oh, how attractive is the player's girlfriend? That then dictates how confident they are; therefore, if they're confident, they're going to play well, and these kinds of things. And this is probably no different to: oh, well, he's got a two-page resume, not a three-page one, and I like that; or they used the right font on the resume, and I like that. All these kinds of rules of thumb that we've been using probably have no predictive power at all. Then it got replaced by this ruthless, data-driven approach, and at least in the case of that baseball team, suddenly they saw an uptick in their performance once they managed to recruit all these players at a very low cost who were drastically undervalued by the market. And that was 25 years ago. Interestingly, it wasn't until, I think, the last five or six or seven years that that started to happen in football in Europe. Even just last week, I heard the owner of Manchester United say the key reason they've wasted a billion pounds in the last 15 years is that they have no data analytics at all in recruitment. They still use these kinds of intuition-based approaches in hiring, much to their detriment. Now, maybe in regular business it's not as obvious when you're winning or losing — you don't necessarily have a points table. Maybe it takes longer to manifest, so maybe it just plays out over a longer period. Maybe also we don't have good metrics; maybe that's part of the problem — there aren't metrics of success or failure as obvious as a sports team's. But yeah, I'm interested in that analogy. What do you think will be the breakthrough in moving towards that data-driven approach? Anything in particular?
DAN SARKAR: I think you've got a great point, bringing in the comments from Manchester United and from sports. In modern sports, of course, a lot of teams are at least run by analytics, right? It's not every case — maybe Manchester United is one case — but if you look at, let's say, basketball, the Golden State Warriors are very data-driven, right? They will always look at how many points Curry has versus how many points Draymond is making, and all those things, from rebounds to everything. Sports analytics has become more and more data-driven, for sure, and a lot of these sports teams have great analytics teams; that is a fact in today's world. Data can be very powerful, right? A few years back, people used to say that data is the new oil — of course, you hear that less now. Data can be really powerful in that context because it can provide you a lot of insights if it is applied correctly, and sports analytics teams have recognized that; they are utilizing data analytics to the extent that they can. But it's not data-driven, I would say — it's more data-informed. Because again, as I told you, this is coming from my own experience, right? Data can be very powerful. Numbers don't lie, but they also don't always tell you the truth unless and until you contextualize them correctly, put the right framing around them, and then analyze and interpret them correctly. That part is still going to be there. Now, can a large language model replace that? There is potential for large language models to have that context, and they can get better and better, but at this point it's not there in today's world. So that's probably the future.
TIM: Maybe let's step back a second and think about the problems a bit more, because that would give you an indicator of where the solutions might come from later on. In your view, is there a particular common problem that you see when companies try to hire data people that really needs to be solved?
DAN SARKAR: Yeah, that's a great question, to be honest with you. I don't know whether it is a common problem across every industry. Previously it was there, to a large extent, with data science roles, because a lot of the time people really didn't understand what a data scientist should be doing. But I think that has been streamlined over the years; at this point it's probably much better. At the same time, you are seeing more and more things moving to the AI side, so that's becoming the main role. So part of it is that, yes, that factor exists to a certain extent — knowing what data people should be doing — and when companies hire people, they should make sure they know what type of problems those people will be solving. But it also gives a great opportunity for people in data, right? They can shape it. One way to think about it is that, yes, companies may not know that, but you can also go into that position, figure things out, and show them. The key here, of course, is that in the bulk of companies you have to think about what the business value is — is it a business-first approach? And sometimes, in data analytics and data science, people haven't fully perceived that correctly — what they're hiring people for versus what those people bring to the table, and so on. There is definitely some room for improvement there.
TIM: What about in terms of candidate friendliness? One thing that comes up again and again is whether you have a nice process for the candidates. And I can certainly see how, if people are thinking about a more AI-driven hiring process in the future, they would almost automatically think, oh, that's maybe going to be worse for the candidate — somehow a less human experience — which I would personally debate, but that's by the by. Are there any quick wins we can come up with to make the process a little bit more candidate-friendly? Anything you've seen work well in the past?
DAN SARKAR: I think it's a great point, because there is that concern that if you over-apply AI, you will lose that personal touch, because machines tend to be robotic. But at the same time, I think applying some sort of machine always gives you a consistency that you may not get from humans. So there are two parts to this, right? In terms of candidate friendliness, I would definitely encourage that, because that's a very big component. You have to think from the candidate's perspective too: they are putting in their time and energy, and sometimes these hiring processes can be long. There can be multiple rounds of interviews that drag things out. That increases the cost, but candidates also get burned out; they lose interest, or people are ghosting them and they don't hear back. And there are a lot of biases in the hiring process. That's where I think AI could do a lot better job, because it could make the process really efficient, and it could also bring in fairness, and so on, and you don't have to deal with that. The argument goes both ways, but you just have to strike a balance, because if AI misfires in some cases, that also needs to be prevented — that's why some sort of oversight is needed. You could apply multiple machines to do that too, or you could bring the right humans into the process, and they could do that for you, along with the AI. So that's something I think you have to think about when you are putting these processes into place.
TIM: And maybe this is good — we should aim higher — but are we almost holding AI to an impossible standard? Because for all the things you've spoken about there, we don't currently hold humans to that standard. There's no transparency over how your resume was rejected: who rejected it, why did they reject it, on what grounds, did they score it? We don't currently have that for humans. So part of my concern is not comparing AI to some perfect imaginary hiring process, but comparing it to the way humans currently do it — is it better or not? Maybe there are some pros and cons, but we should do a fair evaluation against that, versus some perfect utopia that doesn't exist.
DAN SARKAR: Yeah, and I can totally see that, because I think humans can definitely be erratic in terms of their behavior, their moods, and so on. To that point, of course, the bar is probably really high when we are trying to measure AI against humans, because in theory humans are also probably making a lot of mistakes, but people are more lenient towards them, whereas when AI makes a mistake, they make a big deal of it. So definitely that's something that needs to be straightened out over time. I think more and more people will start using these AI systems as they are able to see the benefit, and more and more we can remove the fear that, hey, AI is going to create havoc and that type of thing. That's probably going to help adoption, and maybe we won't be comparing it to a utopian standard. But to that point, of course, there is a difference, right? We know roughly how AI works currently — how it processes information, what logic a large language model uses when it completes a sentence — versus how a human thinks about something, how they perceive it, and how they approach it. That part is still part of the science and technology work happening today. Yes, the comparison probably is not fair, and I can totally see that; I have seen this in my experience too. But of course, if you aim higher, these AI systems will do a lot better job, and people will probably be less critical of them — "oh, it is making so many errors," and so on. Rather, they will take them, incorporate them into their day-to-day life, and make the whole process more efficient, I would say.
TIM: Yeah, it'll be fascinating to see where things go in the next few years. Dan, if you could ask our next guest any question about hiring, what would you choose to ask them?
DAN SARKAR: So if I were to ask the next guest a question about hiring, I would say: how would you strike a balance between objective hiring and intuition, so that your hiring becomes really fair, transparent, and efficient?
TIM: Okay. That's a great question. Which I will level at the next guest, whoever that may be, at some point next week, and I'm looking forward to hearing what they say. Dan, it's been a great conversation today. Thank you so much for joining us and sharing all your insights with our audience.
DAN SARKAR: Thanks, Tim, for having me. It was a pleasure speaking with you.