In this episode of the Alooba Objective Hiring podcast, Tim interviews Mamnoon Hadi, Head of Analytics & Insights at Readdle and Founder of Algarizm
In this episode of Alooba’s Objective Hiring Show, Tim interviews Mamnoon about the transformative role of AI in the hiring process. Mamnoon shares experiences from implementing AI for a financial services client in Ireland, discussing how AI improved efficiency and candidate experience. The discussion covers the use of AI for screening CVs, conducting initial interviews, and the potential of AI in scheduling interviews. They also explore the evolving skills required for data professionals and the ethical considerations of using AI in recruitment. The conversation highlights both the promises and limitations of AI in creating a more efficient and fairer hiring process.
TIM: Mamnoon, thank you so much for joining us on the Alooba Objective Hiring Show today. Great to have you with us!
MAMNOON: It's my pleasure, Tim.
TIM: Well, Mamnoon, I'd love to start with everyone's favorite topic at the moment, which is AI. If software was eating the world over the last 20 years, I have no idea what AI is doing to it, but it's something more than eating it.
MAMNOON: Definitely yes.
TIM: I'd love to drill down on AI in hiring in particular and get your thoughts on that. Is this something you started to experiment with in your own hiring process? If so, I'd love to hear a little bit more about that and your general thoughts on how AI could be used in the hiring process.
MAMNOON: We have implemented AI for one of our clients, a financial services company in Ireland. They wanted to set up a six- or seven-member data team from scratch, and that's where we came in, because they had no in-house domain expertise. They were exploring how they could do the data hiring, and because they had a very small team and not enough recruitment resources, we implemented an AI step in the recruitment process, which saved, I'm assuming, 30 to 40 percent of their time and made the candidate experience far better than it could have been.
TIM: And how did you do that? What did that implementation look like?
MAMNOON: So the idea was not to implement AI for its own sake; the idea was to get them the resources they needed for a data team being built from the ground up. AI turned out to be the tool, because they had no recruitment team within their own organization; it was a very lean organization in that context. What we realized, in my experience as well as that of most of the data leaders I've spoken to, is that all the good CVs look almost identical. That's the key problem: they all have the buzzwords. Python, R, implemented models, implemented MLOps. When the job was initially posted on LinkedIn, there were hundreds of CVs, even in a small market like Ireland, and they had to hire some resources in India as well, where the application count was 500-plus for a job. It was not possible for them to handle that, so we had two possible solutions: one was to go ahead and speak with everyone, which needed many more resources, and the other was to go to an agency, who were charging something like three months' salary per hire, which was mad in the context of a company that wanted to keep itself lean. So for the screening process, we used an AI tool that called everyone and recorded the conversation and the outcome of each call. We did a dry run first. I had never done this; none of the members of the team had done this. So we ran a dry run with the first 50 applicants and listened to every conversation between the AI and the candidate. I think if I had listened to a recruiter doing the initial screening call, I would have had the same level of satisfaction as I had with the AI. I'm not saying it was perfect, but the level of satisfaction was as good as you can get even from a human, and a smart human at that. So that process was cut down a lot. For the job advertised in Ireland, for which we had around a hundred applicants, we were able to finalize 11 for the next round, and they all had, in the true sense, what they wrote in their CVs. Even in India, from 500-plus applicants we were able to cut down to around 50 or 60, and they all genuinely had what they had mentioned in their CVs. And then comes the next part, which is scheduling, because the hiring process was long; it needed three different rounds, and two of those rounds were with the team members. There were around three people from the marketing team, so there were a lot of permutations and combinations in the scheduling. This is a very easy problem to solve, but for some reason no one has explored it, or at least I did not come across an AI solution that could solve this scheduling problem specifically, one that could go into everyone's calendar and just schedule. For some urgent hiring we had to do it manually. I was under the assumption that a solution to this scheduling problem must already exist, but it did not. I'm sure someone is building it today.
TIM: Yeah, it's fascinating how the seemingly simplest business problems sometimes turn out, for weird reasons, to be the hardest to solve. And you're right, scheduling meetings with multiple people is annoyingly difficult, and I'm not currently aware of any AI solution to it. If it had access to all of our calendars, it could just find a relevant spot and book it in. Surely we can't be far away from that; it's just about getting access to the data, but the logic itself should be pretty simple.
MAMNOON: Exactly, simple logic.
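[Editor's note: the "simple logic" Tim and Mamnoon describe can be sketched in a few lines of Python. This is a hypothetical illustration, not any real scheduling product; the calendars, names, and `find_common_slot` function are all invented for the example.]

```python
from datetime import datetime, timedelta

def find_common_slot(busy_by_person, duration, window_start, window_end):
    """Return the first interval of `duration` inside the window that
    overlaps nobody's busy intervals, or None if no such slot exists."""
    # Merge everyone's busy intervals into one chronologically sorted list.
    busy = sorted(iv for person in busy_by_person.values() for iv in person)
    cursor = window_start
    for start, end in busy:
        if start - cursor >= duration:   # free gap before this busy block
            return cursor, cursor + duration
        cursor = max(cursor, end)        # skip past the busy block
    if window_end - cursor >= duration:  # free time at the end of the day
        return cursor, cursor + duration
    return None

# Example: two interviewers and a candidate on the same working day.
day = datetime(2024, 5, 6)
calendars = {
    "interviewer_a": [(day.replace(hour=9), day.replace(hour=11))],
    "interviewer_b": [(day.replace(hour=10), day.replace(hour=12))],
    "candidate":     [(day.replace(hour=13), day.replace(hour=14))],
}
slot = find_common_slot(calendars, timedelta(minutes=45),
                        day.replace(hour=9), day.replace(hour=17))
print(slot)  # first 45-minute window everyone has free: 12:00-12:45
```

The hard part in practice is not this interval arithmetic but getting authorized, real-time access to everyone's calendars, which is presumably why the tooling lagged.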
TIM: Yeah, hopefully we're not too far away. But I'd love to drill down more on the first step, because this is really interesting. You said you used some kind of tool that would call the candidates and do an AI interview. I'm interested: did it schedule the interview in advance, or did it just go through the list of phone numbers and call them? Did they know they were going to have an AI interview? I'd love to hear more about the process.
MAMNOON: So yes, it was a scheduled call. Every candidate who applied went through the usual ATS system, which was accepting and rejecting candidates based on rules in the data, and that's the problem with rules: if you have a condition for Python, 99 percent of candidates will say they have worked with Python. If you have a condition that they must have worked in data science, 99 percent of candidates will have done some sort of modeling in an academic or professional setting. After that was done, an automated email went out to the candidates with the calendar. The good thing is that the tool had its own calendar, available 24/7. What a brilliant resource to have! And yes, the candidates were made aware that it was going to be an AI call. I personally called 10 or 15 candidates afterwards and talked to them about their experience, and even though they had all been rejected by the AI, their feedback was that they were perfectly okay with it. That was my concern: even though it is beneficial for us in the end, it should be a nice experience for candidates as well, and they did have a nice process. Weirdly, they felt more confident opening up to an AI than to a human. I specifically asked, "Did you feel more confident speaking to the AI versus a human?" and they said yes, which was an eye-opener for me as well.
TIM: That's such an amazing experiment you've done, and I really like the fact that you took the time to call some of the candidates yourself to get that direct feedback from them, especially those that were rejected, because that's a set of candidates a normal process just forgets about. You never really learn their thoughts on the process at all, so the fact that you took the time to do that is really insightful. What else did they tell you? You mentioned the really interesting insight that some said they felt more comfortable speaking to an AI; intuitively, I would not have guessed that, which is fascinating. Any other interesting bits of feedback from talking to the candidates?
MAMNOON: One more thing I noticed was that some of the candidates had strengths the AI was not able to pick up. Even when I looked over the whole conversation and the result the AI produced, the decision was right, but there were bits and pieces the AI recruiter didn't do well. One was where someone had used data science to improve revenue. For example, if I had done a very simple project, but one that made a huge impact on the business, the AI was not weighing that as more valuable than someone who had done a lot of modeling and heavy lifting on the technical side but might or might not have delivered a business outcome. So that is what I realized: overall, the end result of the conversation was right, but there were bits and pieces missing.
TIM: And what about the configuration? When you were setting up this interview and this AI interviewer, did you get to define what good looks like, as in, "this is actually what we're looking for," or do you feed it a job description and it figures out the details? How does that bit work?
MAMNOON: Initially, we provided the job description, but because there was a business case, it was not just the job description; there was some private information we fed in and some information that was available to the candidates as well, because in some cases candidates had questions about the job description. So we categorized it into three categories: private, meaning don't use this information to create anything shared with the candidate; public; and the third one, context, which could be used to answer candidates' questions about the job. The private information was the business case: what we are looking for, how long this role has existed, who this role is for. The context information was who this role will be reporting to, the duration, the current hiring timeline we are working towards, how we are building the team, the tech stack that exists today, and all that. And then there was the public information. So we fed in three different types of information; it took around half an hour to feed it all into the tool, but it was a good experience, to be honest. I gave feedback to the company that built it. It's still a beta product, but even for a beta, even compared to a product in a production environment, it was really good. I gave feedback on these bits and pieces to the company, so hopefully they'll improve it further as well.
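[Editor's note: the three information tiers Mamnoon describes might look something like the sketch below. All field names, values, and the `may_share` helper are invented for illustration; no real product's configuration schema is implied.]

```python
# Hypothetical configuration for an AI screening tool, split into the
# three tiers described: private (internal only), context (may be used
# to answer candidate questions), and public (freely shareable).
role_context = {
    # Never surfaced to candidates; used only for internal evaluation.
    "private": {
        "business_case": "Rebuild the analytics function from scratch",
        "evaluation_bar": "Evidence of business impact, not just modeling",
    },
    # Used to answer candidate questions about the role, but not volunteered.
    "context": {
        "reports_to": "Head of Analytics",
        "hiring_timeline": "Offer within 6 weeks",
        "tech_stack": ["Python", "SQL", "dbt"],
    },
    # Essentially the public job description.
    "public": {
        "title": "Senior Data Analyst",
        "location": "Dublin, Ireland",
    },
}

def may_share(tier: str) -> bool:
    """Only 'context' and 'public' information may reach the candidate."""
    return tier in ("context", "public")

print(may_share("private"))  # False: the business case stays internal
```

The useful design idea here is that sharing policy is attached to the tier, not to individual facts, so half an hour of data entry is enough to configure the whole interview.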
TIM: To the best of your knowledge, how does it do the evaluation? Is it taking a transcript of the conversation and doing a GPT-style analysis of the text against those criteria, or is it also analyzing the actual video? Is there any kind of video element to it?
MAMNOON: I don't think it is evaluating the facial expressions and all that; it is just evaluating the transcript.
TIM: You mentioned in passing a massive benefit of this, which is that the candidate can book in anytime they want. This AI interviewer works 24 hours a day, and with multiple instances of it running, you could do 10 interviews at the same time.
MAMNOON: Exactly; 10 candidates can be interviewed at the same time.
TIM: Yeah, which is amazing. I know the phrase "game changer" gets thrown around a lot, but in this case it really is, because you're not burdened by matching one person's calendar against another's and then the endless back-and-forth bullshit, as you alluded to before. Getting those interviews scheduled can be very tedious. But the other big thing, obviously, is that you can interview every candidate. You're not just left with the CVs, as you said, maybe the 500 CVs in India, and deciding, "Well, we can only really interview five or ten of these people; I'm going to read through 500 CVs and manually pick the five." Now you can interview many more, probably for less cost than manually reading CVs.
MAMNOON: And that's the thing the ATS systems have to pick up on. It's a game changer for them, because with this in place I didn't have to set up any of that unnecessary rule-based automation, the "they must have Python, they must have this experience" rules. So the ATS companies now have to think about that problem, because this is their competitor.
TIM: And what about the take-up rate? I know from some of our customers that when we've recommended, in a similar scenario where you've got a thousand applicants, doing a skills test as a first screen, 20 or 30 minutes, one of the most common bits of feedback we get is: "That's fair enough; that will help us screen down the candidates, and it should be accurate, but will every candidate do it?" Not every candidate is going to commit to a test before having had an opportunity to speak to someone from the company. So I'm interested in the take-up rate: what proportion of people who were invited to do that AI interview actually completed it? I'm not sure if you tracked that metric.
MAMNOON: So it was not a test; it was a conversation. Within that conversation, there were some technical questions related to the job, and I was amazed by those questions, which were curated based on the CV of that particular candidate. As for the take-up rate, I don't have the exact number, but the drop-off was not noticeable. If I have to take a guess, I think it would be about 90 percent.
TIM: Roughly 90 percent did the interview, let's say; they chose to actually follow through with it. What I'm interested in is the candidates' perception of doing an AI interview versus, for example, doing a test as a first step, and it sounds like the engagement was really quite high with the AI interview. I'm trying to understand the mentality of the candidates: did they at any point indicate why they chose to go through with it? Did they feel like they were going to get some value out of it? Did they feel like it was going to be fairer? Did they feel like, "Oh great, I've got an opportunity I otherwise wouldn't have had"? Was there any indicator of their motivations for actually doing the interview with the AI?
MAMNOON: One thing almost all the candidates mentioned was that they believe this is going to be a game changer, and for them it was a good experience. Call it the initial phase of revolutionizing the hiring process, and they felt great that they were part of that initial test. And when I told them it was a beta product, they were all amazed, like, "It is an amazing product."
TIM: One final question on this: I'm wondering if you have any comments on the candidate cohort, because it sounds like they were very receptive to this new technology. Were they a younger cohort, for example, on average, or were they spread across different age groups?
MAMNOON: It was a senior role being interviewed for, which is why it was more of a conversational screening as well. It was for director-level, senior-manager, and managerial positions, so the candidates were 30-plus. All of them sounded, or looked, at least 30-plus.
TIM: Interesting. Yeah, those are great insights you've shared on this project, because it's just so fresh. As you said, these things are changing quickly. It's a beta product, but it already sounds really pretty good, so the sky's the limit; I'm excited. What about you? Having had this experience, has it made you think about AI in hiring in a different light? For example, have some ethical concerns around AI that you might have had before been diminished because you've seen a lot of the value, and seen that even an early-stage product actually looks pretty good? What are your thoughts on the ethical side of things?
MAMNOON: From the ethical perspective, I believe recruitment through this AI will be far more ethical, for a few reasons. One is that there is no rule-based automation rejecting people through the ATS. That is basically the biggest pain point for applicants: being rejected from their CV by some rule-based automation. If I'm applying for a job, I'm giving real time and effort, and nowadays most candidates are tweaking their CVs according to the job description and then just getting rejected without even being given a chance to explain what they have done. That was a bit disheartening, but understandably so, because from the employer's perspective it simply wasn't possible to test every candidate. Now that this is in place, I think the candidate-side experience will improve, because they will feel they have at least been given a chance to explain what they have done, not just been rejected by some automation. So first, I think it is more ethical in that regard, and more courteous as well. Second, I know HR people have been trained not to be biased, but there is always an element; there's no way a human can have zero bias. They can be close to zero, but not zero. I believe AI, maybe not now, but eventually, could be trained to be zero-biased, so that is where I think doing the initial screening through an AI-based process will be far more ethical than through a human-based one.
TIM: Yeah, I agree with you, and I feel like a lot of the fears with AI in general are completely justified, but I feel like in the hiring context in particular, the traditional way of doing it is so flawed and so biased in so many ways that I'm struggling to imagine how AI could make that worse. Even just the example you've outlined there is a perfect use case. And I'd love to share a quick study that I've seen; in fact, there have been lots of studies like this in different markets around the world where a university or someone doing research would get thousands of different CVs and apply for different roles and then measure the callback rate, and all they vary on the CVs is just the names. They're trying to test basically if there is any discrimination against people from different backgrounds, and the literature is very clear: like, there are catastrophic levels of discrimination based purely on your name depending on the market you're in, so AI could surely help solve that problem.
MAMNOON: There's been gender bias; there's been ethnicity bias, as you mentioned in those studies. There have been so many experiments around this where everything was kept the same except the name or the gender, and every one of those experiments turned out to show bias.
TIM: Yeah, and so I feel like AI could help with that in no uncertain terms. So you've already used AI as an interview screening tool. Are there any other bits
MAMNOON: Right.
TIM: of the process that you feel AI could help with, or any bits of the process you'd like to automate away with AI or any other kind of tool?
MAMNOON: If some AI developer is listening, go ahead and build a solution for scheduling, please. It's such common sense, and it's such a problem to schedule an interview. For example, at another client we have three or four rounds: after the screening round there is a first round with three different individuals from three different departments, each with a different priority for that role. For me the role might be of immense importance and urgency, so I will make my calendar available today; for someone else it could be, "Yeah, I don't have time until next week." Then the second round was with three more people in the same department but of different seniority, and then the final round with the two co-founders, and it was so problematic for the recruiters to actually find the right time with the right people. So please, someone needs to build that.
TIM: I agree completely. I would personally love to see that tool as well, because it's just one of those tedious time sucks. I don't know anyone who wants to screen CVs manually; I don't know anyone who wants to schedule interviews. It's just killing time that you could spend either actually speaking to people, having a conversation with them, or doing something else, rather than spending hundreds of hours to hire one person. So I hope that problem is solved soon. What about the candidate side of the equation, as AI is changing so many things? What do you think candidates should be thinking about? Is the skill set of a data professional going to change fundamentally in the next few years? Are there skills that would be essential at the moment, like SQL or Python, that maybe won't be, because you could just get an LLM to do that for you? What are your thoughts on the changing expectations of skills?
MAMNOON: Data professionals have to think the way they did before they first started building their skill sets. They have to understand that, at least at the entry and mid-levels, there are two paths: one is AI developer, and the other is AI consumer. Everyone is, in some way, shape, or form, somewhere between these two. If they want to go towards AI development, that's a hard skill they need to build properly, and it's a different skill set from the second path, the AI consumer. If I had to draw a bell curve, most people should fall into this second category and should strive for it, because you then have knowledge of a lot of tools. When data analysis first emerged, only R and Python were available; then BI tools like Tableau were developed, some basic skills were no longer needed, and you could churn out more insights and more analytics by using those tools. So I think data professionals need to understand which way they want to go. If they want to go towards being an AI developer: just get your math. If you don't enjoy math, if you don't enjoy statistics, don't go there. But if you really enjoy providing insights or analytics, learn as many tools as possible, expose yourself to as many AI tools as possible, and test them. Get demos of those tools, because right now there are so many platforms, even aggregators. Last week I went to an aggregator platform, basically the Google of AI tools. They had around 47,000 entries, 47,000 AI tools, and I'm sure not every tool is even listed there. I know some of them would just be hobby projects, but there would be some tools that could solve real business problems.
Go there; find those tools that fit your role and help you provide insights. Two weeks back, I did an experiment. I took real company performance data, anonymized it, changed the numbers, and uploaded it to ChatGPT. I think it was the GPT-4o model I used, and then I asked the questions an analyst needs to answer. It got almost there. It's not quite there yet, but with the right context built on top of the LLMs, I think the data analyst job could evolve further using those sorts of tools, and we might get to a point, and this could be a very bold statement, where we don't need graphs or tables, where we are just asking, "My revenue numbers are down from last week. Tell me why," and the AI is saying: "Your revenue number is down because you have two types of revenue streams: new-user revenue and recurring-user revenue. Your recurring renewal rate is fine, which means everything is fine with the users acquired previously. Your new-user revenue is down. Within new-user revenue, it is driven by the number of downloads, then downloads to free-trial activation, then free trial to paid activation. Your download numbers are fine; your download-to-free-trial activation is fine; but free trial to purchase has gone down, and the most likely reason is that the market mix has changed. You were mostly getting U.S. clients; in this last week's performance, you seem to have only 10 percent U.S. clients, which were the highest-converting." I'm saying these sorts of insights could easily be generated. I'm not saying it is happening today, but we can easily go there; these are the very low-hanging fruit that analysts need to start with.
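[Editor's note: the funnel diagnosis Mamnoon walks through, finding which stage-to-stage conversion rate dropped, is mechanical enough to sketch. The stage names and numbers below are invented for illustration.]

```python
# Compare each funnel stage's conversion rate week over week and flag
# the transition whose rate fell the most, mirroring the AI's reasoning:
# "downloads are fine, trial activation is fine, trial-to-purchase dropped."

FUNNEL = ["downloads", "free_trials", "purchases"]

last_week = {"downloads": 10000, "free_trials": 2000, "purchases": 400}
this_week = {"downloads": 10100, "free_trials": 2020, "purchases": 250}

def stage_rates(counts):
    """Conversion rate from each funnel stage to the next."""
    return {
        f"{a}->{b}": counts[b] / counts[a]
        for a, b in zip(FUNNEL, FUNNEL[1:])
    }

def biggest_drop(before, after):
    """Return the stage transition whose conversion rate fell the most."""
    rb, ra = stage_rates(before), stage_rates(after)
    return min(rb, key=lambda k: ra[k] - rb[k])

print(biggest_drop(last_week, this_week))  # free_trials->purchases
```

An LLM with data access would still need this kind of decomposition underneath; what changes is that the user asks "why is revenue down?" instead of reading the dashboards that encode it.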
TIM: Yeah, it's such an interesting reimagining of that whole BI layer, because until now it's all been about, okay, let's get all these different data sets into a warehouse, get them clean, get them into a shape we understand, and pop them into endless end-user reports and dashboards. But that still requires someone to know what question to ask, to look at the graphs, to filter, to understand the metrics, to dig and segment, to then find the answer. But you're right: why have all that? That's a lot of noise, right? I have to look through a hundred different dashboards to figure out what the problem is. Surely you can just tell me to do X, because we've looked into these hundred different things. And then what would the next step look like? I guess it would be automating the decision, is that right? Because you've figured out the problem and the solution, maybe you could just...
MAMNOON: An ideal insights division should have a framework: what has happened, why it has happened, and what action to take. What we basically do now is build insights, and to answer the why, we have to go through multiple dashboards, slice and dice, and look at data sets from different angles. That "why" piece, as I mentioned, can easily be done. Then comes the action piece. That is where I think the next step would be: what action you can take as a marketing department, so that every person basically has their own data analyst in their chat, answering them straight away. If I am head of marketing, I will have the conversation from that context; if I am a marketing manager running advertisements on Facebook and Instagram, that is the conversation I'll have. And the good thing is that no human data analyst will know what Facebook rolled out yesterday or when the Facebook algorithm last changed, but this tool could. If that tool gets built, it would be brilliant, and especially if it had access to data from multiple companies, it would be able to contextualize and benchmark a lot of things for you. So that would be a great analyst to have.
TIM: Yeah, that's just a fascinating thought about where we could go. It reminds me of a conversation we had recently where you mentioned you were exploring a project to see if you could almost automate the analytics piece, and I'd love to understand why you're pursuing that and what you've learned so far.
MAMNOON: So that is exactly what we are trying to explore. I've already had calls with two of the platforms, and I don't think we are there yet. It's just like having a GPT on top of the dashboard rather than on top of your data sets, and a GPT on top of the dashboard is just "okay, read this dashboard for me." What I'm talking about, and what I was expecting, is a GPT on top of your data sets, where it can go into your revenue data, your marketing data, and your transactional data and come back with an answer; maybe not doing the data engineering side of things, but at least building the queries, doing the joins, and all that if needed. So yeah, I was thinking there had to be something like that, but it doesn't seem to exist so far, and I'm not so hopeful that it is possible right now.
TIM: But yeah, maybe it can't be that far away. You'd need a sort of data-warehouse AI user that would have all the context and direct access to all the data, and some kind of data definitions as well, and all sorts of other things, to then be able to go off and start running analyses and answering questions. In your vision of how this could work in theory, are you feeding it questions, or does it already know what to look for? Is it continually monitoring, looking for the problems it knows about? In an ideal world, if you could do this, what would it look like?
MAMNOON: In the ideal world, we would only have data engineers, who configure the data and build the warehouse to the needs, requirements, and context of that tool, whatever this XYZ tool is called. There has to be a data dictionary, and they need to maintain the right context and the right dictionaries; that is what they are doing. As long as that is in place and follows the standards and guidelines, the XYZ tool answers every inbound query by going into the data sets in real time and merging across the various fields, tables, and data sets. For example, we have Stripe data in one table, downloads data in another table, and conversion data in some other table; it goes into those different data sets and tables and comes back with the insights. That is what I was thinking when I was out shopping for tools, and so far it's just "here's the dashboard; read this dashboard to me." So that's the inbound side. The next step would then be the outbound side: an alerting mechanism, or an opportunity-finding mechanism. Something like: "70 percent of the active users who are due for renewal next month have not used the app in the last three months. Do something, or else you will lose them." That kind of outbound alerting mechanism would, I think, be the natural next step. So first inbound, coming back with insights, and then outbound: do this, or else this will happen.
TIM: And so then it's a case of just who needs to know about those alerts, which is a similar problem to what we've had before: Who's subscribing to those reports or dashboards? Who's making a decision?
MAMNOON: Exactly, and this is where a lot of people are asking, "okay, will AI take my job, his job, that job?" This is where I think data analysts need to up their game. First, whenever this sort of tool is launched, if it is not already being built, they need to get a grasp of it, because I am sure it would need a bit of configuration, but once done right, it would be an amazing experience for the end users. So data analysts need to start learning, or in fact scavenging for, these types of tools. That's one. The second is that they would be able to deliver more: if I am providing only five insights on growth projects in a quarter today, I would be able to provide 50 or 100 or 150. People will be able to ask more in-depth questions, so more in-depth insights will be needed. Some high-level insights would be taken care of by the end users themselves, but the more in-depth, more actionable insights could be taken care of by the analytics and insights people. So I think they would have more chance to explore their domains, and it would be an amazing experience for them as well.
TIM: And like any revolutionary tool, if tools like this come to the fore, then you'd have to use them; otherwise you wouldn't be anywhere near as efficient or as effective as someone who is.
MAMNOON: Exactly.
TIM: You'd be forced to.
MAMNOON: Yeah, imagine how fast a company with these sorts of tools would be compared with one that doesn't have them.
TIM: And it sounds like, then, let's say some product like this was created, maybe the ratio of decision-makers to analysts could be higher, because each analyst is more impactful: they can produce more analytics, and some of the basic insights are being taken care of by an AI tool. I wonder whether the next bottleneck would then be the decision-making itself, because each decision-maker is now getting ten insights a day and thinking, "Oh, I can only really manage two of these." So maybe the next automation layer might be the decision itself, perhaps.
MAMNOON: Yes, it would lead to decision overload, but to be honest, that is a far better problem to have than not having any decision to make throughout the week. So in that case, the entrepreneurial skill of the individual has to come out: okay, these are the hundred things I could do, but what are the five things I must do? Which five are the most impactful? So the conversation within the organization would no longer be structured around "What is happening? Why is it happening?" It would be about the hundred decisions and actions we could take: let's pick out which actions are easiest to develop, what their impact would be, and so on. So yeah, I think the whole conversation and the culture within the organization would change once it has this sort of decision-overload problem.
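One way to read Mamnoon's "hundred things I could do, five things I must do" triage is as a ranking problem. The sketch below is purely illustrative: the action names and the impact and effort scores are made up, and impact-per-effort is just one plausible prioritization rule among many:

```python
# Hypothetical candidate actions with rough impact and effort scores (1-10).
actions = [
    {"name": f"action_{i}", "impact": impact, "effort": effort}
    for i, (impact, effort) in enumerate(
        [(9, 2), (3, 1), (8, 5), (7, 2), (5, 5), (9, 8), (6, 1), (4, 4)]
    )
]

# One simple triage rule: rank by impact per unit of effort, keep the top five.
top_five = sorted(actions, key=lambda a: a["impact"] / a["effort"], reverse=True)[:5]

print([a["name"] for a in top_five])
```

The scoring itself (estimating impact and effort) stays a human conversation; the point is only that, once scored, cutting a hundred candidates down to five is mechanical.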
TIM: Yeah, and I wonder if there are also certain domains in a business that are already well set up to consume these recommendations and automatically action them. For example, in online marketing, optimizing Google ad spend based on automatically generated insights is already the way it's done, so that's probably not a great deviation. Versus product development, where the initial stages of deciding what product to build are normally very manual: it's conversations, it's whiteboarding sessions, it's writing JIRA tickets, it's looking into the code base. That's very manual at the moment, so it might be a little bit harder to feed product insights in and automatically start creating tasks. But for things like online marketing, at least, it should be pretty seamless, I would have thought.
MAMNOON: Yeah, and even in product development, I think two data points are generally overlooked by most organizations. The first is customer support data, be it audio, email, or chat. That is our ears and eyes into the customer's mindset and feelings, and it is overlooked. And then there are the PMF surveys, NPS scores, and cancellation feedback: a lot of organizations capture them, but they don't use them. Feeding this sort of data set into the AI modeling would give the AI a lot of context as well. It could suggest why your cancellations are happening, for example: 70 percent of the users who reached out to customer support had canceled their subscription, so there is some problem there. If I am the AI, I now know that 70 percent of those users had these problems, so fix them, or build some feature that solves those customer problems. So again, it would move the organization towards more customer involvement; it would basically be the eyes, ears, and loudspeaker of the customers, feeding into decision-making as well.
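The signal Mamnoon points at, that a large share of users who contacted support went on to cancel, is essentially an overlap between two event logs. A minimal sketch, with hypothetical user IDs chosen so the overlap matches the 70 percent figure from the conversation:

```python
# Hypothetical event logs: users who contacted support, and users who canceled.
contacted_support = {101, 102, 103, 104, 105, 106, 107, 108, 109, 110}
canceled = {101, 102, 103, 104, 105, 106, 107, 201, 202}

# What share of users who contacted support ended up canceling?
overlap = contacted_support & canceled
churn_rate_after_contact = len(overlap) / len(contacted_support)

print(f"{churn_rate_after_contact:.0%} of users who contacted support canceled")
```

In practice the interesting step is the one after this: clustering the support transcripts of the overlapping users to find out *which* problems drove the cancellations, which is where an LLM over the raw chat and email text earns its keep.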
TIM: Yeah, that's such a huge opportunity. If I just think back to the last five years of running Alooba, I feel like we've been at our best when we've had a really close connection between the features we've developed and specific conversations with customers. Now, a lot of the time that was in a video call or face to face, and it was me taking that and translating it. But now most calls are recorded, the transcription is generated automatically and very accurately, and then all of that, plus, as you say, the emails, the live chat, and the support tickets, is a really fascinating data set. Sometimes I wonder whether I'm really going to make a better decision about what to build, because I'm so biased, than an AI just looking at 10,000 data points and saying, "You know what? There's a blind spot here. If you just added these three features, you'd solve 50 percent of these complaints."
MAMNOON: Exactly, especially in the B2C market, where companies might receive 50,000-60,000 emails in a month, or that many chats. It's a huge data set.
TIM: Yeah, you've certainly inspired me to at least do an extract from our customer support system and get ChatGPT to come up with some ideas on what we should build.
MAMNOON: Yeah, just start and upload it.
TIM: Exactly, and the magic will happen. If you could ask our next guest on the show one question, what question would that be?
MAMNOON: Limiting it to one question in this adventurous era is really tight, so there are a lot of questions I would like to ask. But, going on a tangent from the AI piece: how far out do hiring teams generally look when defining their success metrics? If an employee is retained for three months, do they call that a success? If an employee is retained for six months, do they call that a success? In the end, I think those metrics are very much misaligned with the company's objectives. So how should we look at it, and what should come first? I have hired people who were so amazing that I was able to evolve in my career: they were able to replace me, so I could take on extra roles and more seniority. Those sorts of hires should be traced back to the hiring strategy employed in that specific interview. So it's an open-ended question, and maybe I'm not articulating the problem well, but it's this: for every rock star we have hired, can we zero in on exactly which hiring strategy, which hiring step, got us to that decision? And how should the hiring team's success be aligned with organizational success? Retaining an employee for three or six months is not, in itself, organizational success.
TIM: Exactly. Having a good conversation around what success actually looks like: is it just not firing them? Is it that they get promoted? Is it their performance review metrics? Is it, as you say, "I hired someone and they made my life easier"? That sounds amazing. Working out the right metrics, and how we could collect them and use them to make better hiring decisions, is definitely somewhere we have to get to in the next few years, I think.
MAMNOON: Exactly.
TIM: It's been a great conversation, and I've personally learned a lot, especially from your recent experimentation. I really liked how balanced you were when talking about the experiments you've done in hiring, and how much thought you'd given to the pros and cons, the things that worked well and the things that didn't. It's really refreshing to get that nice broad perspective on the space, so thank you so much for sharing all your insights.
MAMNOON: It's my pleasure, and I'm sure I've made some mistakes that need to be unearthed as we evolve into this amazing journey of AI. It's just a start, and we're living in an amazing time.
TIM: We certainly are, and Mamnoon, again, thank you so much again for sharing your thoughts.
MAMNOON: It's my pleasure; thank you very much, Tim.