Alooba Objective Hiring

By Alooba

Episode 125
Svetlana Zavelskaya on AI and Human Interaction in the Modern Startup Workplace

Published on 3/11/2025
Host
Tim Freestone
Guest
Svetlana Zavelskaya

In this episode of the Alooba Objective Hiring podcast, Tim interviews Svetlana Zavelskaya, Head of Backend and Data Engineering at Quanata.

In this episode of the Objective Hiring Show, we are joined by Svetlana Zavelskaya, Head of Backend and Data Engineering at Quanata. Svetlana discusses her journey from Wall Street to leading a team at an insurtech startup. She offers insights on the differences between working in startups versus larger companies, especially around hiring dynamics. The conversation also delves into the nuances of integrating AI into coding and hiring processes, emphasizing the importance of cultural fit and curiosity in candidates. Svetlana shares concerns about over-reliance on AI for coding and the potential pitfalls of automated systems. The episode concludes with a discussion on maintaining candidate engagement from the initial interview to their start date.

Transcript

TIM: We are live on the Objective Hiring Show. Today we're joined by Svetlana. Svetlana, welcome to the show. Thank you so much for joining us.

SVETLANA: Thank you for having me. Looking forward to our chat.

TIM: It is absolutely our pleasure to have you with us. And where I normally like to start with guests is just to hear a little bit about Svetlana. Who are we listening to today?

SVETLANA: I'm currently the head of backend and data engineering at Quanata, which is an insurtech startup and a technology innovation arm of State Farm. We are building our own insurance platform, and we have our own insurance line, which is telematics-based. It's called HiRoad; it's not available across the entire US, just in a few states. Before that, I spent about 20 years on Wall Street in different roles around infrastructure and data.

TIM: Nice. And you mentioned Quanata is a startup. How, I'm wondering from your perspective, does working for a startup differ from working for a larger company? Is there anything that's exciting or new about it?

SVETLANA: It's really exciting. It's a much smaller company, so everything you do has much bigger visibility and probably a much bigger impact on the company. It's much faster moving, with very good energy, a lot of enthusiasm, and there's faith in what we do and a belief that what we're doing is actually for good. One of the things we're trying to do is reduce distracted driving and save lives, and when you engage in something like this, it tends to be an inspiring mission.

TIM: That's fantastic. And what about hiring then? Have you found that the way that hiring is done in a startup is different from in a larger company?

SVETLANA: Overall, yes. I've also noticed there is a difference between how hiring is done in different industries, and also between the East and West Coasts.

TIM: I'm interested in the East Coast versus West Coast divide. How does that differ in hiring?

SVETLANA: I would say it's probably much more formal on the West Coast, and people tend to come much more prepared to an interview. They would already have looked at your resume; they would have prepared questions specifically for the conversation with you, and there would be questions that are asked very strategically and focused on things that are important for the company. While very often on the East Coast, especially at big companies, the interviewer can be looking at my resume for the first time during the interview, and a lot of the time the questions are just created as we go.

TIM: Okay, and then what about the startup-versus-enterprise lens? Is startup hiring, I don't know, a bit more organic or more data-driven? What have you noticed as the main differences?

SVETLANA: I think because the team is smaller, everybody is interested in hiring fast. So the process is probably a little bit more optimized, and it's faster.

TIM: Yeah, and that makes sense because it's in line with the general pace of the environment, so it's all connected. Last time we chatted, you'd mentioned you've been hiring for a long time across many different roles. How has your approach changed over time as you've moved up the leadership ladder?

SVETLANA: I think because you start hiring at a different level, you start looking at different qualities in the individuals. If you're a technical manager, your main concern would probably be the technical skills of the person you hire, and maybe whether they can get along with the team. And that should be sufficient for any junior or mid-level engineer. As you start hiring, let's say, for a manager, or for somebody in a staff, principal engineer, or architect role, somebody who works cross-functionally, you start thinking not only about technical skills but also about their cultural fit and how they would be able to work with their counterparts and peers across the company and across different roles.

TIM: Okay. So as you get more senior, you naturally get a broader perspective. You understand more; you have more skills yourself, including softer skills. And so you have this wider perspective on what you need.

SVETLANA: Correct. As the person gets more senior, let's say engineer versus principal engineer, they should be able to reach out cross-functionally to other people and lead others, not only do their own work. But lead others and also understand the business, or be willing to learn the business and see what's important to it, because technology by itself is pretty cool, but if it doesn't bring any business value, then why?

TIM: I was playing around with an idea recently that I wanted to run by you. As large language models develop and get more and more advanced, Claude seems to be quite good at programming these days, at least for small kinds of things. Do you think that the blend of skills that, let's say, an analyst or data scientist or data engineer needs is going to become more weighted towards softer skills and business knowledge? Because for maybe some of the easier technical things, they would now prompt an LLM as opposed to knowing how to do it themselves.

SVETLANA: I think time will tell, and it will also depend on the area. LLMs are not bad at coding. But they do it so fast. After an LLM codes, just as with any engineer's code, there is a PR review that takes place. And because the LLM is so fast, the person who does that PR review tends to click the accept button faster, and that can probably let more issues slip through than if they were checking a peer's work. That's one tendency. Also, probably everybody who has listened to this podcast has played with ChatGPT and noticed that it tends to be quite wordy. Similarly, especially for somebody who is junior, doesn't have much experience in coding, and doesn't customize the coding tool for their needs and style, it will generate a lot of extra code that's not necessary, and for a junior person it's sometimes difficult to figure that out. Also think not only about bugs that can occasionally be generated, but also, if the tool is hacked or was used by a hacker, maybe a security vulnerability would squeeze in. These are just some minor concerns; there are absolutely many more, but I'm just looking at it from a practical coding perspective. At the same time, it can do a nice job generating, let's say, dashboards or working on similar things.

TIM: Is one of the limitations the context window? In a big enterprise where you have a very complex code base, it might need to make several changes across several files, and it's just too much context. Maybe it would be better off operating in really tiny microservices, but a huge monolith is just too difficult. Is that part of the problem?

SVETLANA: I think the lack of context is overall part of the problem. For example, if you have very strictly defined requirements, let's say we're talking about data and trying to write ETL, and you say transform X into Y and provide very clear guidance, it's much easier to implement than, for example, a backend application, where a developer is reading a ticket written by a business analyst who assumes the developer has worked at the company for a while and has all the business context, which, again, an LLM does not.

TIM: I'm interested in another related idea. If you were starting a brand new company right now, would you set it up differently than you would have five years ago? In the sense that, like, I'm playing around with the idea at the moment that maybe most of our colleagues will soon be AI agents, not humans, and we would need to optimize the environment to cater to them a bit more than to humans, or some kind of combination. What do you think of that idea?

SVETLANA: It's a very attractive idea, but I don't think we are there yet. A lot of members of my team are using different tools to help them with coding, and there are a lot of processes and automation. But I think one of the confusions that happens when people talk about AI agents is about what an agent actually is. Sometimes all it really is is just process automation. And lately we also focus so much on LLMs that we forget that an LLM is just a significant part of overall artificial intelligence, just one aspect of it. There is a lot of mathematics and statistics and other things at play here.

TIM: You mentioned something really interesting in passing around, let's say, the AI doing the coding and then getting to the PR, the pull request stage, and people going, yeah, fine, accept, and maybe not scrutinizing it as closely as they would if a human colleague had written it. I'd love to hear more about your experience there, because that's a really interesting phenomenon that is happening.

SVETLANA: I think that's probably part of it. You tend to go with the speed of your partner; the way your partner operates tends to affect you. And AI is so fast, and we have to agree it writes much faster than any human can, so you tend to try to do the review faster. It would just be a normal human reaction. And as you go faster, you start missing things.

TIM: Yes, that's such an interesting observation. And is there also something to be said for maybe overconfidence in the LLM being part of it?

SVETLANA: Of course. I think it depends on how skeptical you are. If you look at it as a magic tool that does it all, then you will be more willing to accept the things it proposes to you. I think it also has to do with the seniority and experience of the people who are using it. What we've noticed is that junior people actually don't benefit from it as much, because they rely on it too much, and when it produces garbage, excuse me, but it is what it is, it's not so easy for them to recognize it. While for senior people it can really help their productivity a lot, because they understand much better what they want to do, they can probably give better prompts, and they are also better at analyzing what they get back.

TIM: Okay. Is it then fair to say that LLMs are like a leverage tool or a multiplier of your current skill base? So if you're a really skilled engineer, it makes you better. It's not like it makes a junior engineer better than a senior engineer who's also using the LLM.

SVETLANA: I think so. I was actually reading an interesting study done at one of the major universities recently, where they took a group of students, split them into three groups, and asked them to write code in, I think, one of the languages that's not used very much, probably something like Fortran. One group got full LLM access and could use it to develop the project. One was given Google and a lot of information, and one was given a book and told, figure it out. And of course the one that had the full LLM finished the fastest, and the one figuring it out from scratch was the slowest. But then they were all given a task to do on a test, without the use of any tools. The ones who started from scratch and actually learned it did the best, the ones that used the LLM pretty much failed or took the longest, and the third group was in the middle. So it tells you something: if you don't really do it yourself, it's almost like being in high school or college and not doing your homework.

TIM: Exactly. Okay, that's really interesting. What's then your view of... do you happen to have children, by the way?

SVETLANA: I don't.

TIM: Okay. Are they currently in high school, university, or college?

SVETLANA: Middle and high school.

TIM: Okay. What's your view on them using ChatGPT or Claude as part of their homework or research? Like, how do you currently think about it? Knowing things like that study that you've seen?

SVETLANA: So they tried, and I think it was a great lesson, because they tried to feed a math task to ChatGPT, and they came back to me and said, oh, ChatGPT is stupid. And I said it is not, but don't forget it is not a math model; it is a language model, and you're trying to apply a language model to math. Another example: just this weekend they watched a video on YouTube. One of them plays chess, and the others don't know chess, and it was a pretty funny video of two LLM models playing chess against each other. Of course it didn't go well, and they found it pretty funny. But at the same time, there are a lot of existing programs and algorithms that can play chess pretty well. Which brings us back to the conversation of what is the right place to use what. An LLM won't help you play chess, but another analytical tool that is trained on and knows how chess pieces move can help there.

TIM: Okay. So they learned, yeah, the right tool for the right job, and not to use it as a blunt instrument. What about, I'm interested in maybe their tasks in, I don't know, their English class or their history class, where it's very text-heavy. Do you think they should just be able to use it and have at it, or should they be doing it themselves? What do you think about that?

SVETLANA: I think they should be doing it themselves, because this is how they develop skills. Plus, I think it's always pretty obvious when the tool has been used, especially at their age.

TIM: Yeah.

SVETLANA: For us, we just went through the performance review cycle, and you can see who used an LLM to write theirs and who didn't.

TIM: On their own self-evaluation in that performance review, you mean, or reviewing the team?

SVETLANA: It is a self-evaluation or providing feedback to others.

TIM: And what was your perception of that? I'm interested.

SVETLANA: If it was just used to, let's say... I'm not a native English speaker, and when I came to the U.S., my English was far from perfect. It was pretty bad, to be honest. If I had had to write at that time, something like this would have been extremely useful to me, just to correct a grammar mistake or make my writing better. And I don't have anything against that, as long as the main idea is not lost. But if I write a two-line sentence and it turns it into two paragraphs, I have to think: will it deliver my message clearly?

TIM: I feel like there's something very different between taking a long text and using it to summarize versus taking a small amount and using it to make up a lot of stuff.

SVETLANA: And I don't want to sound negative or super skeptical about LLMs. I'm actually very enthusiastic, but considering my role, I'm trying to be mindful and reasonable with my enthusiasm.

TIM: Speaking of the sledgehammer approach, and saying anything can now be solved with an LLM when there are existing better, more appropriate tools: I wanted to run one idea past you. I've heard of a few companies over the past six months, and these are substantial businesses, like 500 or 1,000 people, who've said to their teams, all right, stop everything you're doing in your day job; just put a pause on it. I want you to go and figure out what you're doing and whether it can be changed, improved, or automated with Claude or ChatGPT (two different companies with two different models). Now, normally that sounds like a solution looking for a problem, which is usually not a great way to do it. But if this technology is so powerful and so groundbreaking, maybe it's almost justified to think like that, because without it you won't have the time to really think about everything you're doing, and something that previously just had to be done in a certain way could now, suddenly, actually be automated entirely with this amazing technology. What do you think about that approach?

SVETLANA: I'd say if there is something that you have to do on a regular basis or every so often, it has to be automated, because you want it to be automated, right? If you work somewhere, you want to develop your skills and work on something interesting instead of doing the same thing manually again and again; otherwise, when it comes to work experience, somebody can ask you: do you have 10 years of experience, or one year of experience doing the same thing for 10 years? And I think this is important. So I'm always a big fan of automation, done well, while also keeping in mind that it may work 99 percent of the time, but the one time it fails, it fails automatically, that failure propagates across your system very fast, and the impact of that mistake will be much more dramatic. But I think that's a reasonable risk to take; we just need to understand the risk we are taking and not ignore it. What I think AI can actually do is help connect the dots in complex processes, or maybe help create the initial map of those processes. Then, of course, any output that AI produces will have to be reviewed by humans.

TIM: Is there something to be said for... so part of the problem with LLMs is the randomness in the content they produce and the hallucinations, let's say the inaccuracies. Is there something to be said for just needing to pair the LLM with other bits of the stack, like better automated tests and more data validation at the other steps in the application layer, rather than just expecting it to produce the right content from your prompt? It's just combining it with other things.

SVETLANA: Of course, it has to be validated. Just be careful about AI writing the test cases for AI-written code.

TIM: You've got some experience with that spectacularly failing, I'm gathering.

SVETLANA: Oh no, actually, my personal experience and my team's experience with AI have been extremely positive. And probably that's why I'm so cautious.

TIM: Because it seems too good to be true, or...?

SVETLANA: You know, I'm sure you've read about a lot of issues, and the more you read and the more experience you have, the more you know that things are not as rosy. And we are using AI. I think writing test cases for code is a great example. When developers develop code, unit tests are not their primary goal; they probably need to ship code fast, and the unit tests cover basic functionality. There is, of course, a QA team that would do more testing. But if you employ AI, it can probably come up with more use cases and more tests faster, and think about edge cases faster as well. So that's great. But then again, with AI testing, especially if the tests are produced by the same model that wrote the code, at what point do we get into a loop where it becomes useless?

TIM: And is that where having multiple models available, where, I don't know, Claude's writing the code and ChatGPT's QAing it, or something, is that where it becomes a bit more powerful?

SVETLANA: I don't know. I did try feeding the output of one model to another, and it was fun to play with, but that's not something I will do at work.

TIM: We recently built a pipeline ourselves to generate question content in our platform, where what we came up with was ChatGPT writing a question and Claude reviewing it in this nice automated pipeline. It was cool to see it go back and forth and get it all ready, and then we added data validation after that as well. It was really amazing and powerful, but I think we gave it a task it was going to be pretty good at; it'll be interesting to see how it goes with coding. Can you imagine a time soon when a new product is created fully with AI-written code? So humans haven't actually written a line of code; the entire product, the entire software, is just prompted, and a series of LLMs have written the actual code.

SVETLANA: There are LLMs now that are targeted at writing mobile phone apps or websites. Here's an example for you. It's already here.

TIM: And these products work? Like, they were shipped, and they're good enough?

SVETLANA: It works. I heard they were sufficiently good. Maybe they would require some human touch; I would imagine so. But based on what I've seen, they were pretty good.

TIM: What about in the context of hiring? So have you started to dabble with AI in any bit of the process yourself? Has your team started to dabble in it, either in screening or interviews or anything else?

SVETLANA: No. Obviously, some automation is used when doing the initial screening, but it is primarily done by our talent acquisition team, not my team; we work with already pre-screened resumes. For talent acquisition, especially now, when you can get hundreds or thousands of resumes for a job posting, it would be natural to implement some automation. But it was never used to reject a candidate, just more to pick the resumes that appear the most fitting. In terms of having an AI interview people, we've never done it. And we had a few cases of people liking or not liking the use of AI for note-taking, and what we found out is that people usually don't like it, so we don't do it.

TIM: Oh, I'm interested in that, because in my head I thought a kind of no-brainer entry-level step for AI in hiring would be a human interview with an AI note-taker, summarizing things and taking notes so the interviewer doesn't have to. So I'm interested in the feedback on why. Was it the candidate who didn't like it, or the interviewer?

SVETLANA: Yeah, there were a few cases when they would ask us to turn it off.

TIM: So the candidates asked that. More like a privacy kind of thing? Interesting. Was there anything else about it? For example, I don't know, almost similar to the studying thing: because the AI is doing it now, you don't do it; you're not writing the notes, and therefore you've forgotten the details yourself. Was there some other weird consequence of using an AI note-taker?

SVETLANA: I think it has more to do with the fact that it is an interview, and most likely it would be one person facing two or more people asking them questions. Sometimes people get nervous and don't feel that comfortable, and adding another AI taking notes stresses them more. That's my guess.

TIM: Huh. That's interesting.

SVETLANA: That was actually one thing that we heard—that the interview process itself people often find stressful.

TIM: My view is, and maybe this is just a repositioning and reselling of it and the process: it could probably be pitched as, we're going to record this, and you as a candidate also get a summary of what we've discussed. Maybe they could get the video as well. And maybe it's also, we're going to use this so that, as interviewers, we can focus on you rather than getting distracted taking notes. That's the way I would think about it. Do you think candidates would receive it well if it was pitched that way?

SVETLANA: I think because candidates, especially somebody who needs a job, will probably be willing to comply. But based on what I hear, the majority of people still don't like it.

TIM: Interesting. Okay. What about the idea of an AI interview assistant? So thinking, let's say, more from the company's perspective: you, as a hiring manager, is there any potential value-add for you in having this assistant? I don't know, maybe pre-filling an interview scorecard, or coming up with its own take: hey, here are the five things you're looking for in this candidate; I've done my own evaluation of how they scored across these skills, based on the conversation I've seen. It's almost doing some of that legwork for you as an interviewer. Would that be valuable? Or is that something you're just going to do yourself anyway?

SVETLANA: That would mean the interview assistant would also be present at the interview, similar to the previous case. And again, it's personal how comfortable or uncomfortable the interviewee would be, and based on the feedback we received, they don't like it. And I think an important thing to keep in mind is that when people are looking for work, they don't look only at salary and job description. They also look at the environment they will be working in and the people they will be working with. So trying to replace humans with AI doesn't give them a feel for the place and the work, and will probably make them less likely to move forward.

TIM: My devil's advocate to that would be: wouldn't, particularly in analytics and data science, a candidate expect to be going into an environment where AI is encouraged and supported, because they're working in the field?

SVETLANA: Could be. Maybe because AI is so new and people are not so used to it, it doesn't make them overly comfortable. But as it gets used more and more widely, maybe a year from now it will become absolutely common.

TIM: Yeah, I feel that when I've had discussions with people about the interviewer being an AI, so not just an interview assistant but the actual interviewer, it seems a bit too much at the moment, although products are out there now. I think as we start to interact with AI on a day-to-day basis, calling up any call center and speaking to a bot, we'll just get more exposure to it, and I guess it'll be normalized in society. And then, as long as the product's good enough and you don't feel like you're interacting with some AI, like you have a good experience, it'll just be normal.

SVETLANA: I guess so, but at the same time, we've seen the cases when people were trying to use AI to answer questions during interviews.

TIM: And what's your perception of that?

SVETLANA: It's pretty obvious.

TIM: Is that because they're looking at two screens and talking like a robot, and it's clearly not in their own words? Is that part of the giveaway? I don't know about you, but I feel like if I were a candidate, I would not try to do that, because it would be more stressful for me. I'd rather just try to answer the questions myself, even if I don't do very well. It's more complex to somehow read the output of an LLM, put it into my own words, and then say it back to you and pretend I'm not cheating. It just seems like too much.

SVETLANA: For me too.

TIM: And what if, though, okay, maybe a slightly different scenario: what if the candidate said, hey, I'd like to use an LLM during this interview, like a sparring partner. Are you okay with that? How would you perceive that?

SVETLANA: I'd say I would probably contact HR to get their guidance, in general, right? Because different places treat using AI differently, and you should stay within the company's policies on information, right? Depending on how the interview goes and what you're going to talk to people about, you may touch on things that are private to the company's business. You don't want to feed that to ChatGPT.

TIM: Oh, of course; if the candidate is recording it themselves, then they would have access to that. Yes. Okay. If we exclude the privacy aspect, let's say you're making up the policy and could choose for yourself, as an interviewer, whether or not candidates could use AI. What's your own personal view on this?

SVETLANA: My own personal view: I don't mind it during an interview, because there are more important things to look at in a candidate during an interview than that. But while that applies to interviews, I'm not sure it would be as acceptable in other places, and I'm thinking about colleges and education.

TIM: Yes. Yes. And your example before, of people who'd used it as part of the performance review, suddenly reminded me of the last time we were doing hiring, not that long ago, when we were hiring some salespeople. One of the questions in our overall application process was: imagine it's your first day at Alooba. What are the three things you need from us to have the best chance of being successful in your role? That was a question I really wanted the candidates to answer; it's quite a specific, personal question for them. But, I don't know, at least 40 percent had clearly whacked it through ChatGPT and just copied and pasted the output, which to me is weird, because I don't care what an LLM thinks. I'm not hiring the LLM; I'm hiring you, an individual. And I found that to be unhelpful in the extreme. I would rather they had just answered it honestly, even if their answer was, I don't know, grammatically incorrect or whatever. I don't care about that; it was more about the content. So I feel that was not a great use case for ChatGPT. I'd love to get your thoughts on using data in hiring versus using gut feel, something that comes up on this show quite often. At the end of the day, once you're making that final hiring decision, is there a gut-feel component? How big is that component? How do you weigh it against other metrics you might have on the candidate's skills?

SVETLANA: I would say the component that matters is maybe not so much gut feel as cultural fit to the company. Every company has its own culture, and the person should be aligned with the company's values and culture and the way things are done; it doesn't have to be a hundred percent fit, but it should be aligned. I think that's important; otherwise, they won't be successful. We want to set that person up for success.

TIM: And you're evaluating alignment with these values through behavioral interview questions or something like that?

SVETLANA: Yeah, through the questions.

TIM: Let me throw a devil's advocate idea at you when it comes to cultural fit. I was recently traveling, and I went to some very different cities around the world. I live in Sydney, and I was lucky enough to go to Berlin, Riyadh in Saudi Arabia, and Bangkok: three places entirely different from each other and very different from Sydney. And I remember, for example, that Bangkok is a very different place from here. I'm not Thai, but you're there for even a few days and you start to get a glimpse into the customs and the way people behave and the differences, and you can just learn that. As a tangible example, if you go into a cafe and buy a coffee, when they're giving you the coffee you don't just grab it from their hands, say thanks, and leave; there's almost this extra second of thanks and Zen-ness that would be polite, and it would be impolite not to do it. So after a day or so you're like, oh, okay, I get it now. Everything's just slightly slower and slightly nicer and friendlier; they call it the Land of Smiles for a reason. So as an outsider, you can adapt fairly quickly if you're willing to do that. In some ways, do we maybe not give candidates enough credit for being able to adapt? We're judging them in that interview on their cultural fit, but really, people can adjust as they get into an environment. They see, oh, everyone behaves this way here; I now have to adjust, I now have to be more direct or less direct or whatever.

SVETLANA: When I said cultural fit, I meant something a little bit different. I have a very diverse team, with people all over the world. I've managed teams across the world, in Europe and Asia and across the U.S., and my current company is absolutely diverse; people are from everywhere. But if you're an engineer and you're part of the team, you have to be a team player, and you have to have respect for the rest of the team and be able to work with them, especially in a senior role, because in a senior role one has to lead the team. There's a pretty common conversation in the technology space about the genius jerk versus the regular person with regular skills who is a team player. And while we all tend to like geniuses, and they bring the ideas, sometimes their ideas are very off. Thinking of themselves as geniuses and not willing to hear feedback from the rest of the team, they'll just confront the team. And at the end of the day, the solution they propose is very often not the best. Or, for example, it's a solution that works for a three-person startup but won't work when the company starts growing.

TIM: What about if you had the proverbial 10Xer, someone you think is going to add way more value than a typical engineer? Like exponentially more. Because I feel like that is the case: the difference between an outstanding engineer and a good one, I think, is not a little bit; I feel like it's a lot. What happens if they're a jerk? So they are a genius, they are a jerk, but they're a 10Xer. Is it worth having them on the team? Or is it just too much of a disruption?

SVETLANA: It's not only about disruption. If your team is 10,000 people, then you can probably get away with it by almost isolating them, putting them on particular projects, or giving them a role somewhere in, let's say, architecture or elsewhere, right? Although in architecture, actually, working with people is very important, so I'm not so sure. But if you have a small team, every individual will have a big impact, especially on the morale of the team. And are you willing to sacrifice your entire team, or probably several teams, for the sake of their potential genius? Because that person won't be working alone within the team; they would have to work with others. And then maybe the team can't work as fast, but

TIM: Yes, and AI is appearing to be a genius sometime soon, so maybe we can get a very cheap, compliant genius, and we won't need the genius jerk anymore.

SVETLANA: I think the morale of the team is very important. And when you have somebody on the team who is affecting that morale negatively and the team is small, then over a long period of time, it's not worth it.

TIM: And so then, to tie this back to the discussion around cultural fit, part of it sounds like you're screening out candidates who you feel might end up being that negatively destructive influence. But I'm assuming that must be pretty rare, that in an interview someone comes across as so abrasive that you think, oh, we can't hire this person. It has to be rare, I'm assuming.

SVETLANA: Probably more often than I used to expect. It does happen, and sometimes it's pretty obvious, sometimes not so much.

TIM: And how does it manifest? Is it when you're asking them to come up with a general solution and you give feedback and they're very defensive, or what are some of the red flags that you've noticed?

SVETLANA: It's not necessarily about them being defensive, but when you talk to a person, sometimes you can see it.

TIM: If we were to strip back hiring to just the bare essentials, is there any one thing that you think actually predicts success in a role? If you could measure just one thing and know just one thing, what would you choose?

SVETLANA: That's a good one. Curiosity, if I have to say one thing. Because curiosity then translates into two other things. One, the person will keep learning their subject matter field and become better and better. In our case, we're talking about IT, so they will become better engineers, and for them to stay in that space, they have to be curious. And second, they will be learning from their peers, and they will be learning about the business, and being curious will help them connect with the people around them. Plus, technology is a space where you need to be always learning; if you don't like learning, you won't succeed.

TIM: Yes, because it changes so quickly. And so you currently try to gauge someone's curiosity in an interview, I'm guessing, and if so, how do you do it?

SVETLANA: I think in an interview that curiosity gets unpacked into two aspects: one is technical knowledge and maybe how open they are to other ideas and feedback. And second is being a team player, because probably people who are not curious and just focus on themselves won't be good team players.

TIM: Svetlana.

SVETLANA: Back to a previous question,

TIM: Yes. I've asked you a lot of questions; if you could ask our next guest one question about hiring, what would you choose to ask them?

SVETLANA: I would ask them: how do you keep the person you're interviewing genuinely engaged, from the first screening interview through the entire hiring process until they start? How would you do that?

TIM: That's a great question and not something I've asked our guests before. And I also liked that you said until they start because, of course, candidates can drop out even after they've signed a contract, after they've had an offer tabled. It's very important to get them in the door. Svetlana, it's been a great conversation today. I've really enjoyed it. We've covered a lot of different ground, and thank you so much for sharing all your insights with our audience today.

SVETLANA: Thank you very much. I really enjoyed it.