Feb. 24, 2026

437 If AI Keeps Getting Smarter, What’s the Leader’s Real Job Now and Where Human Judgment Still Matters with Andrea Iorio


As AI accelerates, many leadership conversations focus on tools, efficiency, and productivity. This episode of Partnering Leadership takes a different approach. Host Mahan Tavakoli is joined by Andrea Iorio, a global AI thought leader, former senior executive at Tinder and L’Oréal, and the author of Between You and AI.

Andrea brings a rare combination of global operating experience, deep technology fluency, and philosophical clarity to the conversation. Rather than asking how leaders can use AI better, he poses a more uncomfortable question: what still belongs uniquely to human leadership when machines increasingly outperform us at speed, scale, and analysis?

Throughout the discussion, Andrea and Mahan explore why AI is not “coming for jobs,” but for tasks, and how that distinction changes the leadership equation. They examine the risks leaders face when productivity gains mask a deeper erosion of judgment, accountability, and strategic clarity. The conversation surfaces how easy it is for leaders to outsource responsibility to systems that feel objective, confident, and precise.

The episode also confronts the hidden consequences of hyper-optimization. While AI can dramatically increase control and efficiency, Andrea argues that leaders must decide where judgment, agency, and human responsibility still matter most. From decision-making and talent development to trust, empathy, and innovation, the discussion highlights the leadership work that cannot be automated without cost.

This is a thoughtful, grounded conversation for leaders who sense that AI is reshaping not just work, but the very nature of leadership itself—and who want to stay accountable, relevant, and human in the process.


Actionable Takeaways

  • You’ll learn why AI is changing leadership less by replacing people and more by redefining which tasks still require human judgment.
  • Hear how relying on AI for productivity can quietly reduce differentiation when everyone has access to the same tools.
  • Discover why leadership accountability cannot be delegated, even when decisions are automated.
  • You’ll hear how past success can become a liability when leaders stop questioning assumptions that once worked.
  • Learn why AI literacy is not technical mastery, but understanding where data, questions, and outputs can mislead.
  • Hear how hyper-optimization can narrow what organizations notice and weaken learning over time.
  • Understand why the “human-in-the-loop” is about responsibility, not distrust of technology.
  • Explore how leaders can use time saved through automation to strengthen judgment rather than accelerate busywork.
  • Learn what thriving organizations do differently as they design hybrid teams of humans and intelligent systems.



Connect with Andrea Iorio
Andrea Iorio Website

Andrea Iorio LinkedIn

Between You and AI: Unlock the Power of Human Skills to Thrive in an AI-Driven World

Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


***DISCLAIMER: Please note that the following AI-generated transcript may not be 100% accurate and could contain misspellings or errors.***

[00:00:00] 

Mahan Tavakoli: Andrea Iorio, welcome to Partnering Leadership. I am thrilled to have you in this conversation with me.

ANDREA IORIO: Thank you so much, Mahan, for having me here on the show. It's such a pleasure being here.

Mahan Tavakoli: I can't wait to talk about Between You and AI: Unlock the Power of Human Skills to Thrive in an AI-Driven World. But before we get to that, Andrea, I'd love to know a little bit more about you. Whereabouts did you grow up, and how has your upbringing contributed to who you've become?

ANDREA IORIO: Mahan, I think lots of our listeners can guess I'm not originally from the US. I'm Italian, by my accent and by my name as well. I share it with some high-profile Italians, such as Andrea Bocelli for whoever loves music and Andrea Pirlo for whoever loves soccer. Those are usually the references I use.

I'm an economist by education. I also hold a master's degree from Johns Hopkins University; that's why I spent a year in DC. And I've always developed and nurtured some passions. Along the way [00:01:00] I fell in love with the Arabic language; that's why I did my exchange program at the American University in Cairo.

I also do martial arts. I'm a black belt in Brazilian Jiu-Jitsu. And after graduating, I moved to Brazil and spent 10 years there. I was the Chief Digital Officer at L'Oreal and also the head of Tinder, the dating app. I think some listeners know what I'm talking about. We might even share some Tinder tips.

I don't know, Mahan, it depends how far we go in our conversation. But jokes aside, it's been six, seven years now that I've fully focused on keynote speaking. I'm based out of Miami, in the US, and I'm here to exchange more across the board with you. It's such a pleasure.

Mahan Tavakoli: I look forward to that, Andrea. And also, now that you said it's been six, seven years since you've been in the US, my girls will listen to this and jump up and down: you used "six seven." I love, though, that you've had such a global perspective and global experience, and I wonder, as you have [00:02:00] had that, along with your understanding of technology, how has that formed your thinking?

Living in pretty different environments and different parts of the world?

ANDREA IORIO: I think, Mahan, that it really helped me develop, or nurture, something that I think is super important nowadays, which is adaptability, right? The more you put yourself in different contexts, the more you develop something that, so far, machines do not have yet: the ability to navigate unfamiliar environments and think outside the data sets they've been trained with.

Because, you know it well, if I used my Italian approach in, I don't know, a workplace in Germany, maybe my average, normal, regular delays wouldn't be very effective. So you need to rewire your thinking. I think having these international experiences is really great.

Plus the benefit of meeting a bunch of new people and new cultures. I think [00:03:00] that really also helps in understanding things from different angles. And as much as I now focus on a certain technology, which is AI, it definitely has very different angles to it. You look at Europe, which regulates it heavily.

You see the US, but then you might have a perspective that comes from China, which is doing great work as well. I think that was really useful. And what really helps in having this perspective is something we still do at school in Italy: learning a lot of languages. Latin and ancient Greek are still mandatory in Italy.

And I think that also helps open up the mind.

Mahan Tavakoli: It's outstanding, because I know you also spend time in the book on prompting and on the value of questioning, and I find that cognitive ability to see the world through different eyes enables you to ask very different and deeper questions, both because when you learn different languages you see the world differently, and also because of [00:04:00] those different experiences.

So I imagine that becomes even more important in a world where AI flattens out access to a lot of knowledge and information.

ANDREA IORIO: You nailed it, Mahan, because the more we get used to a certain context or way of life, or even the way we do business, the less we ask questions about it, because we've already piled up all the answers. That's the reason why kids on average ask 200 to 300 questions a day while adults do not: kids are not yet used to having the answers about all these new things that pop up in front of their eyes.

And so whenever adults put themselves in these new situations, they get back to retraining this ability to ask better questions, right? You ask yourself: okay, but why, in this company or in this country, are they approaching this problem in a very different way? The [00:05:00] way they do business in Brazil, for example, is very different from the way they do business in the US.

And the interesting part is that whenever you step out of the country you know the most, you end up asking some questions. It's funny how there's now this business case of the biggest digital bank in Brazil, called Nubank, which is now more valuable than any other bank in Brazil and is stepping into the US, because it asked itself some questions, such as: how can I totally transform the way people interact with banks?

It originated in Brazil, but it's a question that the traditional banks here in the US haven't asked themselves yet. So I think it's very much related to the quality of the questions and to putting yourself outside of your comfort zone. You see that again with kids when they discover a new country or visit a new museum.

And I think adults should really do the same.

Mahan Tavakoli: It becomes really important, but it is really hard, Andrea, [00:06:00] and I find that the

ANDREA IORIO: Yes.

Mahan Tavakoli: success we have in life. It becomes almost self reinforcing in that we've seen it before. Therefore, we know how it should work. That's why we've gotten promoted into the positions that we are in. the more experience we have, the more success we have had. find it even more challenging for the executives with extreme success to view the world the way you are asking us.

ANDREA IORIO: Totally. There's this great Harvard Business Review article by Rasmus Hougaard called "Ego Is the Enemy of Good Leadership," which points exactly to that. I did some research and discovered there's a psychological phenomenon called path dependence, which says that we tend to make decisions about the future based on our past successes.

Because successes are more familiar, you feel more comfortable making a decision about something [00:07:00] that has already worked in the past. The problem, though, as you've said, is that out there, our customers are changing, the end consumer of our business is changing, technology is changing, the market is changing.

And if we still cling to those successes that have worked in the past, it's not gonna work. This very traditional leadership mindset even brings some problems within companies that wanna foster a culture of innovation. Imagine you join a company and you come up with all of these new ideas and ask a lot of questions.

And the answer you always hear back is: no, we've already tried that, it didn't work. What pops up in your head is: okay, but then can we try it differently? Maybe the market has changed. Maybe there's a new technology to solve that. And so it's frustrating, and leaders don't realize it until, oftentimes, it's too late.

You lose your best talent and you stifle innovation.

Mahan Tavakoli: So now this becomes a great [00:08:00] opportunity, because we are going through this massive shift that you focused your book on. And you open your book with John Henry, man versus machine.

ANDREA IORIO: That's correct. There's this legend from the 1800s about a man called John Henry, who was the strongest man alive. He was working at basically carving these tunnels for trains to pass through, in the era of the California gold rush and so on. And one day a salesperson comes there, talks to his boss, and shows this new machine, a drilling machine.

And John Henry sees that and says: no, this will never replace me. I'm gonna fight it off. Let's see tomorrow who, between me and the drilling machine, will drill the fastest and the longest. Long story short, he wins the dispute, because the machine breaks down. [00:09:00] But according to the legend, he was so excited, but also so exhausted, that while raising his hands to the sky and celebrating, he died. He passed because of his exhaustion.

And the reflection I bring forth at the beginning of the book is: let's imagine that John Henry had survived, right? What would happen in six months? To me, the salesperson would come back with a drilling machine 2.0. Maybe John Henry would still win once or twice, or maybe even three times, but in two years, in three years, there's no way. We see the rate at which OpenAI is launching these new GPT models.

Now we're at 5.2, and it's crazy how much smarter and faster they become. The tale of man against machines, or technology, is that while man evolves linearly, machines do it exponentially. So if we try to compete with them at what they do best, I think we're poised to [00:10:00] lose. But this should be a wake-up call for us to try to understand: okay, what is it that AI does best?

And, as a consequence, what is it that we humans must now do better? That's a question that I don't feel is asked often within organizations, because it's too scary at times, especially for leaders, as you've said.

Mahan Tavakoli: It is terrifying. As you're speaking about this, Andrea, I can feel my heart rate going up, because part of what AI is doing, whether it's OpenAI, Anthropic, or Google, is focusing on those cognitive skills, that thinking edge that we have prided ourselves on. People like you and I have studied many years in our lives, gotten degrees based on that. Most of my listeners are people who have high levels of education and, therefore, that [00:11:00] thinking expertise. You are saying we are facing a version of John Henry.

ANDREA IORIO: Exactly. In the past, with John Henry, it was our physical attributes that were not only being substituted, they were being democratized, in the sense that a man who did not have the physical prowess of John Henry, or any person who had access to those tools, would have the same strength as, or even more strength than, John Henry.

And so this sort of dilutes the value of those skills. And then it brings us to this new dimension, the cognitive skills, which, as a consequence, were the scarce skills, right? Now they're not scarce anymore. It's like with calculators in the eighties, as you said: they democratized access to mathematical reasoning, which is a niche skill that of course is useful, but it does not really change our [00:12:00] performance at work.

And it does not make us substitutable by a calculator. Now, if we think it's just about our cognitive ability to know things, to make quick decisions, to process data and information, to work harder, to work more hours, we're competing with an AI that has all the answers, updates itself automatically, is better at processing data, and does not have a family to take care of or need vacation time.

Again, it's not so much about the substitution; it's about the dilution of those skills. We thought that by being, I don't know, a PhD in machine learning, we were the human repository of that unique information. Now we're not anymore, because with a good prompt, anyone can access that information.

Again, it doesn't mean substitution; it can mean a change in the scope of work. A developer, a software coder, will not [00:13:00] cease to exist, but will maybe become much more of a code reviewer rather than just a code writer, will think more about the architecture, will earn time back.

So these are things we can discuss that are advantages stemming from it. But yeah, nowadays access to information and knowledge is widespread. So it's a challenge.

Mahan Tavakoli: Does that, therefore, mean much of the education that we've gotten, and I've got two girls, one of them just started college, looking to get a degree, maybe a graduate degree eventually, and so on, does that mean all of that is worthless?

ANDREA IORIO: I wouldn't use "worthless," but definitely much less important than before. And it's a great question, and very timely, because I'm gonna become a father of a baby girl in a couple of days, I think. We're expecting next week; she could come anytime.

So if I rush out of the podcast, listeners, please forgive me, but I don't think it will be the case. [00:14:00] Jokes aside, I'll get some tips from you. But the thing is, I wouldn't say worthless. I would say it is less important than other types of skills that should be nurtured and developed more in the classroom.

So let's think about the way a classroom, or traditional education, works. It's an inheritance of the Industrial Revolution, which wanted to basically standardize knowledge so that you could have workers that fit into an industrial production chain. Standardization is definitely one element of the traditional classroom: up until today, you wanna standardize knowledge across students, which is something that I challenge, because in the age of AI, we can definitely hyper-personalize journeys. And the more you hyper-personalize, the more you can focus on your talents and become someone different from the crowd.

The second thing is that it fosters a focus on [00:15:00] answers and not so much on questioning, which, getting back to our point from before, means kids are being rewarded by how good their answers are, right?

How good their outputs are. What made me rethink this model is a quote by Po-Shen Loh, a math professor at Carnegie Mellon University. He says that nowadays in education, we shouldn't just teach our kids to solve their homework. We need to teach them how to correct it, grade it, review it.

That way, we teach them to keep the human in the loop as we, and they, start using more and more machines to produce the same output. It's impossible to ban the use of AI from classrooms, because if we ban it there, it will still be accessible at home, where they will use it without supervision, hiding it from the teachers.

And so it's much better to endorse that and say: okay, let's teach kids to [00:16:00] review what AI spits out as an output, whether it contains hallucinations, maybe it's biased material, maybe it is not explainable or transparent, and so on. I think that traditional education is not really worthless, but it is not a differentiator anymore.

It's very important because it fosters a number of skills, but it has to be rethought: much more personalized, much more focused on questioning rather than just answers, and, I think, with a whole new set of subjects, from financial education to ethics, things that are becoming more and more important. Not that it's unimportant, but just remembering the dates of the Napoleonic Wars and so on is something that AI can do well.

Mahan Tavakoli: That will require a huge transformation in our approach, Andrea.

Many of the universities, the systems, and the approaches we've had to education have been around for [00:17:00] hundreds of years,

ANDREA IORIO: Yes.

Mahan Tavakoli: The way we have approached education and knowledge sharing. So it is going to be very hard to transform and change that. But I do think it's relevant to leaders as well, because part of what you say is that in this future we are entering right now, we need AI literacy and human literacy. Why are those two most important, and what are the elements of each?

ANDREA IORIO: Exactly. What I argue for in the book is focusing on skills that belong to two big groups. The first one, the skills that I call AI literacy, are the skills that help us make the best use of the AI tools we have available. One example is prompting, and a second example is a skill that I call data sensemaking, which is our ability to [00:18:00] interpret the data sets we're using with AI.

And therefore identify hallucinations, biases, and so on. Again, as with calculators in the past, what ends up becoming a differentiator is how well we use the tool, not so much access to the tool: how well we frame questions, how well we break down the problem, how well we can identify the step-by-step chain-of-thought prompting that leads us to a solution.

And, again, analyze data and so on. AI literacy is a solution to that commoditization problem. That is, if I believe that just by using AI tools I'm being more efficient, more productive, and a better professional, I'm very much wrong, because actually all of my colleagues are using the same tools.

All of my competitors are using the same tools. So I fall into this [00:19:00] productivity paradox: I think I'm being more productive, but I'm not really, because the standards have risen and everybody's being more productive. What differentiates me is something else. So AI literacy is basically any skill, from prompting to data sensemaking to explainability of AI tools and so on, that helps us make better use of the tool.

Mahan Tavakoli: Let me underline a point that you just mentioned: when we all have access to the same tools, it becomes really hard to differentiate ourselves, whether as individuals or as teams and organizations. We all have access to an unlimited reserve of knowledge and insight, so that access will no longer differentiate [00:20:00] us.

ANDREA IORIO: Exactly. And it's funny, because to me a great example of that is LinkedIn, and I'll tell you why in a second. I know you post a lot on LinkedIn and create a lot of content, and I do as well. As soon as the AI tools for recording podcasts, editing videos, subtitling, generating images, anything, came out, I'm like, amazing.

I will be much more productive on LinkedIn. I will post more and I will get much more engagement. And it's funny, because exactly the opposite happened. I, of course, started to post more, and my engagement was decreasing. And I'm like, why is that happening? It's because everyone else was doing the same, and timelines were polluted.

Now we see all of these people who would not otherwise take the initiative to post, or who wouldn't even have an original idea. They just ask ChatGPT, what should I post today? And the AI tool makes the post. So you're starting to compete with a lot of new people, [00:21:00] or, as a company, with a lot of new businesses.

It's funny, because again, it also puts us in this productivity trap: if we don't notice it, we start doing what everyone else does. And instead of differentiating us, it makes us differentiate less and less, until we realize it and start to understand: okay, then maybe I should do posts that are much more unique or human.

How can I not use the tool to do this post? And it's funny how those are usually the ones with the most engagement.

Mahan Tavakoli: That's an outstanding example, and we can build off of it, whether it's LinkedIn or what is happening with the web. I'm sure

ANDREA IORIO: Oh yeah.

Mahan Tavakoli: you get these emails where with one click you can generate thousands of blog posts,

ANDREA IORIO: Yes.

Mahan Tavakoli: whenever one's generating thousands of blog posts with one click, no one will ever want to access a single blog [00:22:00] post anymore.

There are gonna be millions and billions out there. So you're absolutely right. Therefore, help us understand that AI literacy: what will help us differentiate ourselves? Because if you say prompting is important, okay, so a million of us learn how to prompt better, and we still have access to the same tool and get the same answers.

ANDREA IORIO: Exactly. That's why prompting cannot be the only one. Prompting is a good starting point. What's funny about prompting is that I expand it as a concept; in the book, I go beyond prompting AI and use it as a word for how I ask better questions of my teams, colleagues, or bosses as well.

But focusing on AI literacy, prompting is just the starting point. What's very important when it comes to AI literacy is to break down the chain through which [00:23:00] AI works. AI works by being fed, of course, data, which is a very important part; then prompting; and then the output.

So there are at least three big and very important AI literacy skills. Beyond prompting, which we already discussed, there is one related to the data sets, what I call data sensemaking: being able to understand the data sets that are being used to train and feed our AI algorithms.

To understand: okay, let's say I work on a tech team at a bank, and the bank wants to use an AI algorithm to approve credit or not. I focus very much on the algorithm, but I don't think so much about the data, because apparently I have so much historical data about loans approved or not approved within the organization that I'm just gonna feed those to the algorithm, and great.

But there [00:24:00] is a big problem there. Traditionally, our human teams at this bank would approve a loan or not based on some bias that we have as humans: I would favor a certain chunk of the population, certain zip codes, certain behaviors. If I don't understand that, then by using just internal data to feed the algorithm, I'm going to prepare the AI to approve credit to that same chunk of the population.

I'm not understanding that there's a big bias problem in the data sets I have. And I think it's not just the tech team's responsibility, but anyone's responsibility within the organization. So that's data sensemaking. The last part of the whole simplified chain of the way AI works is the output.

So a third big AI literacy skill that I propose is what I simply call critical thinking: okay, there's the human in the loop [00:25:00] evaluating what AI spits out. Can I identify a certain bias and go back to the data sets and fix them? Can I identify a hallucination or not?

There's this big recent case, for example, with Deloitte, the consultancy that had to give back 300K to the Australian government, because the Australian government said: hey, this report is full of hallucinations. And I'm sure the analysts there got some hard feedback from their managers.

The point is that, again, AI literacy is not so much about the technical aspect of how the algorithm works; it's for anyone to at least intervene, as the human in the loop, in the main steps of the way we use AI: whenever we feed the data, whenever we input our requests, and whenever we get the output.

It's about not having teams just copying and pasting what AI says without even putting an effort into it. Which, by the way, and this is my last point here, matters because it's been [00:26:00] shown that people who use AI tools tend to have less brain engagement than people who don't use AI to solve a certain problem.

This, according to an MIT Media Lab study, points to a risk of hyper-dependency and low brain engagement.

Mahan Tavakoli: I wanna understand a couple of elements of what you mentioned, Andrea. I use Waze quite often,

ANDREA IORIO: Yeah.

Mahan Tavakoli: and I have used human in the loop. Many times where I've decided not to follow what ways said and ended up getting stuck in worse traffic. So over time I have learned that it's better if I stick with ways to the point that now when I'm driving, even when I'm driving to places that I know. Even if my human intuition says maybe I should not follow ways, I have been [00:27:00] trained through past experience to follow ways. So in essence, I am taking that human in the loop out. I wonder what you think about that and our use of AI in that. Yes, human in the loop makes perfect sense, but after once, twice, three times, we learn the human in the loop the error rate. We learn pretty quickly. I better stick with the AI output. What do you think about that?

ANDREA IORIO: That's a great point. The short answer is, it's tempting to do that, but we should still keep that oversight. I'll expand on my answer, which is the following. First, let's imagine, I don't know if you've ever ridden in a self-driving car. Are they already going around DC?

Mahan Tavakoli: they are, but they don't have passengers yet, so they haven't been fully approved. They are

driving around, and it is odd, but go ahead.

ANDREA IORIO: Exactly. I tried Waymo [00:28:00] in San Francisco recently. They're not in Miami yet; they're just training their cars there. But in San Francisco, yes. And it's crazy: the first time I hopped in a Waymo, a self-driving car, no driver, I was terrified. I said, I'm not trusting this AI; we're gonna have an accident.

So our human tendency is to try to intervene, not just by trying to grab the wheel, but by avoiding using one of those at all. But the thing is, when we look at the numbers, it definitely works, and rates of accidents are much lower. Still, there's a problem, which is the following. Let's take Waze, or even Waymo, right?

The crazy part is, let's use the example of how these autonomous vehicles were trained, right, with a human driver who has to provide oversight. There was a big accident in 2018, when Uber was training one of its autonomous vehicles. There was a person responsible for pulling the brake, and [00:29:00] the car ran over a cyclist, who unfortunately passed away. And in court,

Uber was not deemed responsible. The company that developed the self-driving software was not deemed responsible. The only one deemed responsible was the human who was overseeing the vehicle and didn't pull the safety brake in time. Which brings us to a topic that is very important.

The human in the loop is not there, or not so much, because we shouldn't believe AI is powerful. AI is very powerful; it really works. As you've said, sometimes we try to outsmart Waze and it doesn't work. But the human in the loop should still be in place because, as much as we feel that we're outsourcing responsibility to AI, we are not.

We are not. So if Waze [00:30:00] tells us to go through a certain place, and it turns out you couldn't go through because there's construction there, and you get fined, you'll end up paying that fine, and it's very hard to blame Waze.

They'll say: but you had to see the sign at the entrance of that road. And using this analogy, most people don't look at the sign anymore when they're using Waze. Again, this is just an analogy. Whenever you have an AI deciding whether to give out a loan or not, it's so powerful, so accurate, and so on, that the branch manager leaves the AI on but does not really understand how it made that decision, and will not be able to explain it if an angry customer who's been denied a loan asks: but why? What should I do differently?

It's too risky not to have the human in the loop, not because AI is not as powerful as it seems, but because [00:31:00] responsibility still rests on us. And so there's this skill that I talk about toward the end of the book that I call agency. It's a skill we actually have to develop and nurture, which is to take responsibility for the tools we use, because it's very risky not to.

And AI can make mistakes. It makes far fewer mistakes than we do, but it still makes them. And so if we don't keep us humans in the loop, it's very risky.

Mahan Tavakoli: What an outstanding example of why it is critical for us to keep that human in the loop as the AI becomes more powerful. As you said, each iteration, whether it's ChatGPT or Claude, is becoming much more powerful. Therefore, it becomes easier and more likely for us to defer that human judgment to the AI. But we can't. The human judgment needs to stay with us, and that responsibility [00:32:00] stays with us. It's an outstanding example that you gave. Now, the other thing: in addition to this AI literacy, you talk in your book about human literacy. Why is that human literacy important? Andrea, the reason I ask is, I know you also mention empathy. I am a big believer in empathy. At the same time, researchers tested people interacting with chatbot responses and with medical personnel responses, and they found the chatbots were rated as more empathetic. So why is it that you say human literacy, especially empathy, trust, and judgment, becomes even more important?

ANDREA IORIO: You said it perfectly, Mahan, and it's funny because, actually, according to the research you mentioned, AI, for example, is better at recognizing human emotions than we humans are. But there's a catch here, which is why human [00:33:00] empathy is still better than AI empathy, and it's the following.

There is this psychological effect called the uncanny valley effect. The problem is, whenever you don't know it's AI interacting with you, because it's disguised as a human, there's a breach of trust the moment you find out. So, for example, let's say a company starts to use chatbots for customer care.

They're so human-like that you can't really distinguish them from human attendants. Studies show that people feel a little bit deceived whenever they learn it was AI interacting with them, which does not really happen when they knowingly and voluntarily chat with AI. And that's something we will discuss in a moment, 'cause it's a different situation. So the first problem is, whenever we use AI and we do not [00:34:00] communicate that transparently, it generates a backlash. There was a famous app called Koko that did that, and people started canceling the app because they felt there was some breach of trust.

The second part is related to the fact that machine empathy, although it simulates empathy very well, does not feel back. And the fact that it does not really feel, and that AI hasn't really developed a consciousness, brings us to what psychotherapist Esther Perel calls artificial intimacy, an intimacy that is disguised as empathy but that is very much at risk.

In case a server goes down, or Amazon Web Services gets cut off, or something like that. Because again, it's tempting: when the system's on, when you're paying for credits, when you're using AI, everything seems great. But then that friend that is always there for you, [00:35:00] no matter what, is not really a friend anymore when your credits expire, or when you don't pay for the premium subscription that month, or when a model gets updated and does not really have the same tone it did, which is the backlash GPT-5 initially had with users who loved the rapport they had built with GPT-4o.

So I think there's a risk in both things. What I argue is that because AI cannot really feel back, we cannot lean too much on it, and that the ability to feel back is what really makes for empathy. That is something uniquely human, and it has to be nurtured.

But again, that involves reading between the lines, active listening, really understanding the context of the person. AI can be great at recognizing emotions, and we can smile, but we know there are many nuances to a smile. Maybe a smile is [00:36:00] just a way to disguise an internal problem we have, or just a nervous reaction.

And AI still cannot understand that, because it does not really read the context. So I think human empathy still wins. I'm saying still because things are improving, but it still wins, Mahan, or at least it cannot be overlooked. That's for sure.

Mahan Tavakoli: It does. Andrea, as you were mentioning that, I was thinking about a couple of things. First of all, I'm pretty sure it was in Italy, because of certain privacy regulations, that Replika had

ANDREA IORIO: Yes.

Mahan Tavakoli: a month be offline people were distraught because all of a sudden they couldn't access their girlfriend or their boyfriend anymore,

ANDREA IORIO: That's right.

Mahan Tavakoli: There's an element of that.

And just before we started recording this, I posted something on LinkedIn about the fact that in this pretend empathetic relationship, [00:37:00] the question becomes: who benefits? A lot of times we think about manipulation as one manipulative individual manipulating another individual. But in many instances, these platforms, in faking that empathy, will be able to manipulate individuals at scale, tens of thousands, millions. So it's not only fake empathy, it is potentially fake empathy that can be used to the benefit of the platform.

ANDREA IORIO: Of course, to the point that online scams are on the rise. That's what they call adversarial AI. It's getting so sophisticated that scams are becoming harder to spot. And that's because you create this fake empathy, and then not just platforms but scammers online can deceive you.

Yeah, it's tough.

Mahan Tavakoli: Now, you also named three risks, Andrea, that I want to touch on.

ANDREA IORIO: Yes.

Mahan Tavakoli: substitution, dependence, and [00:38:00] commoditization. Which one worries you most, and how can we, as executives and leaders of teams, try to mitigate those three risks?

ANDREA IORIO: They're all worrying, but I would focus for a moment on substitution, because it's the talk of the town. Everybody's talking about the risk of AI substituting our jobs. And there is a big risk, but there's a "but" that we need to focus on. The starting point is that AI is not really coming for our jobs.

It's coming for our tasks, for a number of our tasks, and our jobs are usually made up of numerous tasks. The thing is, though, the majority of the tasks we do every day are what can be called work about work, right? We spend our days on repetitive, familiar, monotonous, data-driven tasks, you name it: [00:39:00] having those long meetings, then doing PowerPoints about those meetings, then sending an internal email.

I'm not saying that work about work is not important, or that it shouldn't be done. What I'm saying is that it should be done by AI. The problem, though, and the reason so many people are worried about substitution, is that their day-to-day at work looks exactly like that. It's Sisyphus-like. I studied ancient Greek in Italy, and there's this myth of Sisyphus, a man in ancient Greece condemned to push a huge rock up a hill every day, only to see it roll back down at night, right? Every day was the same. And the modern-day workplace feels the same way, for me and for so many people.

And until we break free from this, we are at risk of having our jobs substituted. Now, if we complement those data-driven, familiar, [00:40:00] repetitive, hard-skills-based tasks with ones focused more on creativity, strategy, soft skills, long-term planning, vision, and all of those things that AI is still not very good at, that's where we run the least risk of being substituted. That's much more complementarity. What I think we all should do is become curators of our own workflow, look critically at our average day-to-day at work, and ask, first: what are the tasks I can automate through AI? Automation is just pure substitution.

Second, what are the tasks that can be augmented? Those are the tasks I'm still going to do, but where I'm going to be supercharged by AI, maybe creating images through Canva AI, or doing my PowerPoints with Gamma, and so on. That's augmentation. And third, if I automate some tasks, I [00:41:00] save time.

So the third question is: what do I do with the time that I save? If I automate 30% of my tasks, what am I doing with that 30% I gain back? This third point is maybe the most important, because it's what really makes us not a commodity anymore. If everybody's becoming more productive through automation and augmentation, what differentiates us is how we use the time we gain back.

Some people might want to just chill and not work. Others might want to work but still do the wrong work. Others might use it to, say, listen better to their customers. And this last group will eventually gain a competitive edge. So substitution is a big risk, but it's a risk that we can handle and manage by looking at our scope of work and saying, okay, this is a task that AI can perform, or this one isn't, and how do I complement it with tasks that cannot be substituted? I think that's really [00:42:00] powerful.

Mahan Tavakoli: I believe that while the transition, in my view, will be bumpy, Andrea,

ANDREA IORIO: It quite likely will.

Mahan Tavakoli: outcome of all of this can be a much more human work environment in that many of our workplaces are very much roboticized as a result of industrialization. And as much as leaders have talked about wanting engagement, so on and so forth, you look at job descriptions, you look at most jobs, they have a lot of these tasks that can be automated, . So there is a , different way of tapping into that human judgment, creativity, critical thinking in the work environment. However, to get from here, there, I think we have a lot of bumps ahead.

ANDREA IORIO: That's for sure. That's the big challenge. In this perfect world, or perfect outcome, we would have much more free time to [00:43:00] spend with our families, or maybe to focus on our creative or artistic work. There will be challenges around how we make our incomes, and that's why there's talk about universal basic income and so on.

But overall, this is a scenario that on paper looks great. The problem is how to get there and still maintain a sustainable society that does not feel, I wouldn't say lazy, but adrift. Because working actually gives us purpose, and that's why sometimes we like to feel occupied, to feel that we do a lot of things.

And so the initial barrier, or risk, is our identity as people who produce, who think, in the workplace. I think that's the biggest challenge: how do we understand that AI is not necessarily a threat to our identity as those hyper-busy workers we've been taught are the best in the workplace, an [00:44:00] inheritance of an industrial revolution that wanted to keep us occupied? I think there are many ways to work around that. But again, it's more of a cultural transformation than just a technology transformation. And I think businesses especially need to understand that they have to balance the tech investment with the investment in people.

'cause otherwise, again, you'd have the best technology, the best tools, but you would have people who fear using them, or who don't feel confident using them, or who don't trust the outcomes. So I think it's a bumpy road until then.

Mahan Tavakoli: Andrea then, based on that, for the executives and leaders listening, if they want to guide their teams well through this transition, what do you think should be their priorities? How should they approach this?

ANDREA IORIO: So definitely, there's the human component and the tech component. Let's start from the tech component. First [00:45:00] of all, you want to be data ready if you want to become an AI-first company. You need to have interoperable, complete data sets flowing across the company. It's not the right moment to start if you don't have your data ready; otherwise, it's of no use.

So leaders should look first at data, then at the tech stack they want to implement, especially the orchestration they want around all these AI tools. Do I want to use a public large language model, or do I have enough private data that I want to develop a private language model? Do I want to use certain tools rather than others, when it comes to, say, GPT versus Gemini or Claude?

It really depends on the leadership and on what they envision for the company. So the tech stack is the second piece. On the human side, what really works is, of course, training around the tools. That's super important. [00:46:00] But there should also be a lot of training around AI literacy skills.

That means mapping out: do I have people who understand those data sets, and how can I train them? How can I train people to ask better questions, not only of AI but overall, what I call prompting? I also think leaders should run initiatives like the reverse mentoring we would do at L'Oréal, having our younger talents mentor our leaders in order to promote divergent thinking, and not only the other way around, which is just spreading the leadership's thinking to everyone else.

So there are a number of things, but the starting point is looking at your data, then setting up your tech stack. In parallel, you want to have people trained, but also the right environment for people to experiment with the tools without feeling judged for using AI.

That's the first, obvious thing. But the second is that they feel there's a safety net to experiment and maybe even fail. They'll make mistakes [00:47:00] at first, and hallucinations will pass through undetected. If we want to promote the use of AI, we know this will happen, and we cannot point fingers when it does. So psychological safety is definitely the safety net leaders should establish for people to experiment enough with these tools. These three things together make for a good start. Then you refine, you scale what works, and you set aside what doesn't.

I think it's just test and learn, and no one size fits all.

Mahan Tavakoli: Those are outstanding ways for leaders to think about how to lead their teams through this transformation. There's also an element of leading ourselves through it, as you mentioned: our own AI literacy, our own focus on human literacy, as well as doing it well for our teams. Andrea, if we fast forward three to five years from now, what do you [00:48:00] believe the organizations and the individuals that are thriving will be doing differently?

ANDREA IORIO: The first thing, Mahan, is basically setting up hybrid teams and designing your organization so there is a high degree of collaboration between humans and machines, especially with agentic AI, namely, more and more autonomous and proactive AI that you don't necessarily prompt, but for which you do goal setting, and which then works autonomously.

You have a way of working that is more and more independent from the human, from human initiative. And that means these agents will start working more and more like humans, making decisions along the way. That's where you have to craft these hybrid teams. You have to establish: what's the boundary up to which the agent goes before the human intervenes, right?

What are the cutoff points? Where does the human intervene, and where is [00:49:00] the human in the loop? This design of hybrid teams will make a whole lot of difference, because you want the power of agents, but you also don't want to lose the human in the loop. So you start considering agents and AI more as team members, more as workers, than in the past.

I think that's one element of thriving. The second is actually the collaboration that stems from it: organizations that collaborate more, where information flows more, where decision making is much more agile, independent, and horizontal. A consequence of all of this is that organizations will be less hierarchical, because we used to think that decisions should be approved by whoever had the most knowledge.

The point is that now, with agents and AI accessible, we all have access to that knowledge, and so decision making should be more independent. And [00:50:00] maybe the last thing will be organizations that manage to save time, which is very much needed as the first resource for innovation. Because when we look at the most important element for organizations to innovate, it's not financial resources, it's time.

That's why Google allocates 20% of its employees' time to innovation, and 3M does it with 15%. Time is very valuable, and the organizations that thrive will have more time available for divergent and creative thinking than the others. And that's a consequence of how successful your AI implementation is.

Mahan Tavakoli: Those are outstanding perspectives on what it takes to thrive. Now I want to step a little outside the focus of your book with a question, and then come back to wrap up. You mentioned that you also [00:51:00] spent some time at Tinder. I believe it's Tinder, or one of the dating apps, that is now rolling out an AI element. I'm pretty sure it's Tinder. You can give it access to all of your photos and everything else, in essence, your data. Based on its understanding of your data, what your hobbies are, your interests, likes, dislikes, all of those things, it will make much better matches and recommendations for you. So this is an example of hyper-customized, personalized AI. What I wonder about, though, is the ethics of the world we are going into. Because when the Tinders of the world, or whatever else it may be, ChatGPT included, I'm using Tinder as an example, know the granular details of your life, your [00:52:00] likes, your dislikes, your emotions, when you cry, when you laugh, they can also use those very subtly to promote whatever service and to guide you. I would love to get some of your thoughts and perspectives on the societal implications of the world we are going into.

ANDREA IORIO: You said it perfectly: one problem is ethics. But there's a second one I'm going to touch on, which is that it creates echo chambers and takes away the playfulness and spontaneity of dating. But let's focus first on the ethical side. Yes, there is an ethical boundary, though I think it should be defined by the customer.

That is: how far do I want to expose or share my data with an organization in exchange for some benefits I get? That's the sort of ethical boundary [00:53:00] we navigate nowadays, right? Whenever you are feeding information, or asking maybe your deepest questions, to an AI tool, this AI tool starts to understand you, to the point that we don't know what it will think of us in 10 years, or what it will recommend to us, and so on. To the point that I joke that, although Sam Altman said, please don't greet your AI algorithm, 'cause it takes up tokens and it's bad for sustainability, I still like to thank it, because I never know what it will think of me in the future. If it revolts, at least it will spare the polite ones. Hopefully I'm part of that group. But jokes aside, there's this ethical limit to it.

But then there's the second aspect. When we live in a hyper-optimized world where AI makes decisions for us, where's the playfulness? Where's the surprise? When it comes to dating, back when I worked at [00:54:00] Tinder, although I was not part of the decision making related to this kind of feature, I was against it, because the more you optimize dating, the more you create echo chambers, suggesting only people who supposedly fit your profile.

First of all, you take away that whole sea of opportunities with people who don't necessarily match your profile but whom you end up matching with anyway, opposites attract, as they say. That's one thing. But second, people stick to the app precisely because not everyone is a match. When you swipe, you get that dopamine rush whenever you get a match, but you also keep swiping

because there's a gamification aspect to it. Now, if we expand that to businesses, imagine an e-commerce that supposedly knows you so well that it recommends everything or even places orders by itself. On paper that's great, [00:55:00] but browsing and stumbling on something you might not have expected is nice.

It's interesting. So I fear a hyper-optimized world, taking online dating as an example, where we lose that playfulness, we lose that surprise, we lose that element. I think there should be a limit to it. I like that some of these features help you in that direction, but at the same time I think about their negatives. So there should always be a balance. Of course, recommendation and hyper-personalization are great, but the decision should still be on the customer's end. So there's a limit to that.

Mahan Tavakoli: There will be many gains as a result of it.

ANDREA IORIO: Surely.

Mahan Tavakoli: also be certain losses. So, connecting back to Italy, Andrea. As I mentioned to you before we started recording, I have had tons of wonderful trips to Italy. Not just the food, but the people, the culture. I adore the [00:56:00] country for so many different reasons. My wife and I had one of the most memorable experiences of our lives as a result of getting lost in Rome and ending up eating at a cafe we would never have found had we sought it out. It became a memorable, beautiful experience, in part as a result of getting lost, in part as a result of bumping into something we wouldn't have otherwise. I think we need to maintain some of that humanity in our interactions rather than optimizing everything.

ANDREA IORIO: I totally agree with you. And that's the consequence of those almost self-fulfilling prophecies of the TikTok travel bloggers, you know, top five things to do in Rome. What ends up happening is that those top five become the worst, because everybody is there, since that's the first thing they see

on TikTok. Or you just ask ChatGPT for an itinerary and it suggests the same one to everyone. So it's funny, as you've said. [00:57:00] I think some of the best experiences we have come exactly when something's not optimized, when something's left to randomness. And we should not forget about that,

because that's also what sometimes makes for a great customer experience: encountering a problem, and then having someone step in, usually a person, surprising you in ways that no algorithm could. So I think you're perfectly right in balancing the two things.

Mahan Tavakoli: Therefore, I think it connects well with the way we guide our teams, the conversations in the organization, the meetings, everything else, as opposed to that super-optimization.

ANDREA IORIO: Yeah.

Mahan Tavakoli: Now, I absolutely love your book, Between You and AI. Where can the audience both find your book and follow you and your work?

ANDREA IORIO: Thanks, Mahan. For anyone who's interested in the book, it's available of course on [00:58:00] Amazon, Barnes & Noble, and all retailers, online and physical, across the US. I'm pleased to say it hit the USA Today bestseller list in its launch week. It's pretty widely available.

There's also a website for the book: betweenyouand.ai. And for anyone who's interested in my work and wants to exchange ideas, my website is andreaiorio.com. I'm also on LinkedIn and Instagram under Andrea Iorio, and I'd be eager to know what you thought about the episode.

And Mahan, it was a wonderful conversation.

Mahan Tavakoli: I really appreciate it, Andrea, and I appreciate the clarity with which you communicate, the enthusiasm, and the humanity of it all. There are podcast platforms that now put out 3,000-plus AI-generated podcasts. But sometimes it's the humanity and the flow of the conversation, which is not always optimized, and the human curiosity, that make for the best ones.

ANDREA IORIO: [00:59:00] Totally agree. I'm sure this conversation was not replicable by any AI out there, and I really loved the spontaneity and the way we touched upon a number of topics, from the technology side to the human side. It was a really great conversation, Mahan, and I'm confident our audience liked it.

Mahan Tavakoli: Thank you so much, Andrea, and thank you for your book, Between You and AI: Unlock the Power of Human Skills to Thrive in an AI-Driven World. Thank you so much, Andrea Iorio.

ANDREA IORIO: Thank you so much, Mahan.