Ranked in the top 1% of all podcasts globally!
April 18, 2023

254 Organizational Strategy Using Artificial Intelligence and the Strategic Implications of AI with Yuval Atsmon, Senior Partner, McKinsey & Company | Partnering Leadership Global AI Thought Leader


In this episode of Partnering Leadership, Mahan Tavakoli speaks with Yuval Atsmon, Senior Partner at McKinsey & Company. In the conversation, Yuval Atsmon discusses the difference between good and bad strategies and why many organizations have lousy strategies. Yuval Atsmon goes on to share why CEOs and executives must embrace and experiment with AI to improve decision-making in the strategic planning process. Yuval Atsmon then shares potential applications of AI technologies, including generative AI. Finally, Yuval Atsmon talks about the importance of AI readiness for businesses and how CEOs and senior executives can move quickly on AI while bringing their team members along to align and effectively execute organizational strategy. 



Some Highlights:

- The difference between good strategy and bad strategy and why so many organizations end up with a bad strategy

- The most common mistakes with organizational strategy and how to avoid them

- Yuval Atsmon on the most significant challenges in implementing strategic plans

- How to use AI to support strategic meetings and conversations

- Why team member awareness and readiness are critical factors that will determine the speed of AI adoption in organizations

- Yuval Atsmon on AI-enabled business strategies

- The importance of considering tolerance level for mistakes with generative AI

- How to use AI to get insights on strategy relevance and link it to value-creation opportunities

- Yuval Atsmon on why AI readiness will be crucial for businesses 

- The human factors that support the adoption of AI in organizations

- How to use AI for a positive impact on the quality of decision-making in boardrooms

- Why companies that don't make AI a default approach will quickly fall behind 

- How companies can balance the potential benefits of AI with the need for human oversight and awareness

- The most critical and time-sensitive actions CEOs and executives need to take on AI


Connect with Yuval Atsmon:

Yuval Atsmon at McKinsey & Company

Yuval Atsmon on LinkedIn



Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


Transcript

***DISCLAIMER: Please note that the following AI-generated transcript may not be 100% accurate and could contain misspellings or errors.***

[00:00:00] Mahan Tavakoli: Yuval Atsmon, welcome to Partnering Leadership. I am thrilled to have you in this conversation with me. 

[00:01:50] Yuval Atsmon: Thank you, Mahan. It's a great pleasure to be with you. 

[00:01:53] Mahan Tavakoli: I'm really excited knowing the work you've done with respect to strategy and AI, and look forward to that conversation.

Before we get to that, though, Yuval, I would love to know whereabouts you grew up and how your upbringing impacted the kind of person you've become. 

[00:02:09] Yuval Atsmon: Sure. And thanks for asking. One of the things that I've learned in my work is, obviously, the more you get to know about people's backgrounds, the more you can connect with them and appreciate them. And even if we're not gonna cover all the hardship or special experiences, I think it's one of the things that I've learned, sometimes the hard way, just how meaningful it is to know people's backgrounds. For myself, I was born in Israel in 1976.

I went to school there, other than one year when my parents relocated to Dallas for a job that my mom actually had, which was still fairly unique. She was a very senior woman in the software industry. She was the only woman in the management team. And certainly moving abroad for a year because of the woman's role was less common.

But my dad had his own career, and both of them, I think, were in that way inspiring for me to be interested in tech and be interested in business. Overall, we're one of those families where topics of the economy as well as politics and others were common dinner discussion topics. But before I started my education and career, like other people in Israel, I also spent a pretty meaningful period in the military.

I was an officer. I did the normal training and continued to officer training, which also meant I did a slightly longer service in the Israeli military, which I continued for a few years also as reserve duty, all the way to a major in reserve in our armored corps, which obviously gave me all kinds of different experiences of leadership at a fairly young age.

I think when you're dragged into it, rather than having time to study what it means to be a leader, you make a lot of mistakes. But for me, I don't know what would've happened if I had to go at 18 to university. I think the fact that I had that break, for someone who at least was a very curious but slightly disorganized teenager, made a really big difference in

my attitude to what I want to achieve in life. And I think by the time I was in university, I took things a lot more seriously than I did in high school. 

[00:04:03] Mahan Tavakoli: I appreciate that background, Yuval, and I appreciate your recognition of the fact that, even as you work a lot with AI, if anything it's more important than ever for us to connect with the humanity of the person and know where those insights are coming from.

That's the value add of the individual. And one of the things I've enjoyed, beyond your background, is seeing some of your recommendations and reading some of your posts on LinkedIn. It speaks to your values, which are just as important to marry with your insights on AI and strategy. Now, you did have some experience in business before coming back to McKinsey to

do consulting. What brought you back? 

[00:04:50] Yuval Atsmon: Yeah, I did something which is relatively uncommon in consulting, or for us at McKinsey, although slightly more common these days than it used to be, which is I left the firm and came back. And I left at a fairly senior stage. In fact, I left pretty quickly after I was elected senior partner, which 

for many people felt a bit strange, because you worked so hard to get to that point and now you are leaving before you can cash in on your new status or your new role and so on. But actually, for me, not being financially driven, I was thinking of it as: if I don't leave now, will I ever leave? I will get just more and more comfortable in the McKinsey role. And in a similar way to what I did in 2006, when I decided to go and spend six years in China because I felt China was changing the world,

I was very curious about tech disruption, especially AI disruption in Silicon Valley, and I got to connect with a very exciting company called Globality. That was an AI-native organization that was trying to use AI as a way to connect small businesses with large companies and disrupt the professional services market.

And I went there and I spent a bit more than two years building that business with some great investors and board members and other partners. Obviously, I learned a lot about what it takes to do it, made all kinds of new mistakes for myself, and also did a few things that I'm quite proud of.

I continued to help Globality after I left. Globality also, I think, found out by the time that I was there that it'll take too long to build a marketplace model the way that it envisioned when I joined. So some of the original vision was really becoming the Alibaba of professional services, but actually the AI technology that the company developed seemed to be more readily monetizable with large companies that can themselves use it to just better manage every professional services

provider, whether it's a small one or a big one. Of course, most of them do a lot more work with the big ones. So it became, to some degree, a procurement platform for professional services. I think the company clearly continues to have a much bigger mission, but at least for me, the role became more about implementing essentially large software deals, or software-like deals, with big companies.

And that was a little bit less exciting at some point and made me think about what to do next, which eventually brought me back to the firm. But in that period, that experience of being surrounded by very talented founders, engineers, people that approach AI almost from the other extreme,

the sort of "what can be engineered, and then let's figure out how to use it" as opposed to "what's needed, and how do we build it," taught me a lot about what's out there, about NLP in particular, which was big long before generative AI became the flavor of the year. I saw that this is gonna be pretty transformational for how people can start to engage in a kind of human-machine interaction, because I think we all know the power of AI, long before it replaces humans,

is augmenting humans, and that's the interesting part. 

[00:07:46] Mahan Tavakoli: It is very interesting, and I've had conversations, including with Louis Rosenberg, who's one of the pioneers in virtual reality and has been involved in AI. He talks about that augmentation and the role it's going to play in the future of work. Before we get to that AI part of it, though, all organizations that I deal with, Yuval, have a strategy or strategic plan.

You had an outstanding conversation on the McKinsey Podcast with Richard Rumelt. He said trends have been toward bad strategy, and I tend to agree with that. How do you define strategy? Would you agree with Richard that there's a lot of bad strategy out there, and if so, why? 

[00:08:28] Yuval Atsmon: So I think I would agree with him, unfortunately, that bad strategy is as common as good strategy, if not more common.

But indeed, as you say, it depends a little bit on what you expect from good strategy. And I think that you expect from good strategy, of course, that it'll not only be based on the right insights, it will also drive the right commitment to action that sets a company on a different trajectory than before the strategy.

And then eventually it helps to align, in many cases, a large organization to move in more unison with a certain strategy. And sometimes you actually apply the strategy by inorganic moves, so you don't necessarily need to change the entire company to make the change in strategy.

But more often than not, I think where strategy fails is because you actually don't bring people on board to execute it. In some other cases, and I think this has been one of the most common failure modes that I see on strategy, and Richard and I talked about it on that podcast, you start with a few priorities

and then you add more and more things that need to happen until you have no priorities anymore. And it's very hard to say that anything is not important, partly because you don't want to tell your current people that they're doing something not important. No company has the non-important business unit alongside the priority business unit.

Some of them have the core and the high growth, but no one has the sort of non-important business group. And it's very difficult for people to pull resources away from things, or even give up on the hope that a great manager, a great leader, or a great product innovation will actually succeed despite a lot of historical problems in that business unit, or even indications that the market is not attractive. There's always that optimism that something will change, and that I think is one of the reasons why many companies end up with either a strategy that very few people understand in the same way, or a strategy that just tries to do too many things. 

[00:10:22] Mahan Tavakoli: Leidy Klotz, a professor at the University of Virginia, has a great book, Subtract, and he gives many examples of reasons why in life and in business we tend to look to addition rather than subtraction.

And many of the strategic conversations I've been involved with are also about addition, so the strategy continually builds rather than subtracting and focusing. So that's one of the major challenges. Now, how do you define artificial intelligence? 

[00:10:56] Yuval Atsmon: I've found that it's easier to put more stuff in this suitcase of artificial intelligence than trying to be too semantic about what it is and what it isn't.

Because actually, a lot of stuff that is pretty basic, like automation, which I don't think is artificial intelligence, which is "I just want this thing to be done repeatedly and effectively," whether it's with RPA or other ways of automation, is, in my mind, a pillar of efficiency that many companies are not utilizing.

And by the way, some smart things like that could still make a difference. There have been a few people that have compared ChatGPT to ELIZA, which is this virtual pre-wired psychiatrist AI, which obviously was no AI, 55 or so years ago. And I think when you consider how much more can be done with automation, even the more basic analysis, descriptive and simple, should in my mind be packaged as part of the transformation that companies need to do with analytics.

So I think, for those that are getting too hung up on what AI means, it's probably a symptom that they're just not doing enough, most of the time. But at the same time, I obviously accept it's not enough to do only the things which are more rule-based and more automation. A lot of what's gonna be pretty game changing over the next 10, 20 years is the places in which AI completely changes the approach of how stuff gets done,

not only augments or automates it.

[00:12:19] Mahan Tavakoli: I've seen the application of AI in organizations in many instances in the automation that you talk about, but I have not seen it used in the strategic planning and in strategy conversations. How can AI be used in a way where it augments the strategic thinking of an organization?

[00:12:38] Yuval Atsmon: So I think you are right that very few companies are using AI in a meaningful way as part of their C-suite discussions, or even as inputs to prepare for the right discussions. I think you're seeing more that are leveraging it in some spot analysis and preparation. They might use it for an M&A scan or IP scan in a more systematic way.

So I would say it has become pretty common within the life science industry, for example, to provide some ideation for new innovation or new opportunities by using AI to crawl through a lot of IP and patent application data.

You see some cases where that's becoming more common. Of course, the investment companies are probably most advanced in being creative, in finding more and more sources of AI-based or automation-based research that can give them a significant edge, as they would call it, for their investments. We're seeing it, including in the work that we do; we're obviously seeing analytics being used

as part of the strategy development process for specific things: growth analysis of past performance in a more automated or scaled way, to get to a higher level of granularity in the portfolio. But those, I would say, are still few and far between and little used. What you almost never see at all,

and I've yet to work with a client on it, put it this way, I've heard people talk about it, but I've yet to see it, is literally the AI in the room, almost as your seventh executive, where you can actually use AI in the discussion. And I used a very simple thought example.

I don't think it's complicated to use AI, for example, to do voice counting and remind the CEO at the end of the meeting: you have spoken 70% of the meeting, you may have not solicited enough input. Or, based on tone-of-voice analysis: everyone seems to agree with you,

maybe it's good to spend a little bit of time to explore alternative opinions. And this is just a few tricky examples, but there's a lot that can be done even at that level that can have quite a big impact on the quality of decision making in boardrooms. 

[00:14:38] Mahan Tavakoli: I liked you up to this point, Yuval.

Now you scared me, because that's part of what I do with some of the senior teams I work with. I'm the one that pushes back on the individuals when they are all quickly agreeing with whatever it is one executive has said before they move on. But all kidding aside, there are those types of applications that can be used

in facilitating the conversation. You also talk about the six stages of AI development. Do you mind talking through those six stages, and where do you think the biggest potential is for most of the organizations that you interact with? 

[00:15:14] Yuval Atsmon: The reason we have been spending some time to try to codify or think about those six stages is first and foremost to move away from a binary

thinking about AI: that when you are making an effort to leverage AI in strategy or in other things, it's all about something highly autonomous, highly prescriptive, and highly intelligent that makes the CEO not needed anymore. And we all know that technologically we're quite far from that.

[00:15:40] Mahan Tavakoli: Yuval, one of the reasons I find a mind block with people thinking through AI applications at the leadership team level and in the organization is because they immediately go to that binary mode that you talk about, of this autonomous artificial intelligence unit that makes the decisions, rather than seeing it as a gradient of applications all throughout.

[00:16:05] Yuval Atsmon: I think that's exactly right, and it's remarkable. At one level, you and I are discussing it as if it's the most obvious thing that there are gradients, and we all understand that in other parts of our lives. We have seen our car start to park itself, but still not drive us on its own.

So we know there are stages in other parts of life, and yet many of us seem to be completely numb when it comes to business applications. 

[00:16:28] Mahan Tavakoli: Yes. So that's why thinking through the six stages helps for the executives and CEOs to ask, where are we at and where is the biggest potential for our organization? 

[00:16:41] Yuval Atsmon: And the first stage, again, in the discussion we had earlier about using AI quite generously, you would almost think that the first stage is pre-AI. It's what most companies do have today in their different business intelligence systems, where it's a set of automated analyses that you are doing that gives you a description of the status of your business, or the status of the market, or the status of your consumer.

And again, even there, I would say the ability to do that improves, moving from information overload to what really matters and thinking about how to work with that. There's some room for companies to improve, but I would say that's an overall mature capability for many organizations.

I think in many respects, you move from data to insights, and in the second stage we talk about how to use AI for diagnostic intelligence. You talked about Richard Rumelt. Richard Rumelt says every strategy starts from simply understanding what's really going on.

Many people jump to conclusions without spending enough time to understand what's really going on, or they describe a plan without starting from why that plan will be different from where they are today. And to explain the difference, you have to explain how you got to where you are today. So diagnostic intelligence is using AI to try to draw more insights into where you are today.

Technologically, it's quite mature. Depending on the use cases, you do need to have domain experts, both in strategy and in your industry, to make it meaningful in terms of what kind of diagnostic analysis would be valuable. But often it's as simple as, let's take as an example a consumer company with a fairly large portfolio of SKUs and a large set of markets, trying to pull from the diagnostic

where you really have more material differences in inflation or sales demand or marketing changes in spend, which of course the AI can do a lot more patiently. The AI doesn't get bored from doing the analysis a hundred thousand different ways and, based on some thresholds of materiality and relevance, trying

to provide you some insight. Diagnostic intelligence, as I say, I think is totally doable for most organizations, for most things, with today's technology. You then move to the next phase, which is what we call predictive intelligence, which is really the ability to tell you what's gonna happen.

Now, I think anyone that wants to use any application with too much reliance on the future obviously needs to be careful. The point, in the case of strategy, is not necessarily to give you a very accurate prediction of the future. In the same way, if you're gonna look at the typical weather in a certain city in February, it's not a promise that it would be sunny if it's sunny 25 days on average in that city in February, but it gives you something that gives you a certain level of confidence and a certain statistical probability.

And AI is quite good at many types of predictions, depending on historical data, and which you can continue to improve, which at the very least can give you an outside view of what could happen. And if that seems to be a contradiction to what you're expecting, it can improve, in many cases, the process of planning for the future.
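The diagnostic scan described a moment ago, flagging the SKU-market cells with material differences based on some threshold, can be sketched in a few lines. The table values and the 15% threshold here are invented for illustration:

```python
# Hypothetical (sku, market) -> year-over-year sales change, as fractions.
yoy_change = {
    ("SKU-1", "US"): 0.02,
    ("SKU-1", "DE"): -0.18,
    ("SKU-2", "US"): 0.31,
    ("SKU-2", "DE"): 0.04,
}

def material_cells(changes, threshold=0.15):
    """Return portfolio cells whose absolute change exceeds the
    materiality threshold, largest movers first."""
    flagged = {k: v for k, v in changes.items() if abs(v) >= threshold}
    return sorted(flagged.items(), key=lambda kv: -abs(kv[1]))

for (sku, market), change in material_cells(yoy_change):
    print(f"{sku} in {market}: {change:+.0%}")
# SKU-2 in US: +31%
# SKU-1 in DE: -18%
```

The point of the conversation stands: the filtering itself is trivial; the value is that a machine will run it across a hundred thousand cells, and many drivers, without getting bored.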

[00:19:44] Mahan Tavakoli: This is augmented intelligence in coming up with options, potentials, insights that you might not have come up with.

It's not providing a concrete answer, it's just increasing the optionality in order to be able to come up with the best answers or best potential paths forward. So it's augmentation. It's not making the decisions for the organization.

[00:20:10] Yuval Atsmon: Completely. So all the levels we've talked about so far are all analytical tools.

At the end of the day, they should improve the quality of insights you're working with on the past and on the future, and at least provide you with a better opportunity to reflect on differences from your own expectations. And if it's aligned, you should be careful, and if it's very different, you should spend time to understand the difference.

[00:20:35] Mahan Tavakoli: It's really having a more insightful, data-driven augmentation involved in that strategic thinking process. Now, before moving on, I wanna get some of your thoughts with respect to

where organizations are when thinking through strategy. I see most of the ones that I interact with are at best at the simple analytics stage. So where are most of the organizations at this point? 

[00:21:03] Yuval Atsmon: I think the short answer is very few are utilizing more than stage one. Some have spots of stage two and some have spots of stage three, not necessarily at the same time as doing stage two.

Needless to say, companies that are making significant bets ahead of time, for example big capital investments in mining and similar types of long-term projects, real estate, they would tend to spend quite a lot of energy on prediction and trying to understand drivers.

Some of them would use more complex models and some of them would use more modern AI and so on. But if you go to the more common industrial companies, where executives are making so much of the decisions within a budget year as opposed to a lot more than that, first of all, as we know, the best proxy for the budget is what it was the year before.

There's like a 97% correlation in our research. And even when you deviate from this correlation, it's more the social aspects of who had the political influence to get more budget than anything the AI told you.

[00:22:05] Mahan Tavakoli: I hope most of my listeners are laughing along with me, Yuval. I've sat in a lot of strategic planning meetings and facilitated a bunch, and that's exactly how a lot of approaches to strategic planning work.

Thinking through both the process and application of these AI tools can help make that process more productive. In many instances, organizations and leadership teams are spending days, and at times weeks, going through this process, as you said, 97% based on the previous budget, and making tweaks based on the personalities in the room rather than being real data driven.

[00:22:45] Yuval Atsmon: Let's also recognize there's a very human factor here. Most of us are quite happy to use our sports app when we are meeting our sports plans, less so as soon as we start to fall behind. The same with the diet app, or the investment app: when the market goes up, we check it pretty often; when the markets go down, we don't open it.

And I think the reality is executives are never able to do everything they want. It's a pretty tough life for senior leaders in most organizations today. And there's almost a reluctance to get too much criticism. If I expose myself to an AI tool that will keep reminding me of my potential mistakes, or my actual mistakes, will that make my life a lot harder?

We joked about it: having AI as your own Spock, like the Star Trek Vulcan. And we all remember that it wasn't always pleasant to deal with the Vulcan on that TV show. But I think there is a very human factor, also, that senior people don't necessarily want to deal with some of that emotionally.

Even if they don't admit it. 

[00:23:43] Mahan Tavakoli: We wanna be able to turn it off. 

[00:23:45] Mahan Tavakoli: Now, these are the first three stages of AI development. And you talk about three more future-oriented stages of AI development. 

[00:23:53] Yuval Atsmon: I think the next three, and you can debate if they're three or two to some degree, or you can always break it up a bit further as we become smarter about it.

But we put the fourth one as a semi-autonomous stage, where you're starting to get not prediction, but actual recommendations of things to do. So it moves from giving you analysis to giving you synthesis, implications, recommendations. Then it moves into stage five, which is really being empowered to make selective prescriptions or decision making on your behalf.

And you can imagine, for example, some of that happening in some part of your budget. Of course, supply chain AI, in some cases, has already moved into that. Most email spam filtering is entirely at that stage, but you pick a few things: you're not gonna delete your inbox, but you'll allow your spam to be automatically filtered out.

So what's the sort of equivalent of that in business? And then, of course, the final stage: thank you, AI, you developed a strategy for me. I don't think I will see it in my lifetime personally, but I think it will happen at some point.

[00:24:54] Mahan Tavakoli: So at this point we are, as you mentioned, in the earlier stages, where we can use the AI to augment the thinking that happens for the strategic planning.

So a couple of other questions with respect to strategy, Yuval. The pace of change has continually gotten faster, with AI contributing to that. On technological change, Azeem Azhar has a great book, The Exponential Age, on the different exponential technologies which are about to hit their exponential curve, therefore speeding up change in organizations.

So when you are advising clients or advising colleagues with respect to the frequency of in-depth strategic thinking, not strategic planning necessarily, taking a week out, but revisiting strategy: how often should organizations be thinking about that? 

[00:25:45] Yuval Atsmon: No, it's a great question, because I actually think, for some of the similar reasons that we discussed before, it's such an effort sometimes to align an organization around a strategy that it feels

very painful to revisit it too often. However, there are at least two types of changes that should make you revisit it without a defined frequency. The first one is that a significant assumption you've made in your strategy is proven irrelevant.

You assumed you're gonna be operating in a certain market environment, but now there's a war in Ukraine, or there is three times the interest rate, or there is 10 times the inflation level. Those are pretty game-changing macro realities that may have been completely unpredictable at the time that you made your strategy.

So I think that's the first thing. Many companies end up making that change once the results hit them, not necessarily once the change in the external world happens, and often the denial continues even a little bit after the results start to hit them. But that's when you really have to react.

And I can come back to how AI can help on that as well. The second type, of course, is that the assumptions that you've made about your own capability change. Most strategies assume some significant success in execution. And back to where we started from on strategies: they typically have a long list of things that they want to improve at the same time.

And because history shows that it's hard to make improvements in modern organizations in a significant quantum, some strategies become invalid when big initiatives don't work out. And again, a lot of typical executives tend to do something that behavioral psychologists call escalation of commitment, which is, when a big bet in their strategy is not working,

they are now doubling down; they have to prove not only that it makes sense, but that they were right to make the bet to begin with. So instead of revisiting the strategy, they go on. And we've seen that happen also with some political leaders in some countries. But it's actually very hard to make a judgment in an efficient way that the strategy's not working or the strategy's not relevant.

And I actually think that probably the most obvious and straightforward use case of AI is to give you, on a weekly, monthly, quarterly, whatever cadence, a set of insights on whether your strategy is still relevant. Most companies know what the big assumptions in their strategies are, or frankly, AI itself can link it to the value-creation opportunity. When, as we've seen, the interest rate changes a lot, it means that your cost of capital is changing a lot.

If there is a certain activity that requires a lot of capital but continues to generate the same profit, and now the capital is so much more expensive, then that activity has become that much less value-creating. Now, we don't need AI to calculate some of it, but again, AI can serve as both the calculator and the unpleasant kind of reminder that people otherwise may not get.
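The cost-of-capital arithmetic behind that point can be made concrete with a minimal sketch. The figures below are illustrative, not from the conversation: an activity that ties up a lot of capital can flip from value-creating to value-destroying when rates rise, even though its operating profit is unchanged.

```python
# Economic profit = operating profit minus a capital charge
# (invested capital times the cost of capital).
def economic_profit(operating_profit, invested_capital, cost_of_capital):
    return operating_profit - invested_capital * cost_of_capital

profit = 10.0    # annual operating profit ($M), assumed unchanged
capital = 100.0  # capital the activity ties up ($M)

# Same activity, before and after a rise in the cost of capital.
before = economic_profit(profit, capital, 0.05)  # 5% cost of capital
after = economic_profit(profit, capital, 0.12)   # 12% after rates rise

print(round(before, 2))  # 5.0  -> value-creating
print(round(after, 2))   # -2.0 -> value-destroying with the same profit
```

A real check would use the organization's actual weighted average cost of capital and cash flows, but the directional logic, and the unpleasant reminder, is this simple.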

[00:28:35] Mahan Tavakoli: So it can help in some instances keep us honest, as you mentioned, even at the most senior levels. It's some of those human biases and heuristics that play into us doubling down and keeping up with the same strategy. So it has to be revisited. Now, one of the things that I see

happening, and not just to Google, as a result of what's happening with AI, is that a lot of organizations' business model assumptions will quickly have to shift. So how do you see AI's development having an impact on business model shifts, and the need for strategic thinking with respect to that?

[00:29:15] Yuval Atsmon: I think you're right that, as with a lot of other things in technology, the exponential curve means that by the time you realize it's disrupting your business, it's already happening very fast. And whether it's too late or not, you can debate. But often there is also a tendency to

maybe assume it's gonna happen a lot sooner than it does. We have seen in many industries examples of promised disruptions that have lagged significantly behind the expected timeline. But, like the Ernest Hemingway line, it happens gradually and then suddenly, and I think that's definitely gonna happen with

AI. I am not convinced we're at the suddenly stage in too many domains yet. I'm not saying we are 20 years away, but I don't think it's as imminent in many things. Now, of course, generative AI definitely feels like a change in people's attention, and I think that by itself creates an acceleration.

Because I'm a big believer that in many applications what's stopping AI adoption is human readiness, not technological readiness. I think that what we're experiencing right now could have an unexpected accelerating effect just on how people behave, and we've started to see that in how our kids do their homework, how companies do their marketing and videos and call center management. We're starting to see a few things, especially from those that are saying, I'm very comfortable with a 10% error rate, or embarrassment rate, or whatever you want to call it, but saving 80% of the cost. It's a no-brainer trade-off for some, so they're moving quite quickly.

So I think the human awareness aspect is harder to predict, and I think we're only a few months into what feels like a new wave of excitement about AI. Time will tell how that impacts 2023 and beyond. So I think we're going to see, as always, some of that accelerating. There might be some fatigue for some other apps where people assume too much. But I think, over time, we will see more and more disruption coming from AI for sure.

[00:31:11] Mahan Tavakoli: And with that disruption, as you mentioned, people being willing to experiment and take things on will speed it up. That's part of what I see: much more willingness, whether it's executives or kids, to seek out tools to make processes faster and easier. Having been exposed to generative AI, now all of a sudden very highly motivated, intelligent executives are saying, wait a minute, where else in my organization can we use AI?

[00:31:43] Yuval Atsmon: There's a Jewish proverb from when I was a kid that translates roughly to something like: the fool says what he knows, and the wise man knows what he says.

And the reason I'm bringing it up is that I think generative AI is the wisest fool we've ever experienced in modern times. Combined with a Google or a Microsoft search platform, if you look at the new Bing, or Bard on Google, it seems to be able to do a lot, very quickly. It definitely says a lot of what it knows, but in many respects, it doesn't know if what it says is true. Everyone has been posting all the examples of the mistakes. But as I said, it depends a lot on the tolerance. By the way, we're surrounded by people who say wrong things all the time, and we're not that upset by it.

I don't think your average call center person always gets it right either. Our attitude toward what a machine is allowed to get wrong versus what a person is allowed to get wrong is also something that could be changing. And I think if that changes, that could be one of the most profound changes, though it may not be enough in some fields, like the autonomous car. Somehow it feels a lot worse when a car crashes into a person than when another human crashes into a person, even though I think we already know today that there would most likely be a lot fewer accidents if all cars were autonomously driven.

I think likewise, we would probably feel much worse about a machine making a decision about someone's imprisonment than a human judge, although we all know the judge makes a lot of mistakes that a machine could reduce. But I think we're going to see more and more domains where mistakes are worth it for the efficiency, reminding ourselves, by the way, that humans make them as well. I think that could be game changing as well.

[00:33:21] Mahan Tavakoli: It could be. Now, one of the things, Yuval, is that as we experienced with PCs and technology, a lot of organizations ended up having chief technology officers, emphasizing technology as a core part of their organization.

Eventually, they said, we're all technology companies regardless of the industry we are in. How do you recommend organizations think about AI beyond the strategy and the strategic planning conversation? Should there be a function focused on AI? Should there be applications throughout? How should organizations think about approaching AI?

[00:33:59] Yuval Atsmon: So I think, similar to a few other things that have been said in recent years about technology or digital, likewise in AI, it's very obvious with the native companies that have grown up with AI as their method from the beginning, including, in some cases, very large companies today, the big tech players, without naming names, most of them have adopted AI, sometimes to an embarrassing level in recruiting. But those that have had it as a real part of their DNA from the beginning are obviously starting every question with what AI can do before they do it differently. I think that should be the aspiration for companies that want to be in a leading position. And I think if you're not doing that pretty rigorously, you very quickly fall into doing very little.

Of course, you can in theory do, once in a while, a real scan of opportunities, prioritize, and decide on a few use cases. But what happens pretty quickly after that, from what I've seen, is it hits the roadblocks of the organization not being willing to use it, not trusting it, not investing domain expertise in getting it right.

And there are so many ways to kill it in a company that if you don't have a default that this will happen, it ends up almost not happening most of the time, except in obviously very mature applications where it's almost plug and play or can be done very quickly for you today. So I think it does take that kind of leadership to make meaningful change in a short period of time.

It is almost always evangelized by the top leaders of the company beyond the CTO. The best CTO in the world cannot change the company to adopt technology without everyone on the executive team really wanting it as well. And sometimes that means replacing the executives who don't want it.

And the people who can do that are the board or the CEO. But it's a real change. We've seen a few companies adopting it massively and getting significant rewards for it, but it's very few at the moment.

[00:35:50] Mahan Tavakoli: It is a potential opportunity for a lot of other organizations as well in thinking through their strategy moving forward.

Now, are there books or resources you recommend to CEOs and other executives as they think through the strategy of their organization and AI's role in strategy?

[00:36:13] Yuval Atsmon: There are a few books that I think are very interesting for thinking about AI.

I don't think there is yet a book that is capable of bringing it all together as a sort of guide for a CEO, and probably those that try end up being a little too focused on some big tech examples, where, again, I think you can't compare yourself to an Amazon or a Microsoft or an Airbnb if you are a big telco or a big bank and so on.

Similarly, by the way, I love Richard Rumelt's two books on strategy, Good Strategy Bad Strategy and The Crux. I have read almost no good strategy books outside of his and a couple more. It's very hard, because very quickly someone forces too much of a certain framework or a certain approach. What I love about Richard is that he refuses to use frameworks, and he gives you just common sense advice about how to know whether you're on the right track or not. He doesn't tell you what to do, but he tells you a bit of what not to do and how to approach it with the right frame of mind. And I think likewise, when I think about books about AI, it's often the collection of different books that gives you a lot of knowledge.

So, one of the books that I enjoyed last year was A Thousand Brains by Jeff Hawkins, which is meant to be about the brain, not about AI, but it was very thought provoking in terms of whether our knowledge of the brain will at some point shape how we develop AI systems. First of all, there's nothing that prevents an AI from having consciousness, nothing in the nature of the universe that prevents it. We don't know why our own consciousness happens, so how could we say for sure that AI will not have consciousness at some point? Again, we're not talking about a 2025 problem, but nonetheless, I thought that was quite a useful framework, and also the link to how our knowledge of how our brain can do so much with so little power could shape the research, because AI is expected to take over from the brain. It obviously does a lot better for very narrow applications, but it might start to do it for more universal applications at some point in our lifetime.

It's still going to take a lot more power at the current rate, but I thought that was thought provoking. Likewise, one of the books that came out maybe five or six years ago that was among the most useful on AI was Pedro Domingos's The Master Algorithm, which really was just a great tutorial about the different approaches. And while AI has moved more and more into deep learning and neural networks since the book was written, it really went through how, actually, no one knows for sure which system will win out, and a combination of the systems is what most companies use. I think a fluency in at least how it gets done, what the difference is between rule-based and machine learning, matters. Any executive who doesn't have that basic level of knowledge is at risk of negligence in doing their job. You asked about CTOs; one of the biggest frustrations of CTOs, and some of it is because CTOs don't communicate well, but most of the time it is because there is a missing basic education among their peers on the business side about some technologies that, as you said, could still have exponential impact in the future.

[00:39:12] Mahan Tavakoli: You don't necessarily have to become a technologist. However, as the CEO and the senior leadership team, you do need to have some of the basic understanding that enables the organization to take advantage of the technologies. And in order to do that, I love that you quoted David Rubenstein.

I had a conversation with him for my podcast as well. He's an obsessive reader, reads a hundred-plus books a year, and I know you read a lot as well. That has to be part of how we understand this future, so I really appreciate those recommendations. I really appreciate all of the writing and thinking you do as well, Yuval. So how can the audience follow some of your work?

[00:39:58] Yuval Atsmon: I probably should and could do more in terms of posting some of it, but my day job is still to work with clients on a variety of things. As much as I enjoy it, it actually forces me to learn more deeply. The reason I post, the reason I write articles, the reason I sometimes do faculty programs is because I learn much more deeply when I teach. I put a pretty high bar on what I try to post rather than focusing on frequency. But I do try to do it on LinkedIn pretty frequently, and sometimes through McKinsey articles.

So those are probably the main ways that I contribute outside of my client work.

[00:40:32] Mahan Tavakoli: Those are outstanding. We'll link to them in the show notes. I really appreciated the conversation with you, Yuval, on how strategy and strategic planning can use AI to augment our thinking, and I really appreciate the insights you have shared on how CEOs and leadership teams can do that.

Thank you so much for the conversation, Yuval.

[00:40:56] Yuval Atsmon: It's been a pleasure, and I think we will see a lot more happening that I don't even know about yet, which will start to give opportunities for all of us to enjoy the benefits of AI.