Ranked in the top 1% of all podcasts globally!
May 2, 2023

256 Generative AI, AI Agents, Responsible AI, Chatbots, and AI-Driven Disruptions, Tom Taulli and Mahan Tavakoli | Partnering Leadership AI Conversation


In this episode of Partnering Leadership, Mahan Tavakoli first shares some of his perspectives on the transformation organizations and leaders will face due to advancements in artificial intelligence. Then Mahan welcomes Tom Taulli, AI author, investor, and advisor, to discuss the letter asking for a pause on the development of generative AI, the potential disruptive impact of generative AI, and the potential of AI agents built on top of OpenAI's ChatGPT. They also discuss the need for serious conversations around AI ethics and the need for responsible AI. Next, Tom Taulli talks about the potential of generative AI such as ChatGPT for automation, customer service, and cybersecurity. He explains the concept of hallucination, where the AI gives false answers that seem convincing, and the importance of setting up prompts to get better responses. He also covers using generative AI for coding, image creation, and summarizing information. Mahan and Tom further discuss the implications of AI chat agents and voice cloning technology, including the potential for AI to deceive people and the need for regulation to protect privacy. Finally, Tom Taulli addresses some potential uses of AI in the workplace and what professionals need to do to stay ahead of the many changes resulting from AI applications in the workplace.



Some Highlights:

- Generative AI and its Potential Disruptions

- Implications of the letter asking for a six-month pause on the development of advanced generative AI

- AI agents such as AutoGPT, BabyAGI, and their potential uses 

- Applications of generative AI in marketing, customer service, image creation, and cybersecurity

- Rise of chat agents and potential social consequences 

- Implications of voice and video cloning technologies

- The necessity for human input in AI systems

- How and why professionals can develop new competencies as a result of advancements in artificial intelligence 



Connect with Tom Taulli:

Tom Taulli Website 

Tom Taulli on LinkedIn 



Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


Transcript

***DISCLAIMER: Please note that the following AI-generated transcript may not be 100% accurate and could contain misspellings or errors.***

Mahan Tavakoli: Welcome to Partnering Leadership. I'm so excited to have you along with me on this journey of learning and growth, and really appreciative of all of your support and comments. Keep your comments coming, mohamed mahan.com. There's also a microphone icon on PartneringLeadership.com; you can leave voice messages for me there.

[00:00:19] I appreciate the messages I've been getting on the various episodes, including the ones focusing on AI, because I truly believe AI's impact on our lives will be transformative, which is why I am focusing these first-Tuesday conversations specifically on the latest in AI, in addition to having conversations with leading thinkers on AI as I work with my clients on the impact of AI, both its strategic implications and tactical applications.

[00:00:54] I am seeing the potential transformations that we are in for, most specifically the impact that artificial intelligence will have in the coming years on knowledge work. As listeners to this podcast, I believe you are way ahead of the vast majority of professionals.

[00:01:16] Because you have a choice to make with your time, and I am sure, and I hope, you take time to binge-watch Netflix series or do whatever else you enjoy. However, with a growth mindset, you are also continuing to learn along with me, so we are aware of what's coming and can adjust accordingly. Unfortunately, a lot of people are clueless, both individual professionals and, in some respects, some of the executives that I'm interacting with,

[00:01:54] of the impact that artificial intelligence will have on their organizations, whether it's ChatGPT, GPT-4, or other advanced generative AI. AI can now perform knowledge work, including strategic and creative work, in ways we are still trying to understand. If anyone tells you that they fully understand where this is headed,

[00:02:20] I am not sure you should listen to them, since even Sam Altman, CEO of OpenAI, says he doesn't fully know where it's all headed, and he is continually surprised by new applications of generative AI. There are also other significant advancements on a daily basis. Remember that we are at the acceleration phase of this exponential technology.

[00:02:47] For example, people are testing agents such as AutoGPT and BabyAGI, which give AI agents the ability to complete complex tasks and actions with minimal human intervention. At this point, the agents are clunky, but they should give us a glimpse into what's ahead.

[00:03:14] Goldman Sachs recently released a report that says generative AI by itself could impact over 300 million jobs. They said it will cause significant disruption and that two-thirds of jobs in the U.S. could be automated, at least to some degree. And I quote: "Of those occupations which are exposed, most have a significant

[00:03:42] but partial share of their workload, 25 to 50 percent, that can be replaced." Now, I don't mean this to be alarmist. Actually, I am very hopeful, especially for leaders and professionals like you who take the time to educate themselves, inform themselves, and act accordingly. However, we do need to be mindful of the reality and have a greater sense of urgency at all levels of our businesses, in higher education, and in government to understand and prepare for what's coming.

[00:04:20] Jason Calacanis, an internet entrepreneur, angel and venture investor, and host of the All-In Podcast, tweeted a few weeks ago: "AI is going to nuke the bottom third of performers in jobs done on computers, even creative ones, in the next 24 months." He continued: "White collar salaries are going to plummet to the average of the global workforce.

[00:04:51] "Companies are going to get dramatically smaller and more profitable. Top performers are already leveraging these tools to increase the distance between themselves and low performers. It's getting polarizing. This isn't about phone operators, farmers, or dishwashers. This is about knowledge work. Your knowledge has been commoditized.

[00:05:17] "Your ability to be nimble and learn new skills is all that matters." Now, Calacanis can be a little melodramatic, but I think his main point is valid: we need to be nimble and learn new skills. In my conversations with CEOs and executives from organizations of various sizes and industries, many seem to have the same misunderstanding that the public at large does.

[00:05:48] Pew Research recently published a survey on Americans' views of AI and its potential impact on the workforce. 62% believe that artificial intelligence will have a significant impact on job holders overall. On the other hand, just 28% believe the use of AI will significantly impact them personally,

[00:06:16] with roughly half believing that there will be no impact on them, or that the impact will be minor. Many CEOs and executives have the same misunderstanding: they believe that AI will disrupt and transform many organizations and industries, just not theirs.

[00:06:37] I believe that AI will transform the vast majority of our roles in the coming years and will significantly transform many industries as well. That's why I will focus on bringing you more conversations, both with AI and leadership thought leaders,

[00:06:56] so you, my podcast listeners, are well informed and can guide your teams and organizations to navigate the disruptions ahead by learning, unlearning, and relearning with a growth mindset. And that's what brings me back to my genuine confidence in you, in our future, and in the future of our teams and organizations, because we are choosing to spend our time focused on conversations with leading thinkers so we can learn and act

[00:07:41] to impact our future. As I mentioned, for this first-Tuesday conversation I have invited back Tom Taulli so he can give us an update, including on the letter that was signed asking for a pause on further development of powerful generative AI; potential uses and limitations of generative AI; the rise of chat agents and AI companions; the potential consequences of voice and video cloning technology; the importance of human input in AI systems; the need for professionals to develop more competencies; and much, much more.

[00:08:23] So it's a rich conversation with Tom. As always, I really enjoyed it. I'm sure you will as well, and will learn a lot from it. Now here is my conversation with Tom Taulli.





[00:00:00] Mahan Tavakoli: Tom Taulli, welcome back to Partnering Leadership. Looking forward to another conversation with you on artificial intelligence and what's been happening.

[00:00:08] Tom Taulli: Great. Looking forward to it.

[00:00:10] Mahan Tavakoli: Tom, I wanna start out first with a letter that was published by what was initially a thousand-plus people in the AI field; a lot more have signed it since then. It asks for a pause on the development of generative AI beyond the GPT-4 level. Would love to know your thoughts.

[00:00:35] Tom Taulli: You're from the Washington, DC area, politically connected. There's this thing called "dead on arrival." Definitely dead on arrival. And Elon Musk, of course, was one of the signatories, and blasted it out on his Twitter. The irony is that right after, he's been in the process of building his own generative AI company, called TruthGPT or whatever it is.

So some of these folks speak out of both sides of their mouths, kinda like politicians do, and I'm not too sure where they stand. Gotta take this with a huge grain of salt. Some of those who signed on are academics; they're truly concerned. For the business people, the concern might be more competitive: they're seeing OpenAI just blaze a trail, and if you could find a way to stop them, put a pause in place, then "we can catch up to what you're doing and our business doesn't suffer as much." I think some of that's going on as well.

So I think the letter was good at generating a lot of buzz.

And I think the other thing, too, is that even after that, Sam Altman said the era of these large language models is over; we've hit the limits here, because once you've scoured most of the information publicly available on the internet, where else do you go? So it was interesting. It did raise some eyebrows, but it quickly faded, and I can see why.

[00:02:01] Mahan Tavakoli: From my perspective, I believe we have to have serious conversations around artificial intelligence, not just generative AI, which a lot of people are familiar with, but that's not the only way AI is used and is going to be transformative.

So we need to have a broader conversation on the societal impact of AI in all aspects of how we work and live.

[00:02:30] Tom Taulli: Yeah. It is out in the wild; you can't bring it back in. I also think this is intellectual work, so what are you gonna tell people?

Stop thinking? These researchers are always thinking about creating better models. That's their life. So you're gonna tell them, oh, for the next six months you can't think about this? Don't go to your Python notebooks, do your coding, and run your inference models? It's ridiculous.

We're gonna continue to think, we're gonna continue to innovate and be creative, and you can't stop that. It's like the Catholic church with Galileo: put him in a room somewhere, and hopefully it all just blows over. And of course it didn't; we had the Renaissance and this explosion of knowledge and creativity, and you just can't keep that in the bottle. I say let it bloom, let it flourish. At the same time, be very mindful and careful of what this technology can do and how it's applied.

[00:03:24] Mahan Tavakoli: You mentioned the creativity that's resulting from generative AI, specifically ChatGPT. One of the things that has been happening over the past few weeks is these AI agents that are showing some capability; they are clunky, and they have issues. Would love to get your understanding of AI agents that are built on top of OpenAI's ChatGPT, how they can be used, and what their potential is for the future.

[00:03:57] Tom Taulli: I think one of the first ones was AutoGPT, autonomous agents. Now, keep in mind this is something where you'd have to understand Python, you have to go to GitHub and download it, and you have to sign up for the OpenAI API. So this is not for the faint of heart.

This is supposed to save a lot of time and make your life easier if you happen to be a very technical person. Now, the idea is that writing all these prompts, refining these prompts, and trying to get to the right result can be time consuming. So what these agents are doing is trying to automate some of this for you.

You really start with a goal, and then it'll self-prompt on your behalf and figure out the best way to get to that end point. It could be something like: I want to buy a smartphone at this price, that has these types of features, that doesn't do this. And then it'll go through all the steps to figure out how to do that.

Maybe it'll even go and buy you the phone if you wanted it to. So it's taking the next iteration of this technology and finding ways to automate it. And it's blown up; if you go on Twitter, it's everywhere. And then I think the Godmode one is actually on a website, so you don't have to know Python and those types of things.

It's right off your browser. But apparently it's not perfect; just as ChatGPT makes mistakes, these things make mistakes too, and they could do unexpected things on your behalf that you did not anticipate. It's really for someone who's on the cutting edge and experimental and has a high tolerance for risk; definitely a very early-adopter area right now.
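The self-prompting loop Tom describes can be sketched in a few lines of Python. Everything here is illustrative: `plan_next_step` is a stand-in for the LLM call a real agent like AutoGPT would make, and the step list is hardcoded so the loop is easy to follow.

```python
# Sketch of the self-prompting agent loop: start from a goal, let the "model"
# propose the next step, execute it, and feed the result back into the history.
# plan_next_step is a hypothetical stub for a real LLM call.

def plan_next_step(goal, history):
    """Stand-in for an LLM call that proposes the next action toward the goal."""
    steps = [
        "search for smartphones under the target price",
        "filter results by the required features",
        "compare the top candidates",
        "report the best match",
    ]
    return steps[len(history)] if len(history) < len(steps) else None

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step is None:          # the "model" decided the goal is reached
            break
        result = f"done: {step}"  # a real agent would execute the step here
        history.append((step, result))
    return history

trace = run_agent("buy a smartphone under $400 with a good camera")
```

The point of the sketch is the control flow, not the stubbed planner: the loop, not the user, decides what to do next, which is exactly why these agents can also wander off in directions you didn't anticipate.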

[00:05:46] Mahan Tavakoli: Greg Brockman, a co-founder and the president of OpenAI, had a great TED Talk about how these third-party services and apps can build on top of GPT-4 and OpenAI. I could visualize a future where you have an intern, or someone who carries out a series of tasks for you, without needing to give specific commands. A couple of examples I saw, just so the audience can visualize it: someone had built a job-finder GPT, linking to their LinkedIn profile and to the types of jobs they would be interested in, and asking the job-finder GPT to apply for all those types of jobs on LinkedIn.

So it was an agent acting on behalf of the person. In this instance, as you mentioned, even if there is an error, it's not a costly error; the agent would be applying for a job that you might not really be interested in. That is one example I saw. Another is a paper-reader GPT, where the agent had been asked to find the top papers on AI and summarize them and their abstracts.

So then the person would be presented with: these are the summaries of the top papers on AI. It's a question that then initiates a series of actions, with a result back to the person.

[00:07:25] Tom Taulli: Yeah. And the second one you can do with prompts and ChatGPT. But the thing is, automation traditionally has been something very structured.

There's something called robotic process automation. In Excel, as an example, you can create a macro for a process or some type of task that you do quite a bit; you just press that button, and it automates that process. And it does it correctly every time.

But with generative AI, it may not do it the way you want it to be done; it can go off the rails. So it's an interesting form of automation, because it's a more free-form, creative way of automation, as opposed to what we've been accustomed to: a very structured, step-by-step approach to automation to get a job done.

[00:08:20] Mahan Tavakoli: Tom, I know you are also working on a course on generative AI, and I can't wait for it this fall. So my question to you is: what would be the best uses of generative AI, and what could potentially be the best uses of future versions of AI agents?

[00:08:44] Tom Taulli: Yeah, so you mentioned the errors. There's a concept called hallucination, where it comes up with false answers, misleading answers, that seem very convincing, seem very true. But it's funny: let's say you ask it a question and it comes up with something, and clearly there are some errors in the content for whatever reason. You could then tell ChatGPT, go find the errors.

And a lot of times it'll figure out what the errors are. You could actually use ChatGPT as a way to check the problems that it initially created. And part of that is due to the structure of the model. The model is really about probabilities of sequences of words.
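The self-check trick Tom mentions is essentially a two-pass prompt. Here is a minimal sketch, assuming the role-based message format used by chat-style APIs; only the payloads are built, the wording is illustrative, and the actual model call is omitted.

```python
# Two-pass "self-check" pattern: first ask a question, then feed the model's
# own answer back and ask it to find the errors. Only the message payloads are
# built here; the actual API call and model choice depend on your setup.

def first_pass(question):
    """Messages for the initial question."""
    return [{"role": "user", "content": question}]

def error_check_pass(question, draft_answer):
    """Messages for the follow-up that asks the model to audit its own answer."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft_answer},
        {"role": "user",
         "content": "Find and list any factual errors in your answer above."},
    ]

msgs = error_check_pass("Who founded the company?", "It was founded in 2010 by ...")
```

Replaying the draft answer in the `assistant` slot is what lets the second pass critique it in context, rather than answering the original question again.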

It's not about accuracy; it's about how content should relate to this type of prompt. If you don't tell it to be accurate, then it might be a little bit more creative. You could also go to the OpenAI Playground, where there's something called temperature, and you could set a lower temperature.

When it has a lower temperature, it's less creative, and then you might have more accurate answers as well. So there are a lot of ways you can play with this. The other thing, too, in creating these prompts is setting things up so ChatGPT knows what you're getting at.

Some of the best use cases are marketing and copywriting. Someone may just go: write me a tweet that provides a discount for someone who enters a sweepstakes, and it'll come up with some cool things.

But you may wanna preface that by saying: you are the world's best copywriter; you write content that gets people excited and makes them want to buy your product; now write a tweet that does X, Y, Z. So you're nudging it in certain directions to get better responses.
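The "setup" idea maps naturally onto a system message, and the Playground's temperature knob onto a request parameter. A hedged sketch: the payload shape follows the OpenAI chat format as of 2023, but the model name and exact fields may differ in your client version, and nothing is actually sent here.

```python
# Build a chat request with an optional "setup" persona (system message) and a
# temperature: lower -> less creative and often more accurate, higher -> more
# creative. Sending the request would need the OpenAI client and an API key.

def build_request(task, persona=None, temperature=0.2):
    messages = []
    if persona:  # the "you are the world's best copywriter" framing
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": task})
    return {"model": "gpt-4", "messages": messages, "temperature": temperature}

req = build_request(
    "Write a tweet offering a discount for entering our sweepstakes.",
    persona="You are the world's best copywriter; your copy gets people excited.",
    temperature=0.7,  # a bit higher, since this is creative copy
)
```

The same builder covers both of Tom's levers: drop the persona and lower the temperature for factual lookups, keep it and raise the temperature for copywriting.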

So this whole setup concept is important to get better content or responses. In terms of other use cases, I think customer service; we're gonna see a lot with that.

We're just in the early stages there, but there was a survey recently where a company did an experiment with customer service. They had a control group: one chatbot labeled "powered by AI" and one labeled "powered by ChatGPT," and people thought the one powered by ChatGPT was more trustworthy than the one powered by AI.

It's interesting. The ChatGPT brand in and of itself has become the standard, where people think, oh, that must be the way to go, even though it does give a lot of wrong answers. I don't think a lot of people realize that's the case. The other thing that's been very transformative with generative AI is image creation.

I don't know Photoshop. I go to Canva, and I can drag and drop some shapes and change them, but it doesn't look that great. With Midjourney or DALL-E or whatever program you want, you can create some compelling images with just a couple lines of text explaining what you want, and it's out of this world. It would've taken me years of education and knowledge to get to the point of creating something like that.

These tools just create it. And I think the other thing we've seen is using generative AI for coding, for code generation. That's a very powerful tool, and it's getting better and better. And then Microsoft has released a copilot for cybersecurity.

Companies have all these tools for cybersecurity. The big problem is just managing those tools and all the log information that comes in; there's just not enough time to go through it in real time and respond to it.

So these types of copilots can distill that information, summarize it, and figure out: this is the threat to focus on; these other things are really not that important. So I do think there are so many different areas this technology can be applied to, and is being applied to, and we're just getting started with it.
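The triage idea can be illustrated with a toy. A real security copilot would use an LLM to judge the log lines; here a keyword severity score stands in for that judgment, just to show the shape of "distill, rank, surface the threat." All names and weights are made up.

```python
# Toy triage of security log lines: score each line by severity keywords,
# rank, and surface only the entries worth a human's attention. The keyword
# table is a hypothetical stand-in for an LLM's judgment.

SEVERITY = {"failed login": 2, "privilege escalation": 5, "port scan": 3}

def triage(log_lines, top_n=2):
    def score(line):
        return sum(w for kw, w in SEVERITY.items() if kw in line.lower())
    ranked = sorted(log_lines, key=score, reverse=True)
    return [line for line in ranked if score(line) > 0][:top_n]

logs = [
    "10:01 user alice failed login",
    "10:02 heartbeat ok",
    "10:03 privilege escalation attempt on host db-1",
    "10:04 port scan detected from 10.0.0.9",
]
top = triage(logs)
```

The benign heartbeat scores zero and drops out, which is the whole value proposition: the analyst sees two lines instead of the full stream.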

[00:13:00] Mahan Tavakoli: I've seen some initial applications pulling on internal organizational data, whether for frequently asked questions or marketing and customer service, that are very well done. As for why people prefer ChatGPT to the typical AI agent: part of it is the credibility now associated with the ChatGPT name, and part of it, I imagine, is that most people like me, having interacted with a lot of chatbots pre-ChatGPT, were frustrated, because the chatbot never got what they were saying or gave exactly the right answer. Now this new generative AI has become a lot more powerful, especially the versions that pull on the database or information that the company has internally. I imagine there aren't cases of hallucination when they're pulling on specific information that they have.

[00:13:58] Tom Taulli: That's correct.

And by the way, on customer service, with the traditional IVR systems, you know how many people press zero to get a human because they're so fed up? They wanna get through that thing because it doesn't understand what they want.

Yet I was at the Atlassian conference last week in Vegas. Atlassian is a pretty big cloud company; they originally developed tools for developers, but they've evolved over time into customer service, primarily for internal IT. And they've teamed up with OpenAI to use generative AI

for those types of tools: to summarize information, to answer questions, to basically be a virtual assistant. So if I need to reset my password, or I need to do a requisition for a printer or a mouse, it'll handle it. They've just started using it in the last couple of months, but what they found is that it has resolved more than half of all the requests internally.

Because what usually happens is, you can create these chatbots, but you need a data scientist to take the data you have internally and run models against it. With GPT, you just basically give it the information, and it figures it out on your behalf.

So there's very little of the typical onboarding you'd need to start using this technology. And because you're restricting it to this corpus of knowledge, you're correct: you're saying, don't go off the rails and start talking about the universe or cosmology or whatever ChatGPT likes to do.

You just tell it: this is your knowledge, this is what you're working with, and it'll figure it out. The hallucinations are still there to some extent, but a lot less of a problem. And who reads the benefits documents? No one. But people have questions: okay, when is the enrollment period?

Or can I get reimbursement for my plan, whatever. It'll figure it out. But you don't have to do all the training and all the hard work of processing that information; it does it for you. So that's transformative. That's gonna have a huge impact on companies that start using this technology.

And again, as Atlassian has shown, they've already seen major benefits from using this.
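The "this is your knowledge" restriction Tom and Mahan discuss is commonly implemented by retrieving relevant passages from the internal corpus and putting them, with instructions, into the prompt. A toy sketch follows, with naive word-overlap retrieval standing in for the embedding search real systems use; the corpus entries are invented examples, and the LLM call itself is omitted.

```python
# Sketch of "this is your knowledge": pick the corpus passages that best match
# the question, then build a prompt that tells the model to answer ONLY from
# them. Retrieval here is naive word overlap; real systems use embeddings.

CORPUS = [
    "Open enrollment for benefits runs November 1 through November 30.",
    "Password resets are handled by the IT service desk portal.",
    "Printer requisitions require manager approval in the procurement tool.",
]

def retrieve(question, corpus, top_n=1):
    q_words = set(question.lower().split())
    overlap = lambda p: len(q_words & set(p.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:top_n]

def build_prompt(question, corpus):
    passages = "\n".join(retrieve(question, corpus))
    return (
        "Answer only from the passages below. If the answer is not there, say so.\n\n"
        f"{passages}\n\nQuestion: {question}"
    )

prompt = build_prompt("When is the benefits enrollment period", CORPUS)
```

Because only matching passages reach the model and the instructions forbid going beyond them, hallucination is reduced in exactly the way the conversation describes, though, as Tom notes, not eliminated.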

[00:16:25] Mahan Tavakoli: That is outstanding to hear, and to see the potential, both internally for the employees to be able to access their internal version of a chatbot and have conversations with it, and then externally in customer service.

Now, there are a couple of these chat agents that I ran into; would love to get your thoughts. One cracked me up: Replika has been growing really fast, and going to their website, it says, always here to listen and talk, always on your side,

an AI companion who is eager to learn and would love to see the world through your eyes; Replika is always ready to chat when you need an empathetic friend. So it is pretty interesting to me. And if you go to the Replika site, there are people that have been in relationships with their created Replika partner,

some for a couple of years now. So would love to get your thoughts with respect to these chat agents that are making their way into our social lives.

[00:17:42] Tom Taulli: Yeah, maybe they're just really good friends; they're programmed to do that.

You prompted for that, you set it up, so they're gonna be your friend. The one thing is, this is nothing new; we've talked about it in earlier shows. There's something called ELIZA that was created in the mid-sixties. It was basically the same thing, with a lot cruder technology.

It was used as a virtual therapist, and all it did is what therapists do; they like to ask questions. It basically just kept asking questions, and some people thought it was real. They were more willing to talk to it than to their own therapist.

Just like if we lost our phone, we would feel as if a part of our body had been amputated. It may sound silly to a lot of people, but we do attach ourselves to, and ascribe certain human aspects to, machines and objects.

We've actually done this for centuries, so I'm not surprised that something like this has taken off. As to the social consequences, I can't imagine; I have no clue. But there's another site called Character.AI that's similar.

Let's say you're a fan of Winston Churchill; you can create your friend Winston and start talking about World War II or whatever you wanna talk about. Or you can create one for Elon Musk. Of course, there's a ton of Elon Musks out there that you can play around with.

So clearly there are people that are into this, and we're talking millions of people, not a small minority. But here's something interesting: Snap, which has the Snapchat app, is one of the first to implement GPT-4. They call it My AI.

Although you can change the AI to whatever name you want, and if you open the app, it's at the top now. Since it launched, there's been a surge in one-star reviews for Snap. But it's a little bit more nuanced, because there are also people who actually gave the app high scores but were tempted to give it a low score because of the AI.

And then there are some people that really love the AI.

The other thing is: what is this thing doing with the information I'm giving it? And it also sometimes says really creepy things. It's an experiment; Snap is doing things on the cutting edge. But it may not work for a lot of people, and they may take it the wrong way and prefer to actually talk to and communicate with other people instead of having an AI friend.

[00:20:14] Mahan Tavakoli: And I wonder, Tom, how much of it is also the fact that we are now being introduced to machinery, foreign objects, whatever you want to call them, that we are not used to interacting with. It's almost like the beginning of cars on the roads, or people first seeing airplanes fly, and the fear associated with them.

So my 16-year-old daughter has Snapchat, and she was showing my wife and me her interactions with the chatbot, to which my wife immediately said, oh my God, this is creepy. She would've been one of those to give it one star, because the chatbot was being somewhat flirtatious, like a friend who might be a little attracted to you.

Not saying anything overtly inappropriate, but being flirtatious. When my daughter said, I went to the park today, it asked about her experience at the park: what did you do? And she said, I went on the swing, and it's all, that's great. So part of it was odd, because this is an AI chatbot interacting with you this way.

And I imagine for many of us it is a foreign experience, because we have never had these things in our lives. I wonder where we will be a few years from now, when in many of our business and other interactions we are interacting with chatbots and can't tell whether it's a real person or a chatbot.

[00:21:46] Tom Taulli: The one good thing about the Snap situation is that they do say it's AI. I think it's problematic if a company uses AI and doesn't tell you; it should be disclosed upfront. But yeah, I think it's going to get to the point where people will just get comfortable with it. In the social context, though, with a 16-year-old girl, there've gotta be some guardrails or something like that.

Snap could get in some hot water if this thing starts saying certain things; they gotta be very careful with it. It's one thing if it's some e-commerce retailer dealing with a purchase, as opposed to this being your friend; you're in some territory that could be an issue.

[00:22:32] Mahan Tavakoli: Now, something else that builds off of these AI agents and chat agents is voice, and video is not far behind voice. Voice cloning has become really good. There is a conversation of Joe Rogan with Sam Altman where neither one of them was part of the conversation, but I highly doubt anyone can tell the difference.

And it doesn't need hours-long sampling of someone's voice; with a few minutes of sampling, the AI agents can duplicate the person. So Tom Taulli and Mahan can be having a conversation that people hear, even though we never said those things, and video is not far behind.

Would love to know your thoughts on that and how it's going to have an impact, both on a societal level and for organizations, where when the CEO calls or says something, you can't rely on it being your CEO anymore. And pretty soon, when the CEO of an organization is on video saying something, no, you can't trust that it was the CEO of the organization in that video.

[00:23:52] Tom Taulli: Yeah, I think on the video side, that's getting closer than we think; Nvidia just recently introduced some technology on the video side, so it's just a matter of time. The Wall Street Journal had a really good article about the voice-creation systems and how they're being used, for audiobooks as an example.

But they do indicate that it is an AI voice. And I think they had one narrator who was narrating books but had died 10 years ago; so you can revive people who are no longer with us and bring 'em back to life. And then there was this one computer scientist who said that you should save your loved ones' voices and videos.

So when they do die, you can replicate them and communicate with them for the rest of your life, which I thought was really creepy. But all that technology is going to be here fairly soon. And you raised the question of whether you can trust who you're talking to; there will be a time when it'll probably be impossible to figure that out.

[00:24:54] Mahan Tavakoli: And that's something to be mindful of. I've seen some deepfake videos; that capability is not far behind. So within the coming year we are going to see that, which unfortunately means seeing is no longer believing. Going into the 2024 election, it's quite likely we will see deepfake videos of the candidates saying things and doing things that they never said and never did, which makes it really hard.

[00:25:23] Tom Taulli: It's gonna be a challenge for the newsrooms. They're gonna get a clip and wonder: do we go with it or not? Is it true or not?

So this is probably where some of the human journalistic skills come into play, where you call your sources and try to figure out where it really came from. But you can't stop it from getting on Twitter or some of these other platforms. Although I would say, at a national level, for the presidential election, we're so polarized anyway that it probably won't make too much of a difference.

Where I do think it can make a difference is in some of these smaller elections, local elections, where some of that content could change the tide one way or the other. I'm not sure that would be the case at a Senate level or presidential level, because I think people are set

in how they view things at that level. But yeah, it's going to cause a lot of issues and problems, and it's inevitable. I know something will happen. I don't know what it is, but something will happen, which is

[00:26:21] Mahan Tavakoli: why it's important for leaders to be aware of where the technology is headed and what the potential is, in order to safeguard against potential abuses, part of what you mentioned, whether cybersecurity or otherwise. It's important to understand these things. Now, along those same lines, I'd love to touch on a professor from Duke University, Nita Farahany, the founding director of Duke's Initiative for Science and Society.

She has written a book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. She talks about the fact that right now many organizations, from train drivers to miners, have started monitoring and tracking workers' brain activity in order to tell whether they're sleepy and to tell them they need to take a break.

However, that tracking of brain activity is soon going to expand to a lot of other workforces. I would love to get your thoughts on the marrying of artificial intelligence with some of that brain activity data, where she says that if we don't start regulating this soon, within the next couple of years our employers and others will be able to read our minds.

[00:27:46] Tom Taulli: The brain is one of those areas of the human body that is still very much a black box. There's a lot we don't know. So in terms of truly reading minds and things of that sort, for a long time we've had tests, lie detector tests, and those are not too great.

I can see it to some extent where there's life and death involved, like with a miner or things like that. When people's lives are at stake it's a lot different, and people may understand that. But I think for a lot of other jobs, it would just be considered intrusive.

And the other question is: what are you doing with my data, my brain data? I think a lot of employers are gonna be hesitant unless there's a really good reason to go down that path.

[00:28:32] Mahan Tavakoli: So our thoughts are safe for now.

[00:28:36] Tom Taulli: That's right. 

[00:28:36] Mahan Tavakoli: Which is good to know. I agree with you: the need for that humanity in the workplace, and the need for people to want to work with each other and collaborate, make it less likely for most organizations to look to technologies like this. Now, I read a great quote that right now, because of generative AI and other tools, the answers are free.

It's the questions that matter most. So I would love to know, as you see more and more answers, knowledge, and data becoming accessible through generative AI and other AI tools: what are the competencies, skill sets, and capabilities that professionals need to develop in themselves, and leaders need to have in their teams, to best tap into the potential of AI?

[00:29:38] Tom Taulli: It's the old saying: garbage in, garbage out. These AI systems are not at the point where they just run on their own and figure things out without you having to tell 'em what to do.

That's probably a good thing. So the humans are not completely outta the picture yet. Going back to our customer service example, in the Atlassian scenario, more than half of the requests were handled in an automated fashion, and it sounds like they were handled fairly competently, which probably means some people are not going to have a job.

What can you do to not be in that situation? One is that there are still areas, and this will probably be true for a long time, say in customer service, where you need people who really know what they're doing. They know the product inside and out, they know the customer use cases, they know how to deal with people, and they're just top at what they do.

And those jobs will probably be safe and very important. And then there'll probably be new categories and roles that emerge. Maybe it's an AI customer service manager: several of these people monitor the systems, make sure they're updated properly, make sure they're not saying bad things and harming the brand, and make sure

information is routed to the right places, things like that. So I think some of these roles will probably start to evolve. But going back to what we were talking about before, about where some of this technology can go too far, setting aside the brain monitoring and transparency, one thing that has been happening in customer service that I think is causing a problem is that AI is being used to evaluate how employees operate.

You may think you're doing a good job, but it may not be a human who makes the ultimate decision about whether you're doing a good job or not. It could be the AI saying that you're a good employee or a bad employee, that you should stay or you should go. And there are some people who understand that and are able to game the system.

So you could say something like "screw you" to a customer, but if you do it in a delightful way, the AI may say, oh, that's a great employee, because the personality is so good and the person's energetic and things like that, without understanding that this person just said something pretty nasty to a customer.

That's an exaggerated example, but there are some people who are not very engaging, who just by personality are not as perky or whatever you want to call it, and the AI might be programmed in a way that values that more than other personality characteristics. So you could have the skills,

you could be smart at what you do, and still get let go from your job because the AI is looking for other skill sets.

So I think that's a scary thing, because you would think, oh, I'm doing the right thing, I'm bettering my skills, I'm enhancing them. And that may still not be enough. And it could be because of that AI monitoring what you're doing on the job.

This also stresses people out, because if you feel like the AI is constantly watching what you do, that makes you nervous; you get an ulcer. These customer service jobs already have high turnover rates, and apparently it sounds like it's getting worse.

And it sounds like part of the reason is all this monitoring being done in the workplace. So yeah, I think there's a lot of rough waters to navigate here. And it will definitely get tougher and tougher, I think, for people to figure out what they need to do to be effective in their roles.

[00:33:39] Mahan Tavakoli: What an outstanding example, Tom, which in my mind requires greater transparency with respect to a lot of these systems. One of the parallels I would draw: as you were talking about this, I was thinking about the fact that, for example, on LinkedIn, every once in a while you post and I post.

You could think that I can write the best content and post it on LinkedIn, but it's not the best content that ends up reaching even a portion of the people you're connected to. It's LinkedIn's algorithm, which we don't exactly know what it prioritizes, that chooses who ends up seeing it.

And the people who figure out how to game the algorithm might write something that is not worth much but ends up in front of many more eyes, and so on and so forth. Part of what I hear from you is that as these algorithms are being employed in organizations, they're not necessarily rewarding the right actions or activities.

It depends on what the algorithm prioritizes, which in my view is why it's critical to have much greater transparency with respect to how the algorithms work.

[00:35:02] Tom Taulli: I agree. Yeah, I think that should be made clear up front with employees. If you're using AI as a key part of evaluating performance, then that employee should ask: okay, then how is this programmed? What does it consider good and what does it consider bad?

And I think employers themselves should ask: is this AI really helping your business? It may not be. Humans usually don't wanna be controlled by machines.

[00:35:30] Mahan Tavakoli: That's another reason why, Tom, I really appreciate these conversations, and I think leaders need to understand the field a lot better, even when and if they are not at all technical. I've had conversations with a couple of CHROs and CEOs whose organizations use applicant tracking systems.

And in most instances, they can confidently tell me that they know their ATSs don't bias against any type of candidate, and their confidence comes from the fact that the vendors have told them so.

[00:36:06] Tom Taulli: That's right.

[00:36:08] Mahan Tavakoli: So they have no idea what the algorithm is prioritizing. All they know is they have bought a system that they are confident doesn't bias certain candidates versus others.

So as we have more of these algorithms in our organizations, we need both the transparency and the ability to ask the right questions, to understand how that decision making is proceeding.

[00:36:33] Tom Taulli: I totally agree. The black box is, I think, one of the biggest issues with AI, and it's even more of a problem with these generative AI systems, because they're so large and complicated that it's almost impossible to figure out how they're coming up with these answers.

[00:36:49] Mahan Tavakoli: Tom, I really appreciate the regular conversations where we get a chance to chat about the latest in AI. I'm really excited about a course that will be coming out, but before then, you've got your outstanding book, Artificial Intelligence Basics.

Where else can the audience keep up with your latest writing?

[00:37:09] Tom Taulli: You mentioned LinkedIn. So if you put in T-A-U-L-L-I, I'm one of the few people with that name, Tom Taulli, so it'll probably be the first result up there. And then on Twitter, I do post at T-A-U-L-L-I. And I do have a book coming out in the summer on generative AI, so it'll be a busy, busy summer for generative AI.

And as we talked about last time, there's always something going on, and it's still going. The momentum's still there, so I don't see it stopping anytime soon.

[00:37:39] Mahan Tavakoli: I am excited to continue learning from you, Tom, and sharing your insights with the Partnering Leadership community. Thank you so much for the conversation, Tom Taulli.

[00:37:50] Tom Taulli: Thank you. Thanks very much.