Ranked in the top 1% of all podcasts globally!
Feb. 23, 2023

240 AI and The Future of Work with Dan Turchin, CEO PeopleReign and Host of AI and The Future of Work Podcast | Partnering Leadership Global Thought Leader

In this episode of Partnering Leadership, Mahan Tavakoli speaks with Dan Turchin. Dan Turchin is the CEO of the AI company PeopleReign and host of the podcast AI and The Future of Work. First, Dan shared his thoughts on the entrepreneurial ecosystem in Silicon Valley and the potential of hybrid or fully remote work in organizations. Next, Dan Turchin described a WorkNet future where organizations hire the best talent and create a fluid work structure. The conversation then turned to AI advancements and applications in organizations, including PeopleReign, the world's smartest virtual agent for IT and HR employee service. Dan Turchin also shared his thoughts on why humanity and empathy are essential in the age of AI. Finally, Dan shared potential applications of AI in the workplace and resources for organizational leaders to stay on top of the fast-moving developments in the field.



Some Highlights:

- The impact of remote work on the tech ecosystem and the future of work

- The necessity to rethink education and invest in human skills

- Dan Turchin on the impact of AI on the future of work

- How AI can enhance employee productivity

- The vision behind PeopleReign, the intelligent virtual agent for IT and HR employee services 

- Dan Turchin on the Turing Test and whether it's still relevant

- The potential of AI technologies to benefit humans

- The need to set guardrails for AI to ensure responsible AI practices

- Limitations of AI technology and the need for responsible use, including facial recognition

- Dan Turchin on the use of AI for augmenting human intelligence

- The power of data and the power of algorithms to make human life better

- What organizations and policymakers need to consider before blessing AI models and sending them off into the wild 


Mentioned in the episode:

Partnering Leadership conversation with Gary Bolles on The Future of Work

Partnering Leadership conversation with John Rossman on The Amazon Way

Partnering Leadership conversation with Ed Hess on How to Adapt to the Speed of Change

Partnering Leadership conversation with Tom Taulli on Artificial Intelligence Basics

Partnering Leadership episode on The Future is Now: OpenAI's ChatGPT and the Exponential Changes Ahead

Partnering Leadership episode on Three mindsets of Top CEOs, Thoughts on Team Alignment & Collaboration, and a Strateg

Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


Transcript

***DISCLAIMER: Please note that the following AI-generated transcript may not be 100% accurate and could contain misspellings or errors.***

[00:00:00] Mahan Tavakoli: Dan Turchin, welcome to Partnering Leadership. I'm thrilled to have you in this conversation with me. 

[00:00:06] Dan Turchin: Mahan, such a pleasure to be here. Thanks for having me.

[00:00:10] Mahan Tavakoli: Dan, soon after I saw the tsunami that I believe a lot of professionals and organizations will face as a result of AI,

I looked for resources, and one of the first ones I came across was your brilliant podcast, AI and the Future of Work. I have binge-listened to it, and I recommend all of my listeners do the same thing in understanding AI's impact on the future of work. So I can't wait to get some of your thoughts and perspectives on that, as well as on the fact that you are CEO of PeopleReign, which is an AI company. But I would love to know, Dan, whereabouts you grew up and how your upbringing impacted the kind of person you've become.

[00:00:58] Dan Turchin: Mahan, it's a pleasure to be here and introduce myself to your listeners. I've become a fan of your podcast as well. On AI and the Future of Work, as you mentioned, we've now published over 175 episodes, and, like this podcast for you, it's my passion project. And we were talking offline about what an honor it is for us to be able to host some amazing guests and share these important conversations with the community.

In terms of my background, I was born in New Jersey and grew up in San Diego. My dad was very much a working-class hero and was a pioneer in the field of recycling. He worked outdoors his whole life till his hands were dry and bleeding, picking up aluminum cans and scrap metal and hauling it around in a truck.

Oh, wow. And one formative day, he decided he had lived through one too many New Jersey winters, looked at the map, and said, I bet it's better to be a junk man in California. And that's how I grew up in San Diego. I moved to the Bay Area for school in the nineties and never looked back.

There's something special about the Bay Area. Nothing to do with the tech economy, more to do with the physical environment and how open-minded a culture it is. I love being in a place where I'm proud to say that my kids' schools look like the United Nations. Every language, every style of dress, every background, every value system is celebrated.

And so the combination of that open-mindedness and being surrounded by the mountains and the ocean, and an amazing city like San Francisco to the north of us, I think it's a special place to raise kids. And I'd say I'm very much a product of my environment, but I also chose the lifestyle and the culture of Silicon Valley because it reflects my own and my family's value system. That says a lot about who I am and what's led me here. I've now started seven companies, the most recent being PeopleReign. But a lot of my philosophy goes into everything we do at PeopleReign and everything that has gone into the other companies that I've started.

[00:03:04] Mahan Tavakoli: The philosophy that you talk about, Dan, is really important, and your values come across during the conversations that you have. And I appreciate you mentioning the ecosystem that the Bay Area has. It's interesting, because a lot of different communities have tried to duplicate that ecosystem, but ecosystems are not that easy to duplicate.

So I'm curious about your thoughts: with more companies doing more hybrid work and some deciding to go fully remote, do you think that ecosystem will be as important a part of the physical geography that companies are present in, or will its importance go down?

[00:03:49] Dan Turchin: So this is a common topic on my podcast. Recently I had two amazing guests who are both from Scandinavia: Otto Söderlund, CEO of a company called Speechly, doing amazing work in NLU and NLP, and a gentleman named Michael Osterrieder, who is using generative AI to synthetically generate stock images.

I asked them a similar question: what is it like to build a tech-first company, an AI-first, tech-first company, outside of a traditional tech hub? Their thinking and their philosophy around how they're building their organizations shaped my thinking as well. And it's led to something I say frequently.

It's almost become a cliche, but at PeopleReign we no longer hire the best zip code, we hire the best talent. And I firmly believe that the workplace of the future is this fluid thing, what I'll call a WorkNet instead of a workforce. And the WorkNet consists of people who are passionate about performing a specific job or a task. It's not necessarily a title on a business card, and it's not a seat in a cube in an office.

It's about doing your best work. And that can be done from anywhere, surrounding yourself with people who share the passion for doing their best work along with you. I think that the most productive organizations in the future will consist of everybody doing the best work of their lives, extracting all the friction out of traditional work, the stuff that you don't like.

Dealing with HR, being treated like a ticket, commuting: things that just don't necessarily need to be a part of the fabric of work. And I think if you extract all of that drudgery, then work no longer becomes a four-letter word. And we're working with people we love working with, who are passionate like we are.

To me, that's the way to improve productivity, and that's how we all become better humans.

[00:05:51] Mahan Tavakoli: It is interesting as we are transitioning to that, Dan. Peter Drucker was talking about this, and I remember even back in the nineties Tom Peters was mentioning how the future of work, in his view, would resemble more of what he saw in movies, where people came together on specific projects.

You have one best director for that specific project, and that director would pull together people who would be the right people for set design for that movie, and so on and so forth. They work on that project and then disperse. So if that's sort of the future of work as you also see it, what does that require with respect to the way companies think of themselves, and what does it require with respect to how professionals manage their own careers in that world?

[00:06:49] Dan Turchin: Very important topic. First, it requires us to reimagine the education system. I think both of us have daughters; I have two daughters, one a ninth grader and one in seventh. I believe yours are of similar ages. (Yes, tenth and seventh.) Exactly, there you go. So the traditional educational system does a disservice to emerging leaders because it teaches us that the label on the diploma hanging on the wall defines who we are.

And it doesn't define who we are, far from it. The skills that I want my girls to be investing in are the skills that are required to be leaders in the future workplace. And that's no longer necessarily aligned with the skills that you get out of a textbook. I believe that to be the best human and to be a leader in the next decade-plus means investing in skills that can't be replicated by machines.

I believe that we will coexist with machines, but the best of humans combined with the best of machines is the future that I wanna create and invent and invest in. And that requires graduating students and encouraging them to invest in skills like empathy and rational thinking, being good humans who are able to teach, lead, and coach.

These are things that bots like ChatGPT will never be great at. As you demonstrated with your great episode about ChatGPT, it can credibly (I think this was your daughter's idea) write a play about volleyball in the style of Shakespeare. And it's great at that.

But you know what, it was your daughter who had the idea to do that. And what do you do with that content once ChatGPT produces it? How do you take the output of these text prompts? Maybe we will use some automation to come up with cute parlor tricks, but it's always gonna be up to humans to really figure out how to make each other better, and how and when and where to leverage technology to invest in our humanness.

[00:08:59] Mahan Tavakoli: That's a beautiful way of putting it, Dan, and a positive, inspiring way of putting it as well, because a lot of the people that I've had conversations with, who have played around with ChatGPT or who are familiar with some other aspects of AI, are terrified. But what you are saying is that humanity matters. Gary Bolles talks about this in the future of work.

Ed Hess talks about it also: that humanity, that empathy, those are things we need to leverage. Unfortunately, as you mentioned, the school system is not best at getting people to do that. It's best at getting them to jump through hoops. And as you heard in that conversation, I get frustrated when my daughters are asked, so what do you want to do for a profession?

I understand the intention behind the question; the school system wants to generate conversation. But a lot of times they feel like they need to learn the specific skills now for that end, rather than the humanity that is so important in this age of AI. You also, after getting your degree in industrial engineering at Stanford, as you said, have started several companies.

Back in 2019, you started your podcast AI and the Future of Work, and then in February 2020, right before the pandemic, you started your current company, PeopleReign. How did you get involved in AI in the first place, and then what got you to start PeopleReign?

[00:10:42] Dan Turchin: Let me take you in the way-back machine. Back in the nineties, my passion was about how people interact with technology, and how civilizations get built up around big, bold ideas, and how technology can be used to accelerate progress.

Back when I was in undergrad, I was studying Luddism, the movement in the Industrial Revolution in early 19th-century Britain: the machine breakers. I was just intrigued by these waves of technology, where we tend to not wanna be on the wrong side of innovation.

Innovation usually wins, and yet innovation can be a threat. And we often, just as a species, fear the other. And oftentimes technology is the other. I started work at Disney as an industrial engineer, using my training, and was just smitten by the culture, and by what it means to delight so much of the world with a strong brand.

I was so proud every day. I pinched myself; I couldn't believe they paid me to do that work. And I got a fateful call from a friend of mine from undergrad who was starting a company, and he said, would you mind helping me out for a weekend? I want a review of this business plan that I'm writing.

I left Orlando, where I was at the time, and I never went back. The bug bit hard. And as much as I could have been a Disney lifer, I helped launch a company called Search Button, which was a very early, what we now call a SaaS company, but for hosted site search. And that led to the first company I started, called Aeroprise.

And what Aeroprise was doing was what we now call AI. But back in the day, in the nineties, we called it a self-learning personalization engine. And the problem that we set out to solve was a problem that I lived when I was at Disney, and that is: every time I, or any one of 350,000 Disney employees, had any kind of a technology request, it required a lot of effort and a lot of unnecessary delays, because the technician would have to print out a paper ticket describing my or someone else's problem and carry it over to someone's desk in a cube in an office.

They would almost always not get credit for having fixed the problem in time, because they'd have a service level agreement, an SLA. And so they would get dinged and feel like they were never doing a good job. And it was because they couldn't really get credit for the work they were doing in the field.

So we said, what if you could condense a big trouble-ticket application, a client-server system at the time, into essentially a small enough amount of content that it could fit on a pager? The pager could actually become a computing device, because there were no smartphones, there were no BlackBerrys. And so what we decided is, if you could figure out, based on the context of the work, what components of the trouble ticket needed to go on the pager, then maybe you could actually write a little lightweight app to the firmware on the pager, push out the trouble ticket information, and, just like magic, the tech would have a better life, they'd be more productive, and the users would have less downtime.

And that's really the way technology should work for people, as opposed to people having to work for technology. Fast forward: that idea carried me through the next several companies, this idea of making life better for employees in large organizations interacting with technology. But the itch I never got to scratch until a couple years ago was the ability to really use what we now call AI to do this at scale.

So along came neural nets, five to seven years ago, really popularized by Geoff Hinton and a number of other real pioneers in the space, and all of a sudden we could do things like what we now know ChatGPT can do. You can essentially think of PeopleReign, which, as you mentioned, we started in, interestingly, February of 2020, as being ChatGPT for the employee.

So what if, as an employee, you had essentially a digital concierge that follows you around and understands who you are and what you need, and even anticipates the kinds of questions you might have? And if you could proactively deliver this concierge-like service to the employee, all of a sudden you give them back probably an hour a week to be better employees.

But also better spouses, because it means in that hour a week, you're getting to your kid's soccer game or you're getting to the piano recital. And all of a sudden, with that hour a week that you get back, you're a better friend and a spouse and you can pursue a hobby. And so the vision behind PeopleReign (again, the seventh company; it's all been an evolution to PeopleReign).

But as our name implies, I firmly believe that technology is an enabler. It's a facilitator. It's very powerful. But even in a world where there's a ChatGPT, people reign. So that's very much a part of every line of code we write; every day we get up and think about how we can make people better by using technology.

[00:15:51] Mahan Tavakoli: That's a great perspective to have. And additionally, Dan, I wanted to underline the fact that a lot of times people view entrepreneurs like you as technologists, but you saw a business issue, a friction point. It's not a fascination with the technology that got you to start the company; it's that you saw a problem.

And that's what you were looking to address with those initial forms of AI. And now, fast forward, as the AI has improved, your ability to solve those problems for the companies has improved. So your intelligent virtual agent is specifically an agent for IT and HR employee service, which, by the way, I love.

The fact that you focus adds a lot of value, I'm sure, in the functionality of the agent. What are the types of companies that use this?

[00:16:54] Dan Turchin: PeopleReign is the best fit for large organizations that span geographies, where many different languages are spoken, where many different time zones are supported, and where, oftentimes, employees who aren't in or around headquarters feel like second-class citizens. So one of the key attributes of our virtual agent is it speaks 27 languages.

Obviously, by the nature of being a virtual agent, it spans 24 time zones. It's always available. It gets smarter with every question it's asked. And the way we got started is we trained a neural net on about a billion historical trouble tickets and HR cases. So it's what would happen if you took all of the accumulated knowledge in every case worker's and IT technician's head around the world, translated it into 27 languages, and exposed that as a service to every large organization. That's PeopleReign.

And so it's the best fit where there are complex processes and high volumes of calls, where these patterns are really hard to disseminate across large numbers of employees trying to support end users.

So the more complex, the higher the volume, the more time zones: those tend to be the organizations that are the best fit.

[00:18:16] Mahan Tavakoli: So, as you run this organization, Dan, you are also a student of AI and its impact on organizations and society. So I want to get some of your thoughts and perspectives.

First of all, there is a lot of conversation around artificial general intelligence versus artificial narrow intelligence. I wanna see if the Turing test is even relevant anymore. So for the audience: in the Turing test, you would have a conversation and wouldn't be able to detect whether it's a human or a chat bot.

 Is it relevant or not? 

[00:19:00] Dan Turchin: As you implied, the Turing test is a test to see whether or not a machine can trick a human into thinking that it's human. Now, I have an opinion about it, and it probably comes through in the way I describe it, but I don't think Alan Turing ever intended or ever really thought through the ethical implications of confusing a human, like what's required to pass the Turing test.

So when we talk about AGI, artificial general intelligence, we talk about a machine being able to replicate any task traditionally reserved for a human. The state of AI today is that we can get to ANI, artificial narrow intelligence, very effectively. And a lot of the best applications of AI are things like factory-floor automation, or robotic prosthetics for amputees, things that reduce the incidence of seizures in patients, accelerated tools for adaptive learning for kids in schools with learning impediments.

Auto-translating documents for people who are underprivileged in third-world countries, detecting mines in minefields: things that you never want a human to do. These are all amazing applications of artificial narrow intelligence, where you can train an AI model with very specific data to do a very specific task that would otherwise be done by a human.

We often talk about tasks that are dull, dirty, and dangerous being good candidates for ANI. I think we should stop thinking about AGI as some kind of milestone or the pinnacle of AI. I think we should think about it more in these terms: AI is just a tool, it's just a technology, and we should look for the ways we can make human life better.

And oftentimes that's gonna be good applications of artificial narrow intelligence, which our current technology is very appropriate to support. Even though, depending on who you talk to, most experts would say artificial general intelligence is 30 to 50 years off. You know what? That's good. I hope it never gets here, because in the meantime, that's 30 to 50 years that we can spend solving the real hard problems that AI, I think, is best suited to solve.

[00:21:14] Mahan Tavakoli: And the way I think about it, Dan, is that right now, and it's been a few years, AI can play chess better than a human can. If the need that you have is chess playing, and that AI does a great job of playing chess, there's no need for that specific AI to be able to do other things. So there's this desire sometimes in the AI conversations for people to want to see when AI will become more capable than a human being.

A human being can play chess and play checkers and walk and do all of these things, which is interesting. However, there are really powerful applications with artificial narrow intelligence, and we need to focus on that as a tool.

[00:22:05] Dan Turchin: I encourage my team and my guests and my listeners to always ask one important question when thinking about AI, and that is: what could go wrong? We tend to get so enamored with what can go right that we often don't think about the potential adverse impact on humans if automated decisions are incorrect. And there are some examples that I think are important for us to acknowledge and be aware of as we start to do more with generative AI technologies and other AI-related technologies.

There are things like facial recognition and the ability to institutionalize the racial bias embedded in the data that we use to train AI. What could go wrong? A lot could go wrong if we use AI to mass-produce or mass-recreate the biases that are inherent in society. That's not using technology to make the world better. And similarly, if we use AI to decide who gets incarcerated, or who gets a job, or who gets an interview, or who gets a loan from the bank: these are things where we have to realize that all of the training data has bias encoded.

I had a fascinating guest on the podcast a few weeks ago. Her name is Merve Hickok, and she started the site called AIethicist.org. And I'd encourage your listeners, and all my listeners who haven't heard it, to listen to Merve. We talked a lot about the AI Bill of Rights, the proposal from the White House Office of Science and Technology Policy.

But it's a really important discussion that frames what happens when we don't ask what could go wrong. The framework that I have for what I call responsible AI is that AI must be transparent, predictable, and configurable. Transparent in the sense that, if AI was used to make a decision that impacts your life, you should know about it.

And it's really important as we do more with ChatGPT: there needs to be a label slapped on anything that's generated by ChatGPT, saying that's where it came from. Predictable: the same inputs should generate the same outputs. Some people call that explainable; no black boxes. As we're using AI-based systems to make really critical decisions, it's important that the integrity of those algorithms is maintained.

Then, they should be configurable. If we decide that some information from our past that was incorrect was used to make a loan decision on our behalf, we should be able to correct that information and configure the algorithms, up-weight and down-weight, so the automated decision-making can be improved over time. As a global society, I hope that we embrace some kind of an ethical framework that has us all agree on a standard set of decisions and a standard way to think about what it means to practice AI responsibly.
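The "predictable" pillar Dan describes, same inputs yielding same outputs, is something that can be checked mechanically. A minimal sketch in Python, where `score_loan` is a hypothetical stand-in for any automated decision function, not a real credit model:

```python
# Minimal predictability check: identical inputs must yield identical outputs.
# `score_loan` is an illustrative deterministic scoring rule, not a real model.

def score_loan(applicant: dict) -> float:
    return 0.4 * applicant["income"] / 100_000 + 0.6 * applicant["credit_score"] / 850

def is_predictable(model, inputs, runs: int = 5) -> bool:
    """Return True if repeated calls on identical inputs agree exactly."""
    for x in inputs:
        first = model(x)
        if any(model(x) != first for _ in range(runs - 1)):
            return False
    return True

applicants = [
    {"income": 85_000, "credit_score": 710},
    {"income": 42_000, "credit_score": 655},
]
print(is_predictable(score_loan, applicants))  # True for a deterministic model
```

A real deployment would run a check like this against the served model, with logged production inputs, as part of its release tests.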

[00:25:03] Mahan Tavakoli: So Dan, first of all, I love that episode and I do encourage my listeners to listen to it.

But that brings up the question around potential public policies. With social media, there was so much positive, but there were also negative aspects of it. It evolved so fast that there have been very few restrictions placed on it. And it's embarrassing to reflect on the fact that even a year ago, when some of the representatives were talking to Zuckerberg, they had no clue, whether in conversations with Zuckerberg or with executives from Apple, about even the basic functioning of the technology; they didn't know how Facebook makes money. We've got incredible public servants, a lot of great people, and a lot of great representatives.

Many of them are clueless about technology. So how are we to have the kind of conversations that are important to make sure there are the controls that you talked about, including, as you mentioned, around transparency?

[00:26:21] Dan Turchin: This is such an important topic, and you're in the crucible of where all these conversations are starting, in D.C. I posed a similar question to Merve.

I said, somewhat cynically: doesn't relying on the federal government to regulate AI equate to giving big tech a 10-year free pass, essentially self-regulating? We're asking them to grade their own homework for a decade. (Yeah.) The six pillars of the proposed blueprint for an AI Bill of Rights sound great, and they make for good stump speeches.

It's not gonna get implemented in a way that will affect big tech: Apple and Microsoft and Amazon, all these companies that control your data. They are absolutely fiddling while Rome is burning. And I think we need a solution that's gonna be more effective. And I think it requires some kind of a private body where big tech has to send representatives, as well as ethical institutions and other organizations that have humanity's best interest at heart and not just a profit motive.

But I think there's a quicker path than relying on the AI Bill of Rights to work its way through Congress. Certainly, thinking of how far society has evolved since, oh, November, when ChatGPT was introduced, till now: just imagine the pace of innovation, and how many people could be harmed if we don't really get serious about reining in some of this innovation, which is great in a number of ways but also dangerous if we don't provide the right guardrails.

[00:27:58] Mahan Tavakoli: So, as those conversations are being had, Dan, here is what I am wondering about. Yuval Harari, who was very scared and skeptical of where all of this is going with AI, was mentioning something very interesting: first of all, AI is able to process so much data that no human can, and the decision is not based on a single data point.

So I will refer to something in chess. A few months back, a grandmaster in chess was beaten by a 19-year-old. And the reason everyone said something is wrong here is because they said no human would have ever made this move; the move was beyond what any human could have made. They looked into it, and there were allegations of the use of AI or outside assistance in winning that game. And what Harari was saying is that AI is processing thousands or tens of thousands of data points, for example, for those mortgage applications. So it is not that this one data point contributed to Dan's being accepted and Mahan's being rejected; it's shades of a tiny bit of this and a tiny bit of that, and a lot of this and a lot of that. Therefore, it makes transparency almost impossible when data has made the decision. So how are we supposed to think about it, and not just go to the other end of it and say, I accept that AI must have come up with the right answer? How can you have transparency when there are so many data points being processed?

[00:29:48] Dan Turchin: It was a fascinating episode of AI and the Future of Work with a gentleman named Krishna Gade, who's the CEO of a company called Fiddler. And all that Fiddler does is practice explainability in AI. And I learned a lot from hanging out with Krishna about what it takes to be able to unpack a decision made by these phenomenally complex models; as you mentioned, sometimes these neural nets are millions or tens of millions of layers deep. And so to really understand the components and the training process and the weights (we call 'em hyperparameters) in the algorithms, it's certainly not a task suited for a human.

Fiddler is actually a platform that, as Krishna says, "perturbs": you introduce small perturbations, some nuances, into a model when it's making a decision, and see how the model responds incrementally to these perturbations. And I don't know if that's the ultimate approach or not, but we need to have the conversation about not settling for: you know what, it's magic.

We stirred the pot and the cauldron, and it started to bubble, and we're going to taste whatever comes out of it. That's a really dangerously naive approach. Whether it's Fiddler or other approaches: I mentioned three principles of responsible AI. The middle one is predictability, and part of predictability is agreeing, before we just bless these AI models and send them off into the wild, that there's some way to introspect the decisions, to verify the integrity of the algorithm and the integrity of the data, and to answer questions like: what could go wrong?
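The perturbation idea described above can be sketched in a few lines: nudge each input feature slightly and measure how much the model's output moves. This is a toy illustration, not Fiddler's actual API; the model and feature names are made up:

```python
# Toy perturbation-based explanation: nudge each input feature by a small
# relative amount and record how much the model's output shifts. Larger
# shifts suggest the feature mattered more to this decision. Illustrative only.

def model(x: dict) -> float:
    # Hypothetical decision function standing in for any black-box model.
    return 0.7 * x["credit_score"] / 850 + 0.3 * x["income"] / 100_000

def explain(model, x: dict, eps: float = 0.01) -> dict:
    """Sensitivity of model(x) to a small relative perturbation of each feature."""
    base = model(x)
    sensitivities = {}
    for feature, value in x.items():
        perturbed = dict(x)
        perturbed[feature] = value * (1 + eps)
        sensitivities[feature] = abs(model(perturbed) - base)
    return sensitivities

applicant = {"credit_score": 700, "income": 60_000}
s = explain(model, applicant)
print(max(s, key=s.get))  # the feature this decision was most sensitive to
```

Production explainability tools are far more sophisticated (handling correlated features, categorical inputs, and sampling), but the underlying move, probe the model and watch the response, is the same.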

I use the example on my podcast: I think that every vendor that uses AI, which is increasingly every vendor of any technology product, should have their algorithms scored, like we score the hygiene of a restaurant. I know that I don't particularly wanna take my kids to the restaurant whose kitchen got a B or a C, because there are probably roaches in the kitchen, or something else that I really don't wanna have my kids exposed to.

Similarly, I don't want my kids exposed to the equivalent of a roach motel because the vendor doesn't pay attention to the principles of responsible AI. I think that's the kind of framework, and insistence upon enforcing discipline, that we need to have. And I think we don't have the benefit of three, four, five years.

That has to be a 2023 objective. 

[00:32:26] Mahan Tavakoli: That's an outstanding perspective, Dan, and I love the analogy. You don't necessarily assume that because it's a restaurant, it's going to provide you safe, healthy, good food. Just because it's AI doesn't mean there is transparency, or that the organization has done what it needs to do to make sure those biases are accounted for or eliminated. Because what ends up happening on the other end of it, a lot of times, is that as humans we tend to believe the technology or the AI is right, regardless of what process it has gone through.

A little while back there was an incident with an African American man who was arrested based on facial recognition. The cops didn't question it, because the AI had flagged him as the suspect, even though there was a 40-pound difference between him and the actual suspect.

So that's why we need to question the AI. What you are saying makes perfect sense: we need to come up with ways of measuring and making sure that the vendor or the AI system being used is one that meets and exceeds the standards, not just "it's magic, trust us, it's going to give you good results."

[00:33:55] Dan Turchin: One of the things that I encourage everyone to understand about ChatGPT, or any other AI-based system, is that it's essentially really good at pattern matching at scale. There's such a phenomenal amount of content that was fed to ChatGPT that it learned on, and this is based on a 2021 version of the internet gathered through a system called the Common Crawl.

It essentially mashes up stuff that it's ingested, based on things that are out there on the internet. I don't know about you, but I don't believe everything that I read on the public internet, and so we should not be surprised when ChatGPT authoritatively generates nonsense, because there's a lot of nonsense that was on the web in 2021.

What ChatGPT does really well is predict the next word based on a sequence of words. Take the example you gave on your January 3rd podcast: help me schedule the launch of a startup. You could take a leap of faith and imagine it's seen a billion examples of startup launch plans.

It could put one together that's pretty credible based on the parameters you gave it. Is it magic? It seems like it, but when you actually unpack what it's doing and how it's doing it, you get a sense of, I don't necessarily wanna rely on it for everything. It shouldn't be my therapist, right? Life-and-death decisions I don't want made by a system that's that fallible. But I think it's important just to remember what it actually is and what it does, and why it's okay to be amazed by it, because it is amazing technology. But it's not a panacea.
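The "predict the next word" idea Dan describes can be shown with a toy sketch: a bigram model that counts which word most often follows which in a tiny corpus. Real large language models are vastly more sophisticated, but the core mechanic of choosing a likely next word from observed sequences is the same in spirit.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus (any text would do).
corpus = "the model predicts the next word and the next word follows the last".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequently observed successor of `word` in the corpus, or None."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "next", the most common word after "the" in this corpus
```

The model never "knows" anything; it only reflects the statistics of its training text, which is exactly why nonsense in yields nonsense out.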

[00:35:37] Mahan Tavakoli: And it's important, as we talk about these public policy issues, to address the caution that everyone needs to keep in mind when talking about AI. That said,

I have also been amazed at the many tools available to organizations to implement AI effectively, to better understand their data and take advantage of the information inside the organization, as you are doing at PeopleReign, making interactions much better and easier for the HR and IT needs of organizations.

So what I would love to know, Dan, is your thoughts and perspectives on the types of functions within organizations that lend themselves most credibly to the use of AI at this point. If you were leading a team or an organization, where would you strategically say, we need to focus on the use of AI with respect to this part of our business?

[00:36:53] Dan Turchin: So, another recent guest named Eric Olson was on my podcast. He's the CEO of a company called Consensus, and I love what Consensus is doing because I think it's a good blueprint for the answer to your question: where is AI appropriate versus where is it not?

What Consensus does is aggregate billions and billions of pages of research published online in scientific journals. You ask it a question, maybe an ethical question, or maybe a question about drug efficacy, or, is vitamin D good for me? Things where there are differing opinions.

Credible researchers have different opinions, and they're well researched and well published, and Consensus goes out and, using generative AI, summarizes the pros and cons. I frequently talk about AI being augmented intelligence, not artificial; there's nothing really artificial about it.

It's very clear what it does, and it's not fake intelligence; it's really good at augmenting the intelligence of humans. To answer your question, Consensus is a great example: I, as one person, can't possibly plumb the depths of everything ever published about vitamin D on the internet.

But wouldn't it be great if I could ask the equivalent of a ChatGPT, in a Consensus search engine: hey, just summarize what I need to know about whether or not I should be encouraging my kids to take a vitamin D supplement, and it comes back with five pros and five cons.

Scientifically verified and validated, from first-class scientific journals. That's an example, if you extrapolate it out to education and public policy and healthcare and criminal prosecution, of where I would strongly trust summarization algorithms or suggestions, where there's always a human in the loop, always a human at the steering wheel.

There's always a human pressing the button, but AI is being used to accelerate or augment their intelligence. I think that's where we should look for the best use of this technology moving forward.

[00:39:06] Mahan Tavakoli: And as we are doing that, organizations are sitting on reams of internal data as well. What are the best examples of accessing internal organizational data for similar types of insights?

[00:39:26] Dan Turchin: I'll give you an example from people Rain

McDonald's has 2 million employees that ask tens of thousands of questions a day: When am I gonna get my tax forms? I need to update my profile. What's our reimbursement policy for home office equipment? What's our policy on maternity leave? Can I upgrade to the latest iPhone?

And if so, when? Just things that come up in the ordinary course of business, and a great use of AI is being able to summarize or answer these questions that come up routinely. In the old way of doing work, if I, as an employee at McDonald's, call the help desk, I expect that some all-knowing being is gonna answer the phone, just sitting there waiting for me to ask a question that person has never heard before, and that they're gonna magically provide the right answer.

It's unrealistic, and that, ironically, is what leads to a very dehumanizing experience, because the person on the other end of the phone does exactly what they've been trained to do. They say, I'm gonna submit a ticket on your behalf and I'll let you know when I have an answer, and then they go off and spend some time researching it.

Wouldn't a better experience be if I could just ask the question to a virtual agent that knows everything about every question that's ever been asked about an iPhone, or about benefits, or about anything, and, using natural language in real time, I get the answer in my language of choice? That's using the power of data and the power of algorithms to make human life better.
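The core of the virtual-agent idea can be sketched very simply: match an employee's natural-language question to the closest known FAQ. This is a hypothetical illustration only, not PeopleReign's actual implementation; the FAQ entries, answers, and word-overlap matching are all invented for the example.

```python
# Illustrative FAQ store; entries and answers are made up for this sketch.
faqs = {
    "when will i get my tax forms": "W-2 forms are mailed by January 31.",
    "what is the reimbursement policy for home office equipment":
        "Submit receipts up to $500 per year via the expense portal.",
    "what is our policy on maternity leave": "12 weeks of paid leave.",
}

def answer(question):
    """Return the answer for the FAQ whose wording overlaps the query the most."""
    q_words = set(question.lower().strip("?").split())
    best = max(faqs, key=lambda k: len(q_words & set(k.split())))
    return faqs[best]

print(answer("When am I gonna get my tax forms?"))
```

Production systems use far richer language understanding than word overlap, but the shape is the same: routine questions answered instantly from data, with humans freed up for the questions that genuinely need them.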

Granted, I'm quite biased, cuz that's what PeopleReign does. But the reason we started PeopleReign is that this problem exists at scale. So I think that whenever there's a question that can be answered with data, and whenever something can be predicted or learned from data, but augmented by empathy and rational thinking and all the things

that require the best of our humanness. To me, that's where we should be investing in better use of the technology, and certainly in corporate settings, I think we're less than 12 months away from that being the experience that employees expect.

Because when we use Netflix and get great recommendations, or personalized experiences on Amazon, it no longer seems like science fiction; we expect that experience. When you go to return a package on Amazon, you don't say, hey, you know what, I need to submit a trouble ticket to the Amazon help desk, right?

Nobody says that's the way to get my problem solved. You just go online, fill out the form, and the package magically gets returned. We're taking those sensibilities with us into work, and the capability to deliver a high-touch, very human process for a better employee experience exists.

The technology's there. It's essential that we demonstrate that we value, respect, and trust employees by giving them the experience they're already getting as consumers outside work.

[00:42:35] Mahan Tavakoli: Oh, I love that, Dan. What you've mentioned a couple of times is reducing that friction, which both makes the work more rewarding and easier for people and gives them that time back.

You mentioned the Amazon example, and I had a conversation with John Rossman, who has written a series of books on the Amazon Way and launched Amazon Marketplace. He talks about innovation at Amazon, and his point is that what people see as truly transformative innovation is really a slight reduction of friction.

He used the example of returning something to Amazon: first you had to fill out a lot of things, then you had to put it in a box. Now, when my daughter wanted to return her shirt, I just went to UPS and dropped it off. It's that reduction of friction that AI allows the organization to achieve.

Now, on the professional level, how do you see individuals needing to reinvent themselves and learn, not necessarily to become AI experts or start businesses, but to be able to augment themselves in this future of work?

[00:44:00] Dan Turchin: Great topic. I'm glad you brought that up, Mahan. A guest on the show named Kamal Ahluwalia is the president of a company called Eightfold, a talent management platform that uses AI to figure out, whether it's lateral movement within an organization or vertical movement, how to solve the upskilling and reskilling problem. Oftentimes the best talent to fill an open role is within the four walls of a company, but it's really hard to identify if you come from, let's say, a non-traditional background.

If that calcified degree on the wall says, I don't know, you're an accountant, you're not likely to be the marketing team's first choice. And yet perhaps you have skills: you're a creative writer, you have a blog, you're a youth sports coach, or some other attributes that tend to correlate well with people who excel in marketing.

Wouldn't it be nice if, as an organization, you could use AI to figure out who the best people are, where the talent is that could best fill these roles? It costs twice as much to fill a vacant role as it does to invest in an existing employee to prevent them from leaving the organization. Knowing that alone argues for approaches to talent management that use AI to build a skills profile, as opposed to looking at a resume.

To me, that's the future of work, and more and more I'm seeing organizations invest in a more progressive view of talent and how they manage it. That's the vision I hope more companies start to embrace when it comes to the dire need to reskill and upskill.

It's gonna be very real in the next 10 years.

[00:45:45] Mahan Tavakoli: I love that, Dan, because it's taking this tool and applying it to a resource we have that we are not tapping into now. It opens up new opportunities and new potentials, whether within the organization or outside.

So AI has lots of different potentials. Now, in addition to your podcast, Dan, are there resources or practices you recommend for executives and business leaders so they can better understand AI and its application to the organization's strategy, and tactically within the organization?

[00:46:29] Dan Turchin: You mentioned Yuval Harari. Homo Deus is a bit of a dystopian read, but there are a lot of interesting ideas, and a lot of what I espouse when thinking about the future of work ties back to, or is consistent with, some of those principles. Another great guest on my show I'd encourage your listeners to read up on is Dr. Mark van Rijmenam. He is a futurist and an author and does a lot of interesting thinking about some of these principles. Another one you actually mentioned, Gary Bolles, is a friend who's been on my podcast twice, and I consider him a real progressive thinker and a visionary; he's actually the chair of the Future of Work track at Singularity University.

He publishes a lot of his work in the form of LinkedIn Learning courses, so I encourage everyone to go out and get to know Gary and the way he thinks about this stuff. That's a starting point, but there are so many great thinkers out there.

Oh, I'll mention one more: I just recorded an episode with a professor at NYU named Meredith Broussard, who is about to publish a very interesting book on the ethics of AI called More Than a Glitch. Meredith Broussard is a name for your listeners to keep in mind; she has a previous book called Artificial Unintelligence and is following it up with More Than a Glitch. So those are a few thinkers, a few leaders, who I would encourage your listeners to follow.

[00:47:54] Mahan Tavakoli: Outstanding recommendations. And Gary's son lives in DC, so whenever he's in town, I get a chance to corner him and have conversations.

I love his thinking as well. Now, in addition to all the AI work and all the entrepreneurship, you seem to be an adrenaline junkie and a triathlete. So what is it that makes you an adrenaline junkie, in addition to all the training you have to do for the triathlon?

[00:48:25] Dan Turchin: Mahan, guilty as charged. I find that a lot of the energy that fuels my passion for entrepreneurship, my passion for AI, my passion for the podcast, things like that, is the same desire to find your limits, test your limits, and exceed them. To me, that's a way to celebrate our humanness, our humanity: going right to the edge of what you're capable of and discovering that you're capable of more than you thought you were.

Whether that's finishing a triathlon after saying, before you start training, there's absolutely no way this human can finish a triathlon, and then doing it; or maybe it's scuba diving in a foreign place or a setting that takes you out of your comfort zone, or jumping out of a plane.

These are all things a different version of yourself might say other people can accomplish, but I never could. And then to go and train for it, prepare your mind and your body, and accomplish something you didn't think you could do, there's no greater celebration of your humanness than that. Whether it's work, play, or family, that's life.

And to me it doesn't get any better than really always feeling like you're pushing your limits and living life to the fullest. 

[00:49:35] Mahan Tavakoli: You are doing a great job with that, and I love the focus you have for your organization in making lives better for the people you serve in the organizations you work with.

I also love that in one of your LinkedIn posts you quoted your muse, the man in the purple velvet jumpsuit, and said he said it best: dearly beloved, we are gathered here today to celebrate this thing called life. Dan Turchin, thank you so much for helping us celebrate life and use tools such as AI to live richer and more fulfilling lives.

Thank you so much for this conversation, Dan.

[00:50:26] Dan Turchin: Such a pleasure, Mahan. I really enjoyed this. Thanks for having me.