Ranked in the top 1% of all podcasts globally!
April 16, 2024

318 Bracing for Impact: How Generative AI Will Disrupt Education and Learning – and How to Ride the Wave with Jason Gulya | Partnering Leadership Global Thought Leader


In this episode of Partnering Leadership, Mahan Tavakoli sits down with Jason Gulya, a higher education AI Strategist, to explore the transformative potential of artificial intelligence (AI) in education and organizations. As someone who initially resisted the idea of becoming a teacher, Jason's journey is a testament to the power of adaptability and embracing change.


Jason shares how his unique upbringing, with three brothers close in age, shaped his understanding of community and individual needs, laying the foundation for his approach to teaching and learning. He candidly discusses his initial skepticism towards ChatGPT, fearing it would enable plagiarism and undermine education. However, his willingness to approach the technology with an open mind led him to recognize its potential for enhancing learning experiences and driving organizational transformation.


Drawing from his experiences as the chair of the AI council at his college, Jason offers a fresh perspective on AI's role in education. He emphasizes the need to cultivate foundational skills while leveraging technology as a complementary tool. He advocates for a shift towards a more dynamic and relevant approach to teaching, where AI serves as a cognitive partner, challenging assumptions and fostering critical thinking.


Actionable Takeaways:

  • You'll learn how Jason uses ChatGPT in his English classes to facilitate engaging discussions and encourage students to critically analyze their beliefs, pushing back against AI-generated counterarguments.
  • Hear how AI can be leveraged as a powerful brainstorming tool for organizational change and strategy development, allowing leaders to explore innovative solutions and break out of traditional silos.
  • Discover Jason's advice on embracing adaptability and critical thinking as essential skills for navigating the rapidly evolving AI landscape, where job roles and pipelines are constantly shifting.
  • Understand the importance of allowing emotional reactions to new technologies while maintaining an open mindset and focusing on practical applications rather than getting caught up in hype or fear.
  • Learn about Jason's approach to exploring AI's potential, starting with a single program and investing time to understand its capabilities and limitations before strategizing its implementation.




Connect with Jason Gulya

Jason Gulya LinkedIn 


Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


Transcript

[00:00:00] Mahan Tavakoli: Jason Gulya, welcome to Partnering Leadership. I am thrilled to have you in this conversation with me.

[00:00:05] Jason Gulya: Thank you so much for having me. I'm super excited to talk to you and have this conversation. And my son apparently already wants to be in on the conversation.

[00:00:12] Mahan Tavakoli: I love the fact that you just had your second son,

that will make talking about education and the future of education even more relevant, Jason. But before we get to that, we'd love to know a little bit about you.

Whereabouts did you grow up and how did your upbringing impact who you've become, Jason? 

[00:00:33] Jason Gulya: I grew up in New Jersey, in the U.S., and I grew up with three brothers, all of whom are very close. We're all about a year and a half apart from each other.

And that's actually been huge for me. I'm gonna get my son in on the conversation, apparently. Awesome! We all grew up about a year and a half apart, and it was so interesting to me, because that's where my understanding of community comes out of. My brothers and I really relied on each other.

My parents got divorced when I was relatively young; I was about nine years old. And one of the things that I've come to learn is that out of that experience, my brothers and I got this really interesting and strong bond that many siblings I know just don't have. And there are good things and bad things that came out of that.

But even to this day, it's pivotal for my understanding of community. And that has actually, in a weird way, transitioned into my own understanding of education, now that I've had so many conversations with my brothers about their schooling experience, and even their understanding of our parents. One of the things that was interesting to me growing up was, so I was nine when my parents got divorced, my older brother was almost 11.

Then I had a seven-year-old brother and a four-to-five-year-old brother, and we all had these astoundingly different experiences. There's a huge difference when you're nine and you're going through something like that versus when you're five. And so even from that experience of just talking about what was going on with our family, and then eventually what was going on with our schools as we moved around a bunch, it really brought home to me just how different people are. Now, it seems obvious. But really leaning into the different needs of people and of learners. And to this day, that kind of informs my understanding of community through basically individual practice and individual needs.

And so I still return to that. In many ways, that upbringing shaped me. I think it always does. And to this day, my brothers and I are pretty close, and we're constantly throwing ideas at each other. And three of us ended up being educators. So read into that what you will.

[00:02:44] Mahan Tavakoli: That's a wonderful way to tap into your own experience, see the uniqueness of each individual's experience, and therefore relate that to your interactions with the students as a professor.

Now, did you aspire to become a teacher, or how did you end up becoming a professor, Jason?

[00:03:06] Jason Gulya: No, not at all. When I went to college, the only thing I really knew was that I didn't want to become a teacher. And I certainly didn't want to become an English teacher. That's what I went in with.

I was political science, and then I was pre-law. And then I learned something. I learned that not everyone taught English like my high school teachers did. I realized that there actually was a different model, and for me, that was the college model. And then it started to dawn on me, like, oh, actually, maybe that's not that bad.

I became this sort of stereotypical example of being drawn to what you used to hate, because you start to look at it in a different light. I know I had no aspirations to be a teacher. I liked most of my teachers, but honestly, throughout middle school and high school especially,

I had a love hate relationship with formal schooling. Sometimes I felt like it was fine with me. I felt like I was being fulfilled. I was finding out interesting things, finding out things about myself. And then other times I just didn't.

And especially when I had anything that was really formal, like whenever I was asked to write a five-paragraph essay or something like that, something I felt was just a form being pushed on me, that's when I started to push against education. And to this day, that informs how I teach: I see myself as someone who's constantly toggling back and forth between formal education and informal education, especially now, with AI, when we have this ability with artificial intelligence to create a personalized tutor, and it doesn't have to be in a formal school setting. And for me, I know that there are a lot of worries about that, and many of them are well warranted, but for me, that's an empowering thing, because I think back to myself when I was 15.

And I would have loved that. I would have loved to have had access to something like ChatGPT. I didn't even have my own personal computer when I was 15. And so there's just an astounding gap between what I grew up with and what children have today, and certainly what my three-year-old will have.

And my five-month-old, I don't even know what the world's going to look like in 10, 15 years, when they're going through the things that I was going through at a certain age.

[00:05:26] Mahan Tavakoli: I love your perspective, Jason. Your love-hate balance helps you understand what works and what doesn't, and be able to relate to a broader segment of the students who are trying to learn. And that is a real potential for AI. But before we get to that, I think the audience would be thinking, wait a minute, we've got this energetic guy who is an English professor and has now created some of the most outstanding content. Jason, I read a lot of what you write

on AI and its impact on education and its use in education. How did you end up chairing the AI council at your college and running so fast with AI?

[00:06:15] Jason Gulya: One of the things that I was thinking about this week: a few days ago, the CEO of NVIDIA was doing a talk about artificial intelligence.

He got on stage and he said that five years ago, or even two years ago, everyone was saying computer science: if you want to get a job, if you want to really excel in the world, computer science. And he said on stage that that is no longer true, that for whoever's creating these programs and the chips and everything behind AI, their job is to make it as widely accessible as possible. And so I actually think there's a way in which the big skill in the future isn't going to be just coding. That'll be important, but the big skill is going to be communicating. It's going to be writing. If you want to get something out of AI, there's actually a lot of writing and communication skills that go into the background.

I think that in many ways, and maybe this is just me rationalizing it, in some ways I feel like an outlier, but in some ways I think it makes sense. Because when ChatGPT came out, suddenly there's this program you can just communicate with and get something out of.

Suddenly, when there's that sort of program, I can start making video games. I can start prompting whatever AI program I want through just directly communicating, the same way I would as a human. And writing actually informed my introduction to this technology. So the first time I saw ChatGPT, it was probably a week or so after it came out. I came across it, started playing with it, and I played with it for about 15 minutes, half an hour, somewhere in there.

And I turned to my wife, who's on the other side of the table, and I said to her, I just discovered the most horrible thing that is going to be the future of plagiarism, and it's going to destroy schools. 

[00:08:12] Mahan Tavakoli: You just described what almost every professor that I have talked to has at one point or another said.

[00:08:20] Jason Gulya: I get it. And I think there is something about me because in the same way as I did with my degree, I suddenly became an English teacher after never wanting to be one. I hated it. And then I literally put it aside. I was so distraught that I put it aside for about a week and I came back to it and I started playing with it and I started approaching it as a learner.

I basically said to myself, let's not think like a teacher. Let's not think like a professor. Let's not worry about X, Y, and Z. Let's just see what I can teach myself. And that approach fundamentally changed how I think about this technology. Now, I still think a lot of those worries are there, but for many of us, that's where we need to start.

We need to have that space to have that emotional reaction. That emotional reaction is fine, it's well warranted, but then we try to figure out if we can shift mindsets, and what that means. That really changes how we approach this technology. And now I teach with it.

I teach it. I use it in my consulting business all the time. And I'm also trying to get more and more people and organizations to think about it strategically. My original worry came out of just thinking as a writing teacher. And from there, it's snowballed. To this day, if you are an English major or a communication major or anyone in that realm, you're actually in a very good spot to do very well.

Anyone who focuses on communication or public speaking or writing or anything like that, you can use this technology to really take off if you have a certain approach to it.

[00:09:58] Mahan Tavakoli: That's interesting, Jason, because that is different than what a lot of other people are saying, in that, the same way the calculator took away the ability of kids to do math in their head, I've heard the same kind of argument about ChatGPT and these LLMs: that when they are so fluently writing, it's going to take away our ability and the need for us to write. But you're saying something that is totally different from that.

Help me understand that better. 

[00:10:32] Jason Gulya: I do think that worry of cognitive offloading is real. It was real with the calculator, and it's real with the use of this technology. It's interesting, because when ChatGPT came out and suddenly millions of people were playing with this technology,

I kept hearing the analogy to calculators. And I was actually curious about how controversial calculators are, so I did a deep dive. As it turns out, they're incredibly controversial to this day, if you look into educational researchers and how they're talking about calculators.

There is a ton of conversation and controversy about when to introduce them. And I actually think that the people who are going to be positioned for success are the ones who are able to use this technology but combine it with foundational skills. I do think that there is a well-warranted anxiety about foundational skills starting to fade away.

That can certainly happen. I think that what we need to lean in on is teaching those foundational skills and then combining them with the technology. What a lot of people miss is that if you lose the foundational skills, if you just skip steps one, two, and three, and you jump straight to step four with artificial intelligence.

If you do that, your ability to evaluate outputs is hurt a lot. You and I can sit here and produce something with AI, and we can evaluate it, we can break it apart. The reason we can do that is because we have all this experience without AI. If you take away that experience, our ability to evaluate and revise and change the output is going to be hurt dramatically.

So one of the biggest things we need to do, and this goes not just for schools, this goes for organizations too, right? If you're a CEO, if you're in charge of learning and development at your company, try to find a way to do that, to combine foundational skills with the technology, because that's going to be the optimal recipe going forward, as opposed to just using AI for everything all the time. At some point that becomes a liability, because you don't have those foundational skills that you're then applying to the technology.

It also makes you very replaceable if you don't have them. 

[00:13:07] Mahan Tavakoli: That is beautifully put, Jason. In some instances, teachers and professors have tried to keep people from using it, and have used detectors, almost none of which work, rather than making the kind of argument that you made.

[00:13:25] Jason Gulya: One of the things that the hype around AI has done, and I do think we are very much still in the middle of a hype cycle, is make us more extreme than we need to be. When the hype dies down, I hope that we can have a more measured approach to this technology. As the hype dies down, I hope that we'll be able to find a way to keep foundational skills in the conversation and be able to create trust. I have been very public with my dislike of AI detectors. I do not use them. I do not rely on them. I do not encourage anyone to use them. I have encouraged my college to cancel all subscriptions with our AI detectors.

We pay for one of them, and I actually recommended to our president to get rid of it. Save the money, use it on something else. For a couple of reasons. One, as you mentioned, they're not particularly accurate. And they tend to flag certain kinds of writing. They've been shown, through a lot of research, to flag writing that is written by a non-native speaker.

And it makes sense. It makes sense that that would happen. If you are picking up a language, you're going to use certain terms that feel very textbooky. You're going to use more passive constructions. You're going to write a little bit more like something like ChatGPT, right? If you're picking up any language, it's just how language acquisition works.

And so once we flag those students, it is a huge problem. And it is a problem at the college level. I've talked to many students who said, I did not use AI at all, and I was flagged. I actually had one of my pieces of writing, written completely by me, flagged as like 70 percent AI. It happens.

And that's where I am, at the college level, but it terrifies me at lower levels. I tried to run this thought experiment a couple of months ago: if I were, say, 15 years old, and I handed in a paper and it came back as 70 percent, and my teacher approached me and just said, you get a zero, you used AI, that would wreck me.

That would completely wreck me if I were 15. I would not have the confidence to push back. It would destroy me. It would destroy my relationship with school. And I think that we have to be very careful because of that, especially when we are teaching certain levels. Because approaching that 15-year-old, or, gosh, that 10-year-old or that seven-year-old, with that kind of accusation, if you're wrong, that is almost as high-stakes as you can get in education.

[00:15:59] Mahan Tavakoli: I totally agree with you on that, Jason, and I also believe the same thing that you do, that AI is going to be transformative in all aspects of our lives, including in education. But there are quite a few skeptics out there. Whether with online education or MOOCs about a dozen years ago, everyone was saying MOOCs would be the future, the few courses from Stanford that everyone around the world would be taking and participating in, and those fizzled out. Why do you think AI will be any different and change education in a way those other technologies or formats didn't?

[00:16:39] Jason Gulya: One of the reasons that AI is going to continue to exist and have that transformative power is that it's being worked into everything. When ChatGPT came out and suddenly everyone was playing with it, the technology itself wasn't that new. We'd had it for quite a few years. It had been made more advanced, but we'd had it for a while.

One of the big changes was obviously the chat format, which made it more accessible for more people, but also the ability to build off of it. Now that we have these foundational models, we have ChatGPT, we have Claude, we have Gemini and so on and so forth, and people are building off of them, that's going to make them more resilient.

The other reason is we have more and more open-access and free versions of it, and local versions of it. We have this proliferation of different versions of AI, and our students, even today, are interacting with AI and sometimes they don't even know it. They don't even know that it is behind their interactions.

And the other reason I think AI is going to continue to exist is that AI is a huge category. In education, we tend to narrow down the scope. We're thinking about ChatGPT, which is only one kind of AI. It's artificial intelligence, but not really; it's actually a large language model. And not even really that; it's generative AI, so it's a kind within a kind of AI. But when you look in a larger way at what's happening with AI, there's a lot of AI that is not generative AI, things like AI in biocomputing, even. Because there are all these different things happening, with AI being built with AI, it's going to continue to be around. And it's already started to affect how our students interact with information. All those factors together, the fact that AI has so many different forms, the fact that people are building really cool and interesting things on these foundational models, are going to make them more and more resilient.

Now, that being said, I do think we're in a hype cycle. I do think that VC money for a lot of these businesses is going to fade away. And I do think a lot of the companies and a lot of the AI programs that are around right now will not exist a year from now.

I do think there is going to be an AI winter. Now, I don't think that's going to do away with AI. The big foundational models will still be there, especially now that they're spearheaded by some of the biggest corporations you can imagine. But I do think that there's going to be this shrinking.

So some models will fade away, and certainly products will go away. AI will persist and be transformative, at the same time that, I do think, there is going to be an AI winter in the not-so-distant future.

[00:19:41] Mahan Tavakoli: When I think about this, Jason, whether it's K through 12 or college, there is a part of me that thinks we are in for wonderful things, in that kids who are the same age don't necessarily have to learn the same thing at the same pace. You can have a human teacher using AI to customize and tailor the learning.

There is another part of me that thinks, whether it is colleges and universities or the broader education system, they are very resistant to change, more so than almost any other institution that I know of, having operated in many instances with success for hundreds of years. So where do you think this will land in education?

When we project out beyond the hype cycle, do you think this will be a blip, where education 5 to 10 years from now will be the same as it is today, which is very similar to 10, 100, and 200 years ago, or will it be drastically different?

[00:20:56] Jason Gulya: Honestly, I do not know. I don't think anyone does. Colleges especially are very strange.

They are very strange organizational structures. And what I mean by that is that in some ways, colleges, especially if you're talking about elite colleges and universities, are extremely resistant to change. They have this very real, almost palpable investment in being a legacy institution. That's especially the case if, as many people do when they're talking about colleges, you use the Ivy League as the example,

for better or worse. And so certainly in that context, when they are funded primarily by a private endowment, there is not much of an incentive to change. But most colleges are not Ivy Leagues. Most colleges and universities are under a great deal of pressure to change and adapt, whether or not they're doing it.

They have enrollment issues. They have all these different incentives to actually change. So on a real macro level, colleges and universities are extremely resistant to change; on a micro level, they're some of the most change-happy places in the world. So if, for example, I decided today to change something about one of my classes, on a whim, I just think, oh, you know what, I'm going to change this.

I'm going to make this huge change to the content. I'll change it today, change it in the next hour. I'll do whatever I want. For colleges, professors can actually be extremely innovative if they want to be, and that's the big if. If they want to be, they can be extremely innovative.

Now, there are good and bad sides to that. Because on the one hand, that can give the professor the leeway that they need to help students get ready for work, right? To actually come up with different ways of teaching and personalizing learning and everything like that. But, the downside of that is it takes away the snowball effect.

If you have, say, 100 faculty members, and 20 of them are doing innovative things but in very small pockets, then it's really hard to make organizational change happen. Because of that, colleges are very strange. So I could see it going in either direction. The optimist in me thinks that change is going to happen in education.

I think it's going to sneak in. And this is actually starting to happen: one of the first things that educational institutions and colleges did was they tried to ban AI. Now I think there is this growing realization that that's not actually possible, because you can ban the foundational model.

You can ban something like ChatGPT or whatever you want, but it is really hard to enforce a ban on everything built on that model, especially when Word is going to have it in it and Google Docs already has it in it, right? As soon as you have that kind of proliferation of the technology, and it gets cheaper and cheaper to use,

and we're trending that way too, it's really hard to enforce that ban. And the way that it's starting to sneak in is through faculty and administrative use. Today, we have a weird, I don't want to call it a hypocrisy, I'll call it a disconnect. What we have right now is educational institutions saying you cannot use AI in the classroom, you can't use it for this assignment or whatever.

And then they go and use it for something themselves. They use it for an administrative task, or to create a lesson plan or quiz, or whatever it is they're doing with it. And at a certain point, that disconnect is just going to become more and more of a gap. And then what's going to happen is it's just going to break.

Then at that point, AI is going to be allowed in more and more classes, and we'll have to think, on an individual basis and an organizational basis as well, especially with leadership, about what role AI can play in schooling. And on another level, we'll have to think about what role the educational institution, whether it's a college or high school or middle school or whatever, plays in society, because we'll have to rethink that too. That's another big question that AI is bringing to the forefront: with AI, what is the value proposition of something like a college education? And one of the good things that will come out of AI is pushing us to answer and address those questions. Because those are important questions that we should have addressed a long time ago, but we'll need the technological innovation of the age of AI to actually do it, and to think about what those questions mean for us and how to address them.

[00:25:54] Mahan Tavakoli: They are really important questions. And a couple of them I want to go deeper with you on. I'm sure you've had a chance to play around, Jason, with Khanmigo. It's an intuitive tutor helping you learn content. And therefore, when I look at many of the classes that I took, especially undergrad classes at college, it would take out the need for some of those classes, and this kind of tutor interaction would be much more helpful to my learning. That doesn't mean there isn't value to that in-person experience, but it requires rethinking. We'd love to get your thoughts on how you see that educational classroom experience evolving as a result.

[00:26:41] Jason Gulya: AI tutors are a fascinating one. Khanmigo is really interesting to me, and in many ways I use it as a test case for thinking about what chatbots, and tutor chatbots specifically, can actually do. And one of the things I think is happening with some of these AI tutors, as I've played with them more and more, is, one, and this kind of worries me a little bit, sometimes I think we assume that they're more engaging than they are.

One of the things that is very starkly obvious to me is that I have an AI tutor that I can bounce questions off of, that can just teach me things, that I can ask more and more questions of, and it never tells me to shut up, sit in a corner, and stop asking questions, because the technology doesn't do that yet. You could make it do that, but to this day, as a default setting, it does not. And so for me, that's really cool. Then you sit a kid in front of it, and sometimes that happens. A lot of times it doesn't; a lot of times it falls flat. And in my field of teaching reading and writing, tutors are having a hard time, Khanmigo included.

So if you use Khanmigo to work on reading, it doesn't actually teach reading. It teaches content. Say a student is reading a book; I think one of the examples out there is reading To Kill a Mockingbird, or whatever it is. What Khanmigo will do, instead of teaching the student to read, is ask them to have a conversation with Scout, one of the characters.

Which is not the same thing. That is not you learning to read; that's you learning about To Kill a Mockingbird. I'd much rather them learn how to read and analyze. Now, there are some tutors that are getting better with that, and that's going to happen in the future.

It is going to eventually be ironed out. So I've played with tutors. In a perfect world, at least for me, it's going to be a matter of figuring out how to make those AI tutors complement in-person instruction, especially when you're talking about big institutions. Are you talking about a college serving 30,000 students?

There are so many different needs among the students. Some of them do not find online learning engaging, and I don't think that is going to change; that will continue to be the case for some students. And some students really thrive in an online learning environment. And then some students really need that 50/50. We're all in different places on this spectrum.

And it's going to be a matter of figuring out how programs like AI tutors can work into the larger ecosystem. And again, I don't think this is just a school thing; this is an organizational thing. Companies are going to run into the same question. If you are a leader in an organization and you want your employees to do blank, your goal is to put in the infrastructure that encourages them to do that.

It's going to be the same set of questions, and it's going to be about figuring out how we can shift along that spectrum for the individual employee or student. So I think a lot about AI tutors, and I use a lot of different programs. One of the things that I try to emphasize is that I don't think the future is an AI for each individual person. I don't think that's the future. I think the future is a bunch of different AIs talking to each other for each person. So at this point, I use AI enough that if I want to do something, if I want to complete something, it's not perfect, but I have some sense of which program I should use.

I know that, oh, ChatGPT, or Gemini, or Perplexity, which is another big one, especially if I'm doing research. And I find ways to create a conversation with them. Finding ways to have our individual AIs talk to each other, that's the future. There are different programs now coming out that do that, and we can also do it ourselves. So, one of the experiments I ran literally a few hours ago: I was doing a research project, and I said, I'm actually not sure which one I should use. I'd love to use them all. So I did. I took the research criteria, I threw it into ChatGPT and got something; I did the same thing with Gemini and got something; I put it into Perplexity and got something.

And then what I did is I created a personal GPT. These are personalized chatbots you can create with OpenAI, with the subscription. I created a research synthesizer that allows me to just pop all of that stuff in there and get a synthesis of the whole thing.

Which allows me to approximate where we're going: having these different AI programs talking to each other, that we have control over. And it's going to be a matter of figuring out how we do that. And so that's going to be how we move forward, even as, and I think this will happen, we reach an AI winter and a lot of the programs go away.

That ability to create a conversation out of chatbots or AI programs is going to be the way we move forward, whether we're in schooling or leadership.
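[Editor's note: the fan-out-and-synthesize workflow Jason describes can be sketched in a few lines. Everything below is illustrative: the `ask` function is a stand-in for real API calls, and the wording of the synthesis request is invented, not Jason's actual setup.]

```python
# Sketch of the workflow: send one research prompt to several models,
# then hand all of the answers to a single "synthesizer" (e.g. a
# custom GPT). ask() is a stub, not a real API call.

def ask(model: str, prompt: str) -> str:
    """Stand-in for a real API call to the named model."""
    return f"[{model}'s answer to: {prompt}]"

def synthesize(prompt: str, models: list[str]) -> str:
    """Collect one answer per model, then assemble a synthesis request."""
    answers = [f"--- {m} ---\n{ask(m, prompt)}" for m in models]
    # In practice this assembled request would go to the synthesizer
    # model; here we just return the text it would receive.
    return (
        "Synthesize the following answers into one summary, "
        "noting where they agree and disagree:\n\n" + "\n\n".join(answers)
    )

request = synthesize("Summarize recent work on AI tutors.",
                     ["ChatGPT", "Gemini", "Perplexity"])
```

The point of the design is that the "conversation between AIs" is just text plumbing: each model's output becomes part of the next model's input, so no single vendor's tooling is required.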

[00:31:51] Mahan Tavakoli: I see the applications, as you talk about the AI agents and some of the personalization. So I could definitely see those applications in education as well as in organizations. But one of the challenges that organizations face, Jason, and I imagine and know colleges and universities will face, is that there is a big challenge with change management. How are you seeing, especially in the education space, getting people to be open to this kind of change? Because in educational institutions, you become a professor, and once you get tenured, you are there forever. There is value to that, but it makes the organization and the individual much more resistant to change.

[00:32:37] Jason Gulya: Honestly, it's one of my favorite use cases for this technology. When you look online at all these experiments people are running with AI programs, they tend to overemphasize output, and specifically output done quickly. All this emphasis on "I wrote 20 emails in 20 seconds," whatever it is, just putting things out again and again, quickly. There's a huge emphasis on it. If we lean into that, we are missing something huge, because one of the big use cases for this technology is based on the fact that, honestly, it's not a human.

Because it's not a human, it interacts with our data in a way that doesn't carry human assumptions. So one of the things I did after Gemini came out (they have a very generous two-month free trial, which I'm still playing with) was start running questions by it. I said, I'm not going to ask it to do just the little things; I'm going to take the hardest organizational-level questions that I can and see what it does with them. So one of the things I did was work with it to create a brainstorming prompt. I knew what I wanted to do; I didn't know what the perfect output was going to be. I went in knowing that, wrote the prompt with that information, and asked it:

All right, what we're going to do is create a two-year plan for a college, a big college. I gave it a specific number, something like 20,000 students, going from level one with AI technology, meaning very little or scattered use of AI, to a five, which meant being an AI-powered university. I asked it to come up with a specific, concrete plan. I got an output, and I pushed it.

I pushed it as far as I could. I wanted students in the process; I didn't want them on the sidelines. I wanted them building with everyone else. I wanted as many people included as I possibly could, and I wanted very real, very concrete steps that ramped up incrementally. I told it that along the way, in follow-up messages, and in about 18 minutes I had a plan that I would say is further along than what 99 percent of colleges and universities have. Yes, you'd want to revise it a little, especially depending on the institution, but it is a plan that you could roll out,

one you could start now so that in two years you'd have an AI-powered university. And part of that, too, is that as you bring in these incremental stages, it saves you money. Part of the plan is doing things that would save you a ton of money on textbooks, and a ton of money on what it takes to perform administrative tasks.

And that, for me, is (I hate this phrase, but) a game changer, because we can actually use these AI programs to brainstorm solutions to some of the hardest organizational problems we have. The output's not going to be perfect, and that's a good thing, right? You need that human input and context.

But the fact that I can do that in 18 minutes is absolutely mind-blowing. In many ways, we just need to embrace the technology's capacity to come across new solutions, to brainstorm and throw them at us. We can revise them as we go along, because the beauty of this technology is that it can get us out of a rut.

It can help us when we're stuck. And again, this is a broad organizational use, independent of where you are: if you want to change how something works, you can use AI to brainstorm and get ideas. You can try different AI programs and see what comes out, because they all have their own strengths and weaknesses.

But that for me is the biggest untapped part of this technology. 
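[Editor's note: the planning exercise above can be sketched as a prompt template. The maturity scale, the numbers, and the wording below are guesses reconstructed from the conversation, not the prompt Jason actually used.]

```python
# Hypothetical reconstruction of the "level 1 to level 5" planning
# prompt. Parameter names, the scale, and phrasing are invented
# for illustration.

def plan_prompt(students: int, years: int, current_level: int,
                target_level: int) -> str:
    """Build a planning prompt asking for concrete, incremental steps."""
    return (
        f"Create a {years}-year plan for a college of {students:,} students "
        f"to move from level {current_level} (little or scattered AI use) "
        f"to level {target_level} (an AI-powered university). "
        "Include students as builders, not bystanders; involve as many "
        "stakeholders as possible; and give concrete steps that ramp up "
        "incrementally."
    )

prompt = plan_prompt(students=20_000, years=2,
                     current_level=1, target_level=5)
```

The constraints Jason pushed for in follow-up messages (students in the process, broad inclusion, incremental steps) are folded directly into the template, which is the main design choice here: put the non-negotiables in the prompt rather than hoping the model infers them.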

[00:36:51] Mahan Tavakoli: What a beautiful example, Jason. I've done something similar with a couple of organizations where we've talked about their strategy, and it is as if you have some of the smartest strategists in the room with you. It doesn't take the place of the judgment that the senior executives and the CEO need to have, and it doesn't replace the thinking. However, it brings you to a much different level, much faster. Now, within classes there is also potential, and I've heard you talk about how you are using it with your students.

How are you getting them to become comfortable with AI and to use it? What is your use of AI, as a professor, in your English class?

[00:37:47] Jason Gulya: I use it consistently. I've worked it into a lot of the assignments for my students, and I've done it in different ways. One of the things I try to do is use AI to teach better.

I think there's a lot of focus on efficiency: doing things faster, saving time. I understand why, especially in education, where everyone is overburdened and doing 400 different things at once. Again, that's not unique to education; that's just life. So there's a lot of focus on that, but I try to focus on how we can actually do it better.

As a very quick example: like a lot of people in my field, I teach argumentation. I think that going forward, one of the big skills is going to be finding information, synthesizing it, and figuring out what you think about it. That dynamic is going to be key, and that's not going to change.

I actually think that's going to intensify as this technology advances. The conventional way of teaching that was the five-paragraph essay. That's how we taught it for a long time, right? You write an essay with an introduction, you announce an argument, you back up the argument, and so on.

So that is the conventional way of doing it. What I do now is fundamentally different. I actually give my students a prompt, about a page long; depending on the class, I might make it a little longer. It is a contrarian prompt. I have them copy and paste it into ChatGPT, and some of them will read it.

Some of them won't. Basically, it asks the AI to act as a contrarian, and not one that just takes the other side, but one that pokes holes in their argument. It's about bringing out the assumptions. I have my students pop that prompt in and then give it an argument, something they believe, something they believe based on evidence.

And I have them chat with it. It made me so happy when they introduced the button in the top right of the chat that lets you send a link to the entire conversation. I have them send me the link, and I can read through it.
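[Editor's note: the contrarian exercise can be sketched as a small prompt builder. The wording below is invented for illustration; Jason's actual classroom prompt is about a page long.]

```python
# Hypothetical version of the "contrarian" exercise prompt.
# The instruction text is an illustrative guess, not the real prompt.

def contrarian_prompt(argument: str) -> str:
    """Combine contrarian instructions with the student's argument."""
    instructions = (
        "Act as a contrarian. Do not simply argue the opposite side. "
        "Poke holes in the argument below: surface its hidden assumptions, "
        "question its evidence, and push back on every reply. "
        "Never concede just to be agreeable."
    )
    return f"{instructions}\n\nStudent's argument:\n{argument}"

prompt = contrarian_prompt("The death penalty should be abolished "
                           "because it does not deter crime.")
```

The key instruction is the distinction Jason draws: not "take the other side," but "surface assumptions and push back," which keeps the model probing the student's reasoning rather than just debating.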

And I learn a ton, way more than I ever would from a conventional five-paragraph essay, because I actually want to see how they argue. I want to see what happens when they get pushback. I have some students who just gave up; the machine won. And conceivably, they changed something they had truly believed for a really long time. I had students who went in with a very anti-death-penalty stance, gave in, and left the conversation pro-death-penalty.

That happened; that sort of transition happens. Then I have some students who push back, and some students who just don't read the response. You can tell that very quickly, right? With an essay, it's difficult to figure out, oh, did they do the reading?

With a chatbot, if they didn't read what the chatbot wrote, it's really quick, super obvious, that they're not actually interacting with it; they're just talking past each other. And that's where I want to see a lot of education go.

Because in teaching that way, I actually didn't change the foundational skill at all. I'm teaching the same set of skills, at least for me, that we were prioritizing with the five-paragraph essay, but in a way that is more dynamic and, I would say, actually more relevant, especially because then you can look at AI literacy.

You can say, oh, this is where a student is really struggling with it. For example, and this happens, if the chatbot lied to them, made something up, did they know it, or did it trick them? It's the very same thing that would happen in a debate: if you and I were debating something, either through malicious intent or an honest mistake, we might give the other person wrong information that serves our argument.

So it gives a sense of how students are interacting with the technology, in addition to the focus on argumentation, synthesis, and analysis. I'd love to see education move in that direction, which makes it more relevant.

Honestly, it just makes it more interesting and more fun.

[00:42:12] Mahan Tavakoli: It does. And Jason, what you shared is so relevant to education and to organizations as well. I've done this with a couple of leadership teams, and it's incredible. First of all, when we are in the same organization, a lot of times we have some of the same blind spots. And then, as people move up in an organization, others become more hesitant to push back against their ideas and their blind spots. This is an outstanding way to address that. So I love the fact that you're using this with your students and having their ideas challenged by ChatGPT or whichever LLM they're using.

But this same thing works in organizational thinking. When thinking about strategy or a priority, have it act as a cognitive partner: poke holes in your argument, ask what could go wrong. You might still choose to do the same thing, and that's great, but you have someone who doesn't mind hurting your feelings, doesn't worry about the performance review at the end of the year, and tells you exactly how it sees it.

So what a beautiful example, relevant both in education and in organizations.

[00:43:28] Jason Gulya: I think you're absolutely right. One of the real things to drive home is that doing that kind of exercise doesn't require learning a prompt framework. There's all this emphasis on learning the perfect prompt that you can use forever.

When we overemphasize that, we lose sight of the fact that these large language models are changing every day. The prompt that works today really might not work that well in a month, and that happens all the time. It also takes away from this technology's ability to just brainstorm and find assumptions;

it really can be as simple as: I work at this institution, and I think the best way to do something is this. You just type it out and then ask the AI, what are my assumptions here? That's such a powerful thing, because it allows you to get outside your own head. And I do think one of the biggest organizational hurdles, independent of where you work, is going to be this tendency to stay in our own silos.

Now, in higher ed that tendency is really strong. People are very comfortable staying in their silo, not going outside it, not talking to anyone outside it. You stay in English literature, or whatever discipline you want to name. But that same impulse is everywhere.

We want to stay where we're comfortable. But if we embrace this technology, if we use it as a way to rethink how we think and how we approach things, that's really powerful. It gets us out of our silos in a way that is low-stakes and judgment-free. You don't have to worry about saying something that's going to offend the AI, whereas you might in an in-person setting. And that's one of the best uses of it: finding a way to use the technology to get out of our own heads so that we can authentically rethink what we're doing and what we've been doing, and try to figure out the actual best way to do things.

And as I've mentioned again and again, that's not just an educational thing. That's for anyone working in an organization, and certainly anyone in a leadership role.

[00:45:42] Mahan Tavakoli: It really is powerful across the board. Now, I would also love your thoughts on how this is going to impact the jobs of the future.

Jason, one of the things that has happened over the past dozen-plus years in many communities, including where I live in the greater Washington, DC region, is a heavy emphasis on coding skills, especially in underserved communities, channeling people into coding bootcamps and everything else. Now, all of a sudden, Microsoft says about 80 percent of their coding is being done by AI.

So pretty soon, other than the very best coders who can use AI to scale their work, there will be need for very few people to code. What do you see as the kinds of skills and capabilities that colleges need to develop, and that the workforce needs to have, to thrive in this augmented future you're talking about?

[00:46:42] Jason Gulya: I think one of the things we are going to have to get increasingly comfortable with is saying "we don't know," and really embracing the fact that this technology is advancing fast. In many circles, if someone comes out and says they know exactly what this technology is going to look like in two years or five years, and this is how you future-proof, as soon as they do that, it's a giant red flag to me. To be honest, we don't know where this is going. We really don't, and it doesn't matter who you are. It doesn't matter if you're you, if you're me, if you're Sam Altman, if you're Bill Gates. We do not know what it's going to look like.

We certainly don't know its implications, what it's going to mean for how we think, how we learn, or anything like that. And if we truly embrace that, I think we can start to create a foundation for learning, because then adaptability has to be something that is taught purposefully and intentionally.

So we have to be very specific about what adaptability is and how we are encouraging our students to adapt. Obviously, part of it is just learning tools. But there are also much higher-level forms of it: what does it mean to adapt?

What does it mean when the jobs companies are hiring for are fundamentally different two years from now? We don't know what those jobs are going to be. We all thought prompt engineer was going to be the big job of the future, that everyone was going to get one and make $300,000 or $400,000.

We were all going to be prompt engineers, and that job is basically gone. It has vanished as the technology has evolved. Being an expert prompter isn't really that necessary anymore, because the technology has caught up and it's built in, so you don't have to be this perfect prompter.

It helps to know prompting, sure, but it's not going to be the foundation of your wealth. And that's going to continue to be the case. There's going to be a large emphasis on adaptability and critical thinking. Again, these are really abstract things, and if we are going to serve our students, we have to be very specific with them about what they mean.

We can all say, oh yeah, colleges are going to teach critical thinking, and a lot of that is happening. But the danger is that we actually have to nail down what that means. What does it mean to think critically? What does it mean to be skeptical, a concept related to critical thinking? Being specific and concrete, and then focusing on the how, is what allows students to get ready for work regardless of the jobs that are out there.

That's going to be key. And one of the things AI is doing is showing just how problematic college-to-job pipelines are. These have been built up for decades; the idea has always been that, in a perfect world, college graduates simply go down the pipeline and enter blank job.

And that's beautiful. But it is way too neat for an age of disruption. One of the things we're seeing with those pipelines is that they're bursting, because those jobs are no longer there. If you're spending two years in a pipeline, knowing you're going to get that job, that coding job, that software job, whatever it is, and then it vanishes, that is going to push us to focus on human skills. I don't say "soft skills" because that's a little judgy, but human skills, interpersonal connection, because we're going to continue to be humans, mostly. Being able to connect in that way is going to be huge.

So: human skills, adaptability, and critical thinking. And again, it's going to be a matter of nailing down what we actually mean by those things.

[00:50:47] Mahan Tavakoli: Beautifully put, from a couple of perspectives, Jason. Even the people who are deep in the field, anyone who has followed Sam Altman and others, say that even over the past year they have drastically changed their minds about what is possible and what is not.

And oftentimes the people researching the field are surprised by the potential and capabilities of the LLMs and of what AI can do. So there is that need for adaptability. One of the challenges I find, though, is that organizations also haven't fully grasped that idea. The college-to-workforce pipeline is broken partly because of the colleges, but organizations are also still looking for some of the old patterns of behavior, knowledge, and credentialing. The systems are built for a different era while we are accelerating through this AI age.

[00:51:49] Jason Gulya: It'll be interesting to see when and if adoption picks up.

I think it will. But there are a lot of statistics right now that are pretty sobering in terms of how many companies have actually adopted AI practices. There are good and bad reasons for that. I do think there's a great deal of skepticism about whether the technology will continue as it is right now.

There are good reasons why people come to that conclusion, especially if we think there is going to be an AI winter in the future. But I do think that colleges that are forward-thinking now and are coming up with adoption strategies for this technology are going to be light-years ahead, maybe too far ahead for others to catch up.

I don't personally believe that anyone's too late. It's still early; the technology is still in its infancy. So if anyone out there is thinking, oh, I haven't done this yet, so I'm behind: I just don't think that's true. But I do think we're going to hit that point.

[00:52:49] Jason Gulya: And next year, I would say, when adoption rates start to increase pretty quickly, actually having an AI strategy in place puts you ahead of the norm, as the hype starts to fade and businesses start to see more and more use cases. Colleges are the same way for me.

Colleges are just weird companies. They have things that are specific to them, but they still need to keep the doors open to serve students, which means they need to have a business side, regardless of all the social reasons for their existence.

They are still businesses at their very core, in many ways. Because of that, they too should be thinking about what strategy they can put in place that allows them to take advantage of AI without exposing themselves. That's going to be the balance. Because what happens when an AI model goes kaput?

What does that mean? You build your entire business off of blank, and it goes away; that's going to happen. I think something like ChatGPT, or even Gemini, will continue to be around. But if you're building off of something else, maybe not.

So it's about finding a way to strategize, which means embracing the technology but also hedging your bets a little, because you don't want: all right, they decided to do this, so my business is gone now. You don't want that to happen. It's going to be a matter of finding that middle ground once adoption rates start to increase.

And I do think that's going to happen. It's just going to be a matter of when.

[00:54:19] Mahan Tavakoli: I agree with you, and it is happening fast. I believe we are going to go through, as Azeem Azhar puts it beautifully, an exponential curve in the use and impact of AI, not just generative AI. So when the pace of change starts picking up, it will go really quickly.

Jason, my audience includes presidents and CEOs of universities, college administrators, and presidents and CEOs of organizations from nonprofits to for-profits, down to directors and managers. As you give advice to them on what to do: as you mentioned, there are lots of people putting out content about the hundred prompts to use, which is absolutely bogus.

Every day, I get dozens of emails about different new AI tools being rolled out. So I can imagine people saying, oh my God, I can't keep up with all of this. What would be your advice and recommendation? Where should they go, what should they look at, and what should they do to keep up without feeling overwhelmed?

[00:55:34] Jason Gulya: Tip number one: do not look at those lists. Just ignore them; pretend they don't exist. I do the same thing. I go online, look at my newsfeeds, and see the thousand-plus AI programs that have been released in the last few months. For anyone listening to the audio version: I just rolled my eyes.

So yeah, I would say, first of all, do not look at those, for multiple reasons. One, a lot of them will not persist. Certainly a year ago, if you were launching an AI product, you didn't even need to have a product. People were going into business meetings without a product, just a basic sketch, and coming away with millions of dollars.

That was the VC culture at the time: you had a basic idea and you got funded. That is eventually going to end, as more and more VCs come in and say, show us how you made money, and a lot of them will not be able to do that.

So ignore those lists; that's step number one. Then jump past the prompt frameworks. There are a lot of them out there, and they all have cool acronyms, or what they think are cool acronyms. Jump past those. Next, choose one or two programs. ChatGPT is a go-to just because it's so broadly capable.

It really is a Swiss Army knife; you can do with it what you want. Just pick one or two and spend some time with them. Research suggests that when someone starts using an AI program, it takes about 10 hours of use to start seeing use cases. We actually just need to give ourselves time to play, to sandbox and experiment, even before we get to the frameworks, even before we get to the lists of AI programs.

The virtue of this is getting past our knee-jerk reactions. The hype around AI has pulled us in so many different directions. When we think of AI, some of us think it's going to save the world, end global warming, end poverty, and so on. Others think it is doomsday.

It is the apocalypse, the beginning of the end of the world. And yes, we can think about that, but before we get caught up in those conversations, we just need to use it, and use it in a way that is open. Even if you decide it's not for you, even if at the end of the 10 hours you think, okay, it's useful for this and useful for that.

And it doesn't really help me: even if you get to that point, get to the 10 hours, because you'll start to see certain ideas and use cases pop up. My tip when people do that is: be selective. The goal of the 10-hour rule is to get past the knee-jerk reactions and use it enough that AI stops feeling like magic.

When we first start using it, asking it questions and getting whatever we get back, there's something astounding about it, especially for a first-time user. Then you use the technology enough that it no longer feels like magic: it starts to feel mechanical, completely driven by your input, and you realize you can trick it and it can trick you.

Once you start to get that, and it happens after even just a few hours, then you're ready. Then you're ready to start thinking about use cases. You don't want it to feel like magic; if it feels like magic, you're not ready to strategize. So, first of all, just get to that point, and then build from there.

And again, it can be just one program. It doesn't have to be five programs or ten programs. It is much better to know how to use one program well than to use ten programs not so well.

[00:59:30] Mahan Tavakoli: I couldn't agree with you more, Jason. Going deeper to understand the capabilities and the limitations is a lot more powerful than chasing all the different tools and all the different prompts. The other thing that is really important is to find sources of signal rather than noise. I love Ajay Agrawal's books, I love Paul Daugherty's books, and I love Paul Roetzer's and Conor Grennan's work.

And you, Jason, put out outstanding content with real use cases in education and for organizations as well, which is why I really enjoy reading what you share. So for the audience to follow your work and your thoughts, Jason, where would you send them?

[01:00:27] Jason Gulya: The easiest place is LinkedIn. I'm pretty active there.

So if anyone listening wants to reach out, ask questions, or think through the technology more, please feel free to follow me, connect with me, or send me a DM. I try to stay on top of them, and I'm more than happy to help out. We're all trying to figure things out as we go along, but LinkedIn is probably where I'm easiest to find.

[01:00:52] Mahan Tavakoli: I love your approach, I love your thinking, and I really appreciate you sharing your insights with the Partnering Leadership community. Thank you so much, Jason.

[01:01:02] Jason Gulya: Thank you so much. It was a pleasure.