Ranked in the top 1% of all podcasts globally!
March 28, 2023

249 AI Technology to Support Social Changemakers at the Speed and Scale of Change with Emily Yu, AI Priori Founder & CEO | Greater Washington DC DMV Changemaker

In this Partnering Leadership conversation, Mahan Tavakoli speaks with Emily Yu, Founder & CEO of AI Priori, a technology startup that offers social changemakers a compelling new way to understand and address the world's most pressing challenges using AI. Emily Yu talks about how her background made her passionate about social change and about using technology to better understand and address social needs. She also shares her vision for AI Priori and the positive potential of AI applications in the social sector to surface insights and drive action for greater impact.

Some Highlights:

- From Georgetown School of Foreign Service to pursuit of culinary arts

- Emily Yu on working at Ogilvy on the Heart Truth Campaign

- The Case Foundation--at the intersection of technology & innovation for social good

- The importance of taking calculated risks to create more robust programs to generate ideas for social good

- Emily Yu on the benefits of entrepreneurial approaches in philanthropy

- The value of taking risks with intention in order to push the envelope with programs

- The challenges that philanthropy faces in addressing inequity

- Emily Yu on the dynamics of trust-based philanthropy

- The potential for philanthropy to scale its impact through AI-driven social change 

- AI Priori's vision and potential for foundations and program officers

- AI and bias in social sector funding 

- How to leverage AI to tackle social challenges

Mentioned:

Partnering Leadership conversation with Jonah Berger on Magic Words

Partnering Leadership conversation with Jean Case on Be Fearless: 5 Principles for a Life of Breakthroughs and Purpose

Partnering Leadership episode Augmented Future of Work with Artificial Intelligence, Going Beyond the AI Hype to Visualizing & Embracing Possibilities with 10 AI Tools

Human + Machine: Reimagining Work in the Age of AI by Paul R. Daugherty and H. James Wilson on Amazon

Andrew Ng Website and AI Resources

Robert Wood Johnson Foundation

Connect with Emily Yu:

AI Priori Website 

Emily Yu on LinkedIn 

Emily Yu on Twitter

Connect with Mahan Tavakoli:

Mahan Tavakoli Website

Mahan Tavakoli on LinkedIn

Partnering Leadership Website


Transcript

***DISCLAIMER: Please note that the following AI-generated transcript may not be 100% accurate and could contain misspellings or errors.***

[00:00:00] Mahan Tavakoli: Emily, welcome to Partnering Leadership. I'm thrilled to have you in this conversation with me.

[00:00:04] Emily Yu: Thank you, Mahan. I'm glad to be here.

[00:00:07] Mahan Tavakoli: Emily, I can't wait to talk about AI Priori, but before we get to that, we'd love to know whereabouts you grew up and how your upbringing has impacted the kind of person you've become.

[00:00:18] Emily Yu: I was lucky enough to be born and grow up in the DC area, just right outside of DC in Bethesda and Rockville, Maryland. My parents immigrated here from Taiwan, and my sisters and I (I'm the middle child) were all born here. This area is so rich with culture, museums, and educational opportunities.

The opportunity to travel around, all those things, I think, shaped and influenced what I'm doing now, which is really trying to figure out a way to make communities better and healthier.

[00:00:49] Mahan Tavakoli: What were the communities that you grew up in?

[00:00:52] Emily Yu: That's such a great and thoughtful question, because DC and the surrounding areas are in many ways so multicultural. I will say, though, my parents did send us to Sunday Chinese school, which was a language-based school, right?

So we went every Sunday to learn Chinese, and my friends and I had a number of different interest areas, everything from music to the arts. I should add, surprisingly, not really technology per se, or mathematics in the sense that led me to a connection to AI today.

But a lot of those rich cultural experiences, and getting to experience the cultures of my friends, whether that be Hispanic or Asian, it was just a wonderful and rich upbringing.

[00:01:30] Mahan Tavakoli: You ended up going to Georgetown School of Foreign Service. Did you want to serve in the foreign service?

[00:01:37] Emily Yu: I did, yes. I proudly started Colonel Zadok Magruder High School's first Model United Nations club. All of that led me to Georgetown to study international relations. Along that path, this exposure to the idea of collaboration and partnership and diplomacy, learning how to communicate across cultures, that really was what I took away.

I think I remember hearing that you as well went to Georgetown for school. Yes, of course. Yes, indeed. So this Jesuit spirit, this in-service-to-others approach to learning and to our profession, that really is something that I think anybody can benefit from, and it really helps you go further because you're in partnership with others.

[00:02:24] Mahan Tavakoli: That desire for service is actually one of the values I got from Georgetown, along with a big emphasis on ethics. But you had mentioned that you also wanted to go to culinary school, so it sounds like there were conflicting passions, Emily.

[00:02:42] Emily Yu: This is true. After Georgetown, and after this wonderful experience getting to learn more about myself and the world around me, I had this real drive, or passion if you will, for the culinary arts, and thought that I would pivot and pursue that.

A little bit of a funny story you might appreciate: after college, I went to New York City and lived there for a while, and how could you not get into food when you're in New York City? I had not had many jobs, just work at the mall and other retail jobs like that, but never in kitchens or restaurants. So I did what I knew how to do, and I sent my resume to the top 50 best restaurants in New York, as dubbed by whatever magazine. And I actually got back some responses. I said, hey, I'm willing to work for free. I just want to learn. I don't have restaurant experience, but man, I'm a hard worker, and I would so appreciate the opportunity. And a few restaurateurs invited me into their kitchens and their back offices, just to learn.

So yeah, I put in quite a few months in the kitchen after my nine-to-five job and learned how to make a few interesting things, served a few dishes, but then came to the realization that it is a very demanding and rigorous job that I will forever love and appreciate, but it's not quite the right fit for me.

So I was very grateful for that experience.

[00:04:03] Mahan Tavakoli: Now you appreciate the food you're served at restaurants differently, I imagine. Exactly. So Emily, you spent some time at Ogilvy and worked on something that's also close to my heart, and that's heart disease that impacts women.

[00:04:19] Emily Yu: Yeah, the Heart Truth Campaign. This was created years ago by the National Heart, Lung, and Blood Institute, part of NIH, the National Institutes of Health. And it was really to raise awareness that heart disease is the number one killer of women.

When you consider overall age, demographics, all across all other factors, it's the number one killer of women. And so I was very fortunate to get to work at Ogilvy Public Relations on this campaign, which had everything from corporate partnerships onward. So we were, for example, putting the Heart Truth logo on all the boxes of Cheerios to raise awareness, right?

It was this sort of awareness-building effort to make sure that women could learn their numbers when it came to heart disease, and that they knew what to do and how to take action. And really just to make sure that, as you mentioned, it's not just a disease that affects men, but an equal-opportunity chronic disease, and something that women can do something about to improve their health and their wellbeing.

[00:05:22] Mahan Tavakoli: It is really important. I know you like puns. Yes. So pardon the pun, but your heart is in service, which is in part the reason you ended up at the Case Foundation.

Would love to know, with your heart being in the right place and that desire for impact, what lessons did you learn from the Case Foundation? Because I think there is a little bit of a broken system with respect to the way philanthropy interacts with the community.

[00:05:54] Emily Yu: First off, I do have to give you kudos on the pun. That was well done.

[00:05:58] Mahan Tavakoli: Emily, you said you like really silly puns. This episode might end up being full of silly puns. You gave me permission. Good.

[00:06:05] Emily Yu: 100%. 100%. We should have a little counter on the side.

Every time one of us says one, it'll go off and go, ding. You know, the Case Foundation. Oh, I have so many good things to say about both Jean and Steve Case, who founded the foundation, and Jean, who is the head of the foundation. I think they've now transitioned into the Case Impact Network. Through that experience, and just to connect the dots:

When I was at Ogilvy, we were managing and creating these programs that were really meant to drive behavior change and to raise awareness. A lot of those core elements translated into what the Case Foundation was trying to do, which was to work at the intersection of technology and innovation for social good.

So everything from civic engagement to, again, tech for social good, building social media platforms and opportunities, and, at the time, because this was over 10 years ago, really understanding what the millennial generation was shaping up to do in terms of their unique approach and views on social change, which were different than previous generations'.

And of course impact investing, and a number of other programs and projects. So, lessons learned? Wow, there are a lot. I had been in the social sector for more than a decade at that time. And I will say, working with the Case Foundation in particular, there was this idea that you could apply entrepreneurial approaches to social change. Not that it was necessarily revolutionary in and of itself, but the way in which they were harnessing technology, like social media platforms, like apps, like other online engagement opportunities, is not necessarily a surprise given Steve Case's background with AOL and Jean's background.

So to have that come together, and then have this ability to leverage Jean and Steve's connections and presence across both the business and social sectors, to bring together and harness not only the ideas that these two sectors could generate, but then have the opportunity to implement them.

So when I say entrepreneurial approaches, here's one of the stories that I will never forget. We were in just an annual program review meeting, as any organization worth its salt will do. We were recapping programs we had implemented, looking ahead to future programs we might work on, and as a team discussing the pros and cons.

We had implemented a red, yellow, green light sort of designation. Green light meaning this program is great, it's running really smoothly. Yellow meaning caution, there might be something we should keep an eye on. And red meaning maybe this program didn't pan out the way we thought it would or should.

We were going through it as a team. We were working so hard and showing Jean the programs, and everything was dark green or light green (we had designated shades, right?). But we were there, and then we threw in a yellow or two, to be honest with ourselves. And Jean, in her infinite wisdom, stops us about three quarters of the way through and says, wait, team, where are the reds?

I was shocked. I and the team thought we needed to stay away from reds. Red was a failure. Red was bad. But Jean very quickly said to us, look, we're an entrepreneurially inspired organization, and if we're not failing at all, we're not pushing the envelope far enough, in a calculated (to be clear), thoughtful, and strategic way. And it just flipped on its head this idea of fail fast: learn from what isn't working so that you can take the risks and make better, stronger programs. And to your point, coming full circle, if philanthropy, which is resourced, can't do that and take those calculated and strategic risks for social good, who can?

And so that was one of those professionally defining moments that I will never forget, and I'll forever be thankful to Jean for it.

[00:10:02] Mahan Tavakoli: Emily, that is such an outstanding point. In the years, now decades, that I've served on various organizations' boards, including as board chair of Leadership Greater Washington, one of the things I would always get frustrated by is the desire to have measures that come up with greens all across the board. There is something wrong if all we are measuring is giving us greens on the dashboard. And to the point that Jean made and you just emphasized, it's experimentation with intention.

Sometimes those experiments fail and you learn from them. So it's not poor execution, it's experimentation, and those intentional experiments sometimes lead to yellows and oftentimes lead to reds, and allow for a much healthier conversation. So if there is one point I would urge all organizations, especially nonprofits, to think about, it is this: when they are presenting to their board, or when the board is sitting there, if they're seeing a dashboard that is all green, there is something wrong with the way they are thinking about their business or their service and their impact.

[00:11:31] Emily Yu: Couldn't agree more. And that's great advice from a board perspective too. I think sometimes within the nonprofit sector, and even in foundations, we need to hear that. We need to know that it is okay, and that it is a norm we should all aspire to.

Because if we are not out there pushing those limits, in a, again, calculated and strategic way, how can we ever expect to create that sort of transformative change? Because something has to change in that process. So having all those greens isn't necessarily a good thing all the time.

[00:12:01] Mahan Tavakoli: One of the concerns that is sometimes raised, as Anand Giridharadas talks about, is that the philanthropists and the organizations with resources shouldn't be the ones that guide what is important for the community. So there are concerns around that.

And then there are also concerns around, we talked about how Ogilvy does great research and thought leadership on the behavioral impact of choices. For example, a simple one: I've had some behavioral scientists on the podcast, including Jonah Berger, who talk about the fact that if you put salad first in an all-you-can-eat buffet, people end up taking more salad, and if you put the dessert first, they end up taking more dessert. So therefore, by designing the food buffet, you determine to a certain extent what people end up picking. And there are concerns that I hear from the community and from people saying, so are we letting some foundations, some philanthropists, and people high on up decide what should be the priorities of people in the community? I would love to get your thoughts with respect to how we should think about and approach that.

[00:13:29] Emily Yu: Ooh, do you have all day for this conversation? You really hit the nail on the head. I think this is a big question that philanthropy as a sector is going to have to reckon with, this year and moving forward in particular, because of what we saw unfold during the pandemic: this idea that inequity exists, and how that plays out in how philanthropy at its core operates and why it exists.

I think there is a conflict and a tension that needs to be resolved in order for the true mission of philanthropic organizations to be fulfilled. And I'm of course overgeneralizing. In general, philanthropy is there to fill gaps, not only those left by government but also by the private sector and communities, to fill those gaps for social needs and other advances that will improve communities. But then, you hit the nail on the head: what does "improve communities" mean, and who's defining that? I'll say, and I would never want to be so prescriptive as to how we should view that writ large, but for myself, I've had to, over the last couple of years in particular, hold myself accountable.

I have been in philanthropy and been an executive director of a resource-generating and giving organization, as well as a grantee, because I ran a funding collaborative, so I was also receiving funds from others. I saw this cycle play out from, quote unquote, both sides, right? The grantee as well as the funder.

And I will say there are 100% power dynamics and imbalances at play that negatively affect the people that I think foundations see themselves helping. Now, the flip side to that, and I would love to share a story about this to bring it home, but the flip side to that is philanthropy does so much, so much, to quote unquote fill those gaps and to support areas that are benefiting from these funds or this knowledge or this awareness building.

And so while there are, I think, opportunities and challenges within philanthropy writ large, I take a step back and see them as underlying issues with the system that philanthropy operates in. So just to bring it home, because I hate speaking in these abstract, sort of esoteric terms:

When I was at BUILD, the BUILD Health Challenge, a funding collaborative and national awards program, we were working with over 55 communities across the US. We had an opportunity to bring awardees, that is, community-based organizations, public health departments, hospitals, and residents, together with philanthropic foundation program officers, the funders, and host these conversations.

And it would get really quite tense, because these communities were speaking truth to power and saying things like: you ask for these grant reports, but we never necessarily see what you're doing with the information you're learning, right? So why are you collecting this data when it costs us X amount of time and resources to put it together?

To what end? Or: you have us fill out these applications, and I'm speaking broadly here because this was just a forum for this sort of conversation, we fill out these incredibly intensive applications when maybe several hundred will apply and maybe there are five awards, and we don't get selected. So then how are we ever supposed to learn from and, you know, improve what we're doing in certain respects?

Or: we have communities that have to do things like site visits. We actually came to a realization during the pandemic, when we switched over to virtual site visits instead of in-person ones, which is where usually the community awardee hosts the funder in person, on site, right, so they can see the project with their own eyes. Switching to virtual during the pandemic, a lot of the awardees said, hey, that was a lot better for us. It might not be better for you, the funder, because you don't get to travel and see these things firsthand, right? But what really is getting lost in translation from seeing it virtually as opposed to in person, when the net for us is that we don't have to bend over backwards to host you and do this whole meeting and stop everything else?

So those are just three really brief, but I think very applicable, examples of this power dynamic and imbalance that we referenced earlier, and of how I think there is a reckoning coming, which I think is a good thing.

Philanthropy is leaning more into what a lot of people call trust-based philanthropy, having the awardee or the community member be able to decide, facilitate, or inform the policies and practices of the funder. And a lot of foundations are moving toward a community-centered model, where they engage with the community partner to decide the parameters of the grant agreement, right?

To make the funding open-ended, so now they don't have to submit this 25-line-item budget, right, down to the penny. But rather, let's just talk about the areas and the groupings, and let's give you the freedom, because you know what your community needs, how it needs it, and what will work to address the situation.

And so I'm hopeful that is the trajectory philanthropy will move toward.

[00:18:46] Mahan Tavakoli: That is a great way to think about it, and I am really thankful that someone like you, who has this deep experience in different aspects of philanthropy, is also passionate enough about artificial intelligence's potential to make the transition to launch AI Priori. So why did you decide to do that?

[00:19:14] Emily Yu: Ooh, this is a big life pivot, so I'll try to distill all this into something that I hope could be applicable to others who might be interested in transitioning as well. For me, this really does go back to the BUILD Health Challenge, actually, and my work in philanthropy.

So, as a program officer (and I'll explain what AI Priori is, but for context): from those awardees, those leading innovators we were working with, over the course of my six years with BUILD, we received over 500 grant reports, progress updates from the awardees.

What was working, what wasn't working, lessons learned, what the budgets were. They were detailing and articulating their secret sauce when it comes to social change. So what I realized was, I was sitting on a data goldmine. It really was one. And that was just this one portfolio, let alone the case studies and journal articles and other things that had been published along the way.

But we never really went back to look at the data at scale. Of course, we'd review each cohort, but over the course of multiple cohorts and multiple years, team members transitioned. Somebody who reviewed something might have left; our evaluator team members might have transitioned. So no one person was actually reading, reviewing, and holding this knowledge. And then, as ED, as the executive director, I was like, if I'm not prioritizing this and asking my team to prioritize this, then who is? Who's responsible for this? And what a disservice, and what a waste, to be honest, of all this information that we're sitting on.

And I thought, there's got to be a better way. So I looked at many different things, because there are ways, right? You could talk to a traditional evaluator and have them manually code and review and tag everything for you. I could have built my own sort of computer program to review and capture all this, but nothing was really sustainable. It would cost far too much to maintain, and I didn't have the expertise in-house. I am not a coder by nature, so I did not know how to build something; I would have had to get a consultant. Long story short, I had been looking at AI, and I so share your passion, Mahan, for artificial intelligence and machine learning, as an interest area that I think will continue to grow and progress and change the workforce.

So as I was looking more into it, I realized the technology exists to do what I want to do, which is streamline knowledge management. Because at the end of the day, that's it: my problem at its core was information overload. So how do I deal with that? Knowledge management. Is there software out there that does that?

100% yes. But as you alluded to, the social sector does not use it. It's not the norm, right? It's not expected, and it's not as if it's something that comes ready-made for our needs in the social sector. What's working as a program? What's not? Where are the opportunities for technical assistance and capacity building?

These are questions that we need answered in the social sector, but for people in other industries who are using this technology, that's not really what they need to train the software to do, so it doesn't exist. So I thought, you know what, if the technology is there but doesn't exist for us, maybe I can approach this not through a technology lens, because I don't have the coding skills to do this, but by using my knowledge of the issue areas and the data.

And so when I matched the two together and found really talented people who do know the other side of the coin, we could marry our know-how to develop something that eventually became AI Priori. In short, it's software trained specifically for changemakers in the social sector to look for specific concepts, themes, or answers to questions like the ones we talked about, really quickly and really efficiently.

And it will give you back the most relevant excerpts from your data or your content so that you can use it and come up with the insights you need. I'm just really excited about that and the future potential that this holds.

[00:23:20] Mahan Tavakoli: That is really exciting. And I wanted to underline a couple of things you mentioned, Emily, before understanding AI Priori a little bit better. First of all, you said you don't have a technology background. I have talked to and interacted with a lot of CEOs in the AI space. Many of them don't have a technology background.

They have an understanding of a business need. And the second part is something you mentioned that people familiar with AI consistently talk about: data as the new oil, data as what is valuable. So you have the business understanding of a field, in this instance the social sector, that has a lot of data that is not really mined to better serve the community, right? It's not mining for the purpose of finding out who should buy what next; there are organizations that do an outstanding job of that. It's catching up on the data and AI fronts in areas that can serve the community. So to that end, what types of organizations would be able to access and benefit from AI Priori?

[00:24:46] Emily Yu: When it comes to the type of organization that could benefit from software like AI Priori, my thought is it's really concentric circles, if you want to think about it that way. I was given some good advice: you need to narrow down your audience and who you're training this tool for. Make it really good for one audience and then grow from there. So I'm starting with what I know, which is foundations and program officers at foundations. And the beauty in that is, I was looking it up just for knowledge: in the US there are about 120,000 foundations, ranging from the very, very small one-person shop all the way to the largest organizations.

And they grant about 90 billion dollars every year in the US alone. And so I thought, in terms of that mission that we talked about earlier, if there was an opportunity to help program officers make more evidence-based and effective decisions for greater impact, could that potentially help program officers make more strategic use of that 90 billion that goes into communities every year?

So I'm looking at the small, mid-size, and larger foundations right now, to help augment what they do with evaluation and learning, and to help them maximize what they're learning with and alongside their community partners. That's really the use case. But I actually was approached by two other groups.

One is a foundation, I should say, but for a different purpose. They run a really large survey and have over 15,000 responses, and one of the questions is an open-ended response box. Having somebody manually review that, they just scratched the surface. And they said to me, could this tool somehow help us parse and review not only sentiment, but also different categories within what people responded, without us having to hire somebody to go through all 15,000 responses?

The answer, we think, is yes. So we're working on that. And to make this more tangible, a university reached out, and they are interested in seeing what AI Priori could do with literature reviews. It was an interesting concept, because I thought, you know, a literature review is so common and needed, and it's very helpful in the strategy process, but once you do it, it's locked.

The learnings are locked. What you found sits in a report, and it's really hard to build off of that. If somebody wants to repeat it or scale it, they have to do it again. So I thought, could that be a really interesting use case for AI? To have something that's dynamic and can continue to learn and get better at finding the specific things you're asking for, which is what machine learning is.

So those are two other sort of extensions that we're exploring in addition to just reviewing somebody's content at scale.

[00:27:34] Mahan Tavakoli: So as you're doing this, Emily, one of the things that I know you are very familiar with, and that some of the AI experts I've had on the podcast keep repeating as well, is that AI is in great part dependent on the data, and biased data results in biased outcomes, to a great extent. Whether it is in determining who's the right fit for a job: if the data that has gone into that subset has been primarily men who have played a certain sport, then the profile in the future will be that. With respect to the social sector, I'm familiar with certain organizations, for example, because the funders value disadvantaged or underserved students that make it to college.

Part of what the organizations would do is pick the students who are more likely to be able to make it to college, because that ends up looking much better when they do their funding requests. So they show that 99% of the students who go through their program, who are underserved, end up making it to college.

The piece that is left out is: wait, we might not have picked the ones that would have put our stats at risk. So what I wonder is, if we are building on this data, could we then be biasing future understanding and future selections? Because some of the data that some of the nonprofits are using is biased by selection, dependent on what they thought would get funded.

[00:29:23] Emily Yu: 100%, yes. I think your description and the examples are very powerful and poignant. I think whenever people are involved, there's going to be bias, even if it's unconscious. And I'll say, too, I've listened to several of your podcasts, I'm a big fan, so I know the stories you're referencing.

I think those leaders that you interviewed are right to be wary of, and to caution us all about, the bias and the influence it might have in AI, because the ramifications, as you mentioned, could relate to funding, to jobs, to a number of other life-changing events.

So for how I'm looking at it, and what I would say to foundations, having been in that space for so long: artificial intelligence and machine learning are simply tools. At the end of the day, people are training them, people are creating the data they use to learn from, and people are interpreting the response, or whatever is generated from the AI.

And so at each of those steps and stages, we have not only an opportunity but, I think, a responsibility to make sure that safeguards are in place, as much as they can be, because there is still this sort of unknown factor that I am a little bit wary of in terms of the transparency with which the AI is learning as well as producing.

And so I think there are a number of regulations, both in the US and in Europe and other countries and areas, that I think will help. But at the end of the day, we as consumers and individuals, and especially foundations, can't be complacent and have somebody else worry about it and somebody else deal with it.

So if I could bring this full circle: foundations have resources, clout, and power, as we talked about earlier, and I think they need to play an active role in supporting community-centered data. I use "communities" generally, but for example, if you ran a program on food insecurity, that means working with your partners and grantees on food insecurity issues to make sure that the data being produced, collected, and used to train AI models is data the community has ownership of, a say in, and some power to decide what happens to.

I mentioned almost casually this data set that I was sitting on. I will say foundations, for the most part, have a high bar for protecting grantee data and using it in ways that honor and support the communities that generated it.

They're not out there selling the data, for example. And so with AI Priori, and others that I hope are doing this, instead of just using a web crawler and going out there to pull information for AI purposes, we're working with issue-area experts to generate data that will then be used for training.

And while that might not eliminate bias entirely, I think it at least gives us some transparency into what was collected and how it was used. So those are some of the things I'm thinking about right now when it comes to this issue.

[00:32:38] Mahan Tavakoli: The bias is something to be aware of. However, I want to highlight the point that you made: there is tremendous value, because there is a lot of data that is not being looked at and not being aggregated, and intelligence can be drawn from it. Now, when the intelligence is drawn with that transparency, you can at times determine potential biases and potential flaws in the data.

But that's no reason or excuse for not trying to understand and use the data effectively. I totally agree with you on that. So if we talk five years from now, and you have been able to have a significant impact with AI Priori, what will that future look like?

[00:33:26] Emily Yu: Ooh, that future would look like what we tend to call systems change, right? This idea that we're altering and influencing norms and policies, both legislative and organizational, and changing how funding streams work for communities. These challenges that society has, like food insecurity, like healthy housing, like maternal and child health issues, they're pressing.

They need attention, but the scale is sometimes so daunting, and so entrenched in these norms and these power imbalances we talked about, that it's really hard to see how you attack them. What's the on-ramp to these issues, right? No one issue, organization, or sector can tackle them alone.

So how do you start, where do you start, unraveling this knotted ball? And I think AI Priori, and AI in general, is going to offer the social sector a tool set we can use to think at the same speed and scale as these challenges, the same speed and scale that they evolve at.

Because right now, as one program officer, one foundation, or one nonprofit leader or board member, trying to wrap your head around an issue and then also a solution is really daunting. And so with AI, we now have the opportunity to process data, information, stories, things that might be a little more difficult to parse, and actually harness them to help us think through what the next idea is, what the next opportunity for change is, and figure out how to leverage those so we can come up with solutions that have impact.

I think that sort of scale has been one of the missing pieces to date, and so hopefully AI will bring us the speed and scale that we need for social change.

[00:35:27] Mahan Tavakoli: It definitely has that potential. Now, in doing that, Emily, I'm also thinking about the fact that a lot of governments, most especially local governments, have a lot of data with respect to some of the communities that are being served, or are in the greatest need of being served. Are you thinking about accessing and being able to tap into that data?

[00:35:54] Emily Yu: You're definitely onto something there. I think the federal government, as well as local governments, right? They've just been amassing data, and it's so much data that they don't even necessarily know what they're saving it for, but they know there could be a use down the road, so they're holding onto it.

And for these governments and for philanthropy, I think the use case is similar in the sense that the data is a sort of currency. I think Andrew Ng mentioned that AI and data together are this new form of electricity, right? For our generation, and that transformative power is yet to unfold fully.

I think data for the sake of data is not necessarily the best way to think about it, right? That's actually one of the criticisms we get in philanthropy: why are you collecting this, why are you asking me for this? And we don't always have the best answer. But I think the foundations and the governments that will really be able to harness their data and use it well are the ones that are going to be able to make that linkage and say: this data can help me answer this question.

And so how can I use it in a way that is constructive? Data for the sake of data isn't going to help anybody; in fact, it will probably make the problem worse, because we won't be able to see the forest for the trees. But with what we're talking about here, communities play a role in how their governments, foundations, and other partners use their data.

I think that's really where the next big opportunity lies. There's this movement, I think, unfolding now about who owns what data. Is it communities? Do we own our own data? And the answer is: not really, you give up a lot of your data every day. And so there's this balance, to your point, between government data, foundations, and communities, right?

As what we're both talking about here starts to unfold, community ownership of data and who uses it and how, I think if we can get clarity around that, or somebody figures out clarity around that, it will be a really big opportunity.

[00:37:55] Mahan Tavakoli: It is a big opportunity, and part of what excites me, Emily, is this: whether it's on social media, when people scroll on Facebook, Facebook has pretty robust artificial intelligence trying to determine which posts you look at more. Even if you don't click, if you spend a tiny bit more time on one versus another, it starts developing a profile of the individual user. Or Amazon's artificial intelligence, which is becoming so good that some people joke a few years from now it will make sense for them to ship you a box with 10 of the things they project you will want over the coming couple of weeks. Even if you send a few back, it will still be worthwhile; they have the data to know the individuals they're interacting with really well. So what excites me about what you are doing is that these companies are using data for their needs, whether it's shopping or social media.

This is thinking about what the relevant data is and how we can effectively use it to help elevate our communities and our humanity. That's AI for good, which is so energizing and exciting for me. Now, part of what I would love your thoughts on, Emily, is that even many of the business leaders I talk to, when I say AI, they go cross-eyed.

They haven't even reflected on the fact that there is AI in the applicant tracking systems they're using, or that when they turn on Waze and go from one place to another, there's AI in there. With nonprofits, a lot of times the executives are so busy doing what they're doing, and in philanthropy it's the same thing, that many of them have not reflected on this.

So what would you recommend as resources that business executives and nonprofit executives can go to, to try to understand the applications of AI, to know what questions they should ask, and how they should think about AI Priori?

[00:40:19] Emily Yu: Sure. I'm looking at my bookshelf on this side, so I have a few books I'll pull out in a second. I will say, it was funny, because I was just having lunch and catching up with some of my former foundation colleagues, and ChatGPT had just unveiled its free and open tool.

And so they were raving about it and telling me how they were playing with it with different prompts; the way their eyes just lit up. I think playing around with some of the free tools that are available, like ChatGPT, is a good start. You don't necessarily have to understand what is happening at this stage of the information fact-finding process.

But I think just getting to play with it, giving it some prompts and seeing what it gives you back, is valuable. I know you've encouraged people to play around with a number of different tools that you've recommended, so I think that exposure, and revisiting those 10 tools you mentioned previously on a podcast, helps.

Beyond ChatGPT, there are a number of others; just Google "free AI tools." AI is also embedded in a lot of things. You mentioned Waze; Alexa, Facebook, it's in a lot of things right now. So even just having a cursory knowledge of what you're using helps. Even QuickBooks nowadays, which a lot of smaller organizations use, embeds AI. And so just having awareness of your interactions and how your data is being used is one place to start.

And I will say, I'll just pull this out, actually. I had gotten this from a business school teacher. It's called Human + Machine: Reimagining Work in the Age of AI, by Paul Daugherty and H. James Wilson.

That's a really great book. It was very accessible, and in business school I thought it was really applicable. I also came across another platform, Building H, that I'll give a shout-out to if people are interested. A former Robert Wood Johnson Foundation Chief Technology Officer went on to start working in partnership with a former, I believe, magazine editor, and together they're looking at how technology can inform a culture of health. They have a number of different projects that they're working on, and they freely share what they're learning and collaborating on.

So that's been really great. And the third and final one that I'll offer up: I've been following Andrew Ng for a while now, and he leads DeepLearning.AI. That platform has a number of great resources, and because he is so invested in online learning, there are a number of different platforms through which you can access free courses that are very easy and very accessible.

And they cover everything from data curation and how AI works to natural language processing and other topics like that. So those three give a good variety, something for everyone, whether you like online courses, you just want an article, or you want something more tactile, like a book.

[00:43:04] Mahan Tavakoli: Those are outstanding recommendations, and we will link to them in the show notes. And Emily, how can the audience find out more about you and about AI Priori?

[00:43:15] Emily Yu: Oh, thank you for offering that. AI Priori is live on the web at aipriori.com, that's P-R-I-O-R-I, aipriori.com. And I would love to connect over LinkedIn; my LinkedIn page is linked on the website as well.

But those are two great places to catch up. I'm also at DCxChange, and just always up for a conversation about social good, about AI, and about community-centered processes.

[00:43:41] Mahan Tavakoli: I'm really excited, both with respect to what you are doing, Emily, and the background that you have, and I'm now looking for the impact that you want to make through AI Priori. I also appreciate you helping me better understand the potential of AI for social good. So, knowing that you'd mentioned you like puns.

I asked ChatGPT for a pun on AI and social impact. This is what ChatGPT said, and if it's not funny, it's not my fault, it's ChatGPT's fault. ChatGPT said: why did the robot join the nonprofit organization? Because it wanted to make some byte-sized contributions to society.

So that's ChatGPT's pun.

[00:44:34] Emily Yu: Bravo. Bravo, ChatGPT.

[00:44:39] Mahan Tavakoli: It is incredible. And like the folks you mentioned who have been playing around with it, I've been playing around with it, and I have been encouraging my audience to do the same thing. Now, GPT-4 is a lot more powerful than GPT-3. GPT-3, for example, passed the bar at the 10th percentile; GPT-4 is at the 90th percentile. So it's just tremendously more powerful in so many regards. It can help us understand the potential of AI, and the potential also for good. Because I believe if we have more people like you, who have positive intentions and channel AI in that direction, we will get more positive out of AI.

Thank you so much for joining this conversation, Emily.

[00:45:30] Emily Yu: Oh, thank you for having me, Mahan. It's been inspiring. You're fantastic, and I love this podcast.