Positive Leadership
Empowering people with AI (with Kevin Scott)
As Microsoft's chief technology officer, JP's guest on this week's episode of the #PositiveLeadership podcast thinks deeply about how technology can impact people's lives and benefit everyone.
JP talks to Kevin Scott, host of the Behind the Tech podcast and author of Reprogramming the American Dream, who takes us on a journey from rural Virginia to changing the world with AI.
Subscribe now to JP's free monthly newsletter "Positive Leadership and You" on LinkedIn to transform your positive impact today: https://www.linkedin.com/newsletters/positive-leadership-you-6970390170017669121/
JEAN-PHILIPPE COURTOIS: Hello and welcome to Positive Leadership, the podcast that helps you grow as an individual, as a leader, and as a global citizen. I’m Jean-Philippe Courtois.
KEVIN SCOTT: The question we have to ask ourselves as people who build technology is: are we doing the best job that we possibly can to build technology in a way where, 40 years from now, our children and our grandchildren will look back and say, “Wow, this was beneficial. This mattered. This made all of these things that we care about, justice and the human condition, better, not worse”? We’d better get it right.
JEAN-PHILIPPE COURTOIS: My guest today is a leader who thinks deeply about how technology will affect people, communities, and society. He’s a very dear colleague of mine, sitting with me on Microsoft’s senior leadership team: our CTO, chief technology officer, Kevin Scott. Kevin plays a key role in shaping Microsoft’s AI strategy. He grew up in the rural town of Gladys, Virginia, a place that’s been and will continue to be disrupted by emerging technologies, and he’s very mindful of the potential impact of AI on the workers of towns like Gladys. He’s the host of the brilliant Behind the Tech podcast and recently published Reprogramming the American Dream: From Rural America to Silicon Valley―Making AI Serve Us All, which I highly recommend. It was an enormous pleasure to sit down with Kevin to talk about the technology innovation process and its role in serving the public interest, and to dig into his thinking around responsible AI: what it looks like and how it works in practice. Hope you enjoy the episode. Here’s Kevin.
In the Positive Leadership podcast, one of the first things we get people to do is to reflect on the moments that have shaped them into the person and leader they are today. I’d like to start at the very beginning of your story, back in Gladys, Virginia. What was it like for you as a kid growing up? Who or what was the biggest influence on you and your core beliefs, Kevin?
KEVIN SCOTT: The biggest influence on my ethos and values was my family. We certainly weren’t well off. We at times had a lot of financial difficulty, and at best my family was living paycheck to paycheck. I don’t think we really paid much attention to that, or at least that’s not the thing that we were always talking about. My dad was a construction worker and my mom hustled with a bunch of little side jobs that she did. She was a tax preparer; she ran a nursery school for the kids at church for a brief period. But her primary job was taking care of the family. I just had the luxury of being around a bunch of people who were hard-working, very curious, who always worked with their hands, who were always tinkering with something, and who felt like they had an obligation to their community to do something valuable.
JEAN-PHILIPPE COURTOIS: People who care for their community, right?
KEVIN SCOTT: Yeah.
JEAN-PHILIPPE COURTOIS: Who were serving others in their day-to-day lives as well.
KEVIN SCOTT: Yeah.
JEAN-PHILIPPE COURTOIS: Something that’s important to you, I think, Kevin, and that’s also referenced in the title of your book, is the American dream: this idea that anyone, regardless of where they were born and what class they were born into, can attain their own version of success. It’s an idea that has served as a powerful motivation for many people in the US for many years, and around the world as well. Did the American dream seem like something that was within their grasp?
KEVIN SCOTT: Both my parents and I believed when I was growing up, and I still believe this, that if you go figure out some way to be valuable to your fellow human beings, if you try to understand what they want and need and then go work as hard as you possibly can to meet those needs, good things will happen to you. It doesn’t mean that you end up CTO of Microsoft; I tell this to my kids all the time, there’s so much luck involved in that, it’s a crazy thing to try to fixate on and plan for. I would have been okay no matter what my outcome was by just applying this set of ideas: work hard, try to discover what’s valuable to do, and then go do that thing.
It really isn’t a promise. It was kind of implicit in the way that my parents talked to me about this and in how they acted. You just have to go do the work, and you’ve got to grind through it. Sometimes it works and sometimes it doesn’t. It’s maybe even more valuable to do something, fail, learn something from the failure, and then get up and go again than it is to succeed. Sometimes success teaches you nothing.
JEAN-PHILIPPE COURTOIS: Yeah, absolutely. So in a way, very much a hard-work ethic, and also almost an entrepreneurial mindset, the way you describe it, Kevin.
KEVIN SCOTT: Yeah. I didn’t even know the word entrepreneur when I was growing up, but I think that’s what I was surrounded by, a bunch of entrepreneurs.
JEAN-PHILIPPE COURTOIS: Early on, as a child, you’d been saving up what little money you could, and at one point you bought your first computer, a Radio Shack Color Computer 2, if I’m not mistaken.
KEVIN SCOTT: Right.
JEAN-PHILIPPE COURTOIS: And I think it was a huge deal, a big deal for you personally. Actually, as an undergrad, you studied English, which you kept as your minor in college, with computer science as your major. You actually almost went, and I think people don’t know that, for an English PhD. So what happened? What made you decide between mastering programming versus mastering your native language, English?
KEVIN SCOTT: Part of it’s pragmatic, right? I was constantly trying to figure out how I was going to support myself and support my family. I’d gotten a scholarship that partially covered the cost of going to school, but not completely. So I had student loans; I was on my own financially. Part of that was my decision, and part of it was my parents’ financial circumstances. I just had to take care of myself, and eventually I had to take care of my entire family. So part of it was a pragmatic thing. But I enjoyed both equally, and I think that’s a tremendous luxury, because sometimes it’s hard to even find the one thing that you really enjoy. So I picked computer science. And the story that I told myself about picking computer science is that there’s a way to do computer science that is artful, that is grounded in the humanities, and that I wasn’t really forsaking anything by making this choice.
JEAN-PHILIPPE COURTOIS: I think you wrote your first line of code back in 1983. Interestingly enough, Kevin, that was the year I joined a French software startup company. I was doing some modest BASIC programming on an Apple II, because I was not very good at it.
KEVIN SCOTT: Nice.
JEAN-PHILIPPE COURTOIS: What would be your advice to a young 18-year-old Kevin today who would love to learn how to develop a piece of software without being able to go through a computer science degree? Which studies, if any, should he consider? How would you go about that today?
KEVIN SCOTT: Yeah, I don’t know whether I personally would have done anything different because young Kevin would have looked at these AI systems and would have wanted to understand what’s behind the scenes of them and how they work. I think the advice that I would give people more generally is find whatever it is that you really, really have a passion about that is valuable to other people and try to use these tools that you now have available to you to go do that thing. I have a 13-year-old and a 15-year-old who are thinking about what it is that they want to be. My 15-year-old daughter has wanted to be a cardiothoracic surgeon for the past... it’s a weird thing that she got fixated on when she was really young because she watched too much Grey’s Anatomy. She had decided that she’s not a tech person, but her mind has been changed over the past few years. The way that she’s using tools like ChatGPT and GitHub Copilot to help her solve these medical problems and healthcare problems that she’s really motivated by is quite interesting. I think that’s the mindset. It’s like, all of these problems we need to go solve, all of these things that matter and are important, we have increasingly powerful tools to go do them. You don’t have to be a programmer anymore to get the computer to do unbelievably powerful things for you in service of what it is you’re trying to accomplish.
JEAN-PHILIPPE COURTOIS: It reminds me of one of the episodes I had with Sal Khan and the way he’s been developing the Khanmigo AI agent, which is increasingly becoming, when you ask the question about the future, a kind of personal tutor that goes beyond just the course you’re taking. You could even picture a day when it becomes a lifelong learning agent for all of us, so that all of us keep learning a bunch of things differently.
KEVIN SCOTT: I think we all learn differently, and we all learn at different rates. This idea that you could have an infinitely patient tutor that could adapt to your particular learning needs, very individually, I think is just a really interesting, powerful idea.
JEAN-PHILIPPE COURTOIS: I think it’s a wonderful analogy. Again, another very recent podcast episode I had was with a French chef, Thierry Marx, who is well known in this country and beyond and is doing some amazing work socially as well. He’s someone who loves talking about this philosophy of learning by doing, putting his hands into what he calls the fundamentals of cooking. And this is the way he grew up as a kid, actually, in the kitchens of big chefs. It’s a very nice analogy, which I love, when it comes to AI. Let’s go back to the roots of AI with Alan Turing, a pioneering figure in computer science and AI. In his seminal 1950 paper, Computing Machinery and Intelligence, he asked the following questions: Can machines think? Can machines do what we, as thinking entities, can do? And he said it is possible to make a machine which will learn by experience. Obviously, we know that his work laid the foundational concepts that continue to guide AI research and development today. A lot has happened since Alan Turing, obviously, Kevin. What did you personally experience as the most stunning breakthrough?
KEVIN SCOTT: When Turing wrote that paper, the term artificial intelligence hadn’t been coined yet. It was five years later, at a workshop at Dartmouth, when a bunch of mathematicians and computer scientists and information theorists sat down and said, “Hey, we want to build a machine that can in many ways replicate what a human brain does. We’re going to call this study, this new field, artificial intelligence.” And we have had a really hard time over the entire history of the discipline even defining what artificial intelligence is. We’ve got this track record: we think we understand what intelligence is, what our own intelligence is, and then we go build machines to replicate some aspect of it. And as soon as we accomplish the thing, we redefine what the measure of intelligence is. We used to think that the apotheosis of human cognition was these very challenging games like chess and Go. In 1997, IBM built a system called Deep Blue that beat Garry Kasparov, who was the world chess champion at the time. It was stunning. Nobody thought it was possible. As soon as it happened, we figured out, “Oh, this really isn’t the breakthrough that is going to tip us over into this idea of artificial general intelligence. This solves a very narrow thing.” Funnily enough, I think chess is now a more popular human pastime and hobby, and you can even think about it as a sport, given how competitive it is and how much joy people derive from competing and watching the competition, even though since 1997 there has never been a human chess player as good as a computer at playing chess. We just don’t care anymore.
JEAN-PHILIPPE COURTOIS: But we keep playing. We keep playing.
KEVIN SCOTT: I think that’s a really important thing to keep in mind. To answer your exact question, the most stunning breakthrough in my career probably happened recently. It was when GPT-4 finished training and proved out this idea we had: that you could follow a particular path in developing artificial intelligence and it would result in an AI system that was very generally useful for doing a whole bunch of cognitive tasks.
JEAN-PHILIPPE COURTOIS: That was a big aha moment.
KEVIN SCOTT: I had a strong sense that it was coming, but I didn’t know that it was coming this fast. The thing doesn’t think, though. Going back to your Turing quote, he said two distinct things. In one he’s talking about thinking, and in the other he’s asking whether we can write a piece of software that emulates aspects of what our brains do.
Absolutely, we can do the latter, but the former is almost a philosophical thing. When I say, “Hey, JP, do you think? What are you thinking about? How do you think?”, that’s a very human question. It’s very different from what these software systems are doing.
JEAN-PHILIPPE COURTOIS: Despite the fact that we spend so much time thinking, there’s no universally accepted definition of what thinking actually is. It’s a complex phenomenon that involves cognitive, emotional and sensory processes and can be approached from different perspectives, such as philosophy and neurology as well as software engineering. And because we don’t really understand how people think, it’s hard to say what actually counts as thinking. Turing’s imitation game was a clever way to get around that problem. If a computer behaves as if it’s thinking, he decided, then you can assume it is. That sounds like an odd thing to assume, but we do the same with people: we have no way of knowing what’s going on in their heads. New AI innovations like Microsoft Copilot and GPT-4 from OpenAI appear human-like, but it’s done in an entirely different way from what occurs in the human mind. As Kevin says, it is not thinking. But it’s clear that generative AI has emerged as a transformative force, profoundly influencing various aspects of our lives.
Going back just a couple of years, can you allude to that? You were really at the inception point of the partnership between Microsoft and OpenAI. Back in the spring of 2018, I think, when you first went down to meet them, OpenAI obviously already had a relationship with Microsoft using Azure. And you met the team, including Sam Altman and also Ilya Sutskever. What were your first impressions meeting those guys and seeing the work they were doing at the time, before GPT-4, 4.5 Turbo and more coming up now?
KEVIN SCOTT: I will tell you, I was sceptical going into the meeting. My scepticism in general is just a personality disorder, not anything specific to the team. I’m always slightly sceptical of things that I haven’t fully understood. And sometimes I will go fully understand them and I’m still sceptical afterwards. This was one of those rare instances where I went in and... I’d been building machine learning systems at that point since 2004. I was not prepared, going into that meeting, to have my mindset about what was possible in the next handful of years change. And I walked out of it absolutely convinced that we were on the cusp of something very interesting. We were still years away, but they had a framework and a way of thinking about the problem they were tackling that was very scientific; there was a methodology: “Here’s the experiment that we’re going to run. This is what we will learn from it. This is what it will tell us about the experiment we will go run after that.” I thought we had the foundations for a partnership where we could do some valuable things for them to support this very disciplined development of a very useful AI system. The way that it was going to be useful and interesting for us was that it wasn’t trying to solve a narrow problem.
A lot of the AI systems before then were built to solve a narrow problem, like play chess, predict which ad someone is going to click on. This was a, “We think we’re going to be able to build a system and as a function of scale it’s going to become more broadly useful, and you’re going to be able to build dozens and hundreds and thousands and millions of applications on top of it.” As a platform company, that’s exactly the mission Microsoft has been on for almost five decades now.
JEAN-PHILIPPE COURTOIS: For sure. I’d like to come back to your book and your role here at Microsoft, because, as I got to know you, you’ve always been quite sceptical, yes, but quite optimistic as well, someone who is driving the positive side of innovation, I would say. That’s also my bias, I must admit, as you know. But you also experienced early on in your childhood, as you said, poverty and the loss of industry in the region where you grew up. And so you understand well the need to invest in rural infrastructure.
Let’s imagine today, Kevin, that you are now in charge of shaping the social and economic development of the Virginia you know so well. What would you do to transform the lives of the people in those rural communities and to reshape traditional sectors like construction, textiles, furniture, agriculture, and more? In other words, how would you reprogram the American dream in your home town?
KEVIN SCOTT: I think the thing that you should do, and this is a set of things that we ought to be thinking about doing fairly broadly, actually, not just in the rural parts of the United States but in the developing world and in all parts of the industrialised world as well: job one is you actually have to have the infrastructure in place to support people using AI tools to solve the problems that they want to solve. Part of it is: do you have a platform available, is it open, is it getting cheaper and more powerful over time, can you operate it freely across global boundaries? It also means some very basic things. One of the struggles that folks in my community have is that the internet just doesn’t work well. My mom and brother live near the local telephone building, the exchange for Gladys, and they have very good internet. My uncle, who lives three miles away, has 300-kilobit-per-second internet, which was great in 1990 and is pretty horrendous now. You can’t leverage AI if you can’t connect to AI. I think a bunch of the stuff you need to go do is really educate people to be entrepreneurs, in a sense. How do you take young people, expose them to this full palette of tools they now have available, and help them think critically about the interesting problems there are in the world that they could put those tools to use solving?
It’s a very different educational paradigm than the one that we’ve had since the beginning of the industrial revolution where it was like, “You, human being, need to learn these basic skills. You need to be literate. You need to know a little bit of math. You need to have structure in your life where you can get up in the morning and go do something for this number of hours, work in teams and understand hierarchies and all of this stuff.” That was very industrial revolution sort of learning. That’s not really what we need right now.
JEAN-PHILIPPE COURTOIS: It’s infrastructure, it’s skilling, and it’s also lowering the bar to access, not just in terms of cost but also in terms of confidence. Because there’s a lot of fear. In many countries, as I travel the world for our company, you can still see a lot of confusion and fear and anxiety about just getting access to AI. So I think there’s a lot to be done there as well in terms of education.
KEVIN SCOTT: Yeah. If you look over decade time horizons, we’ve got some very challenging things happening in the world. You have a warming climate, which is going to impose a whole bunch of structural changes on the world. We basically have designed the world for one temperature regime and we’re about to enter another one. We have lots of work to go do to adapt to that and try to engineer an-
JEAN-PHILIPPE COURTOIS: Redesign.
KEVIN SCOTT: ... energy economy that won’t make the temperature regime worse than it’s already going to be. But maybe the one that we don’t talk a lot about is demographic change. Almost everywhere in the industrialised world we are either in population decline or population growth is decelerating, and you can see a point in the near future where you’re going to tip into decline. If you think about it, in our lifetime, and this isn’t an imaginary time horizon where I’m worrying on behalf of children, which I do, but we can also worry on behalf of ourselves, these demographic changes in the world will mean that you just don’t have a big enough workforce to go do the work of the world unless you have major breakthroughs in productivity.
And major breakthroughs in productivity mean technology. Whether it’s AI or something else, something has to happen, or we will have a profoundly different world than the one that we occupy right now. AI is currently the best shot on goal for productivity that we have.
JEAN-PHILIPPE COURTOIS: I fully agree with you. It’s fascinating to see all that happening, hopefully in our lifetimes, as you say, Kevin. I’d like to shift gears a little bit; you actually alluded to the times we are going through right now in ’24. This is actually the biggest election year in history, as you know, with countries representing more than half of the world’s population, some four billion people, sending their citizens to the polls. I was reading a book I got from Bill Gates, of course our founder and first CEO of the company. He wrote this book in 1995 called The Road Ahead. He said, “We don’t have the option of turning away from the future. No one gets to vote on whether technology is going to change our lives.” So, Kevin, do you agree that no one gets to vote on using AI in their lives?
KEVIN SCOTT: Oh, they totally get to vote. We get to do what we do because society has given us permission. We don’t have a right to do what we do.
JEAN-PHILIPPE COURTOIS: We don’t.
KEVIN SCOTT: We currently have permission to do what we do. That permission can be revoked if we do it irresponsibly, if we are not listening very carefully to what people are telling us. If you have that mindset, you basically have more fluid voting, because instead of waiting for extreme votes, people are voting every day by telling you what they do and don’t like. And if you’re listening and adapting what you’re doing, hopefully you get into an equilibrium that people are happy with, that they feel confident about and hopeful about. You can’t just jam stuff into society. In the limit, it really doesn’t work that way.
JEAN-PHILIPPE COURTOIS: I fully agree with you, Kevin. I was really interested to listen to the last State of the Union address by your president, Joe Biden. He said we must address the growing concern about AI-generated voice impersonations, and he proposed a ban on the misuse of this technology to create deceptive voice recordings, as well as measures to manage the risks associated with AI, especially in areas like society, the economy, and national security. How should governments in the first place, and, as we’re just starting to talk about, tech companies like Microsoft and our peers, embrace what we call responsible AI? What does it mean, actually? If you could share your thinking in a practical way.
KEVIN SCOTT: Responsible AI is sort of like responsible governance or generally accepted accounting practices. It is about saying, this is what we believe responsible AI means. It is being transparent about it: publish what your guidelines are. Then it’s having a set of processes and controls inside of your company that make sure you adhere to the standard, being willing to engage with stakeholders about evolving the standard, and having some degree of auditability of your processes for adhering to it. I think every company that is developing and deploying this technology needs a framework like this, the same way that you’ve got board audit committees and you have outside auditors looking at your books. When a thing is important enough, you need some kind of framework so that collectively everybody can believe that you’re doing the thing right. Some of the stuff that I think the President called for is perfectly reasonable; there are things that we ourselves have been very, very hesitant about. We’ve had technology that can do interesting things with voices for a while, and for a while we’ve chosen not to deploy it, because the risk of people using it in fraudulent ways, in ways where the harm far outweighs the benefits, seems like something we need to think carefully about. And maybe at some point you will do it, because there are also positive uses for the technology. One of the things that we’ve been doing in my team: there’s this neurodegenerative disease called ALS that will ultimately cause, among other things, people to literally lose their voice. We have been using this technology to archive people’s voices. Before they lose the ability to speak, we get a big enough set of samples of their actual speech so that an AI system could give them their voice back.
JEAN-PHILIPPE COURTOIS: Which is wonderful.
KEVIN SCOTT: That is an unbelievably powerful and beneficial thing. With all of this stuff, it’s about the balance. What good does it enable, what bad does it enable, and how do you make sure the good far outweighs the bad and, when the bad is really bad, that you can prevent it from happening and very, very quickly detect and mitigate misuse.
JEAN-PHILIPPE COURTOIS: No, I’m with you. I’m with you, and this obviously goes beyond just our company, to the industry and others. In my 40-plus years in tech, I’ve never seen as much urgency coming from governments themselves to regulate. It’s happening in Europe. It’s happening in the US. It’s happening in Asia. And I think we are all around the table to make sure we agree on those guiding principles and then implement some tools and processes. And you know that well, Kevin, because you also help lead responsible AI work with some of your peers at Microsoft, to make sure that in our own way of building AI tools we do it in a responsible way. I don’t know if you want to touch a bit on that, and on the way our customers can benefit from those practices as well.
KEVIN SCOTT: We’re a platform company, so whenever possible, when we’re building tools and infrastructure to help ourselves build products and to launch and operate them safely and efficiently, with all the other attributes you want of products, we try to make that infrastructure available to customers. We have an increasingly sophisticated set of infrastructure that we’re offering in Azure to help people build and operate AI-powered products responsibly. And increasingly the tools that you use for responsible AI are powered by AI themselves, which is one of the really interesting things: the more powerful AI gets, the more powerful your AI infrastructure gets. One thing that I will say, since we’re talking about policy and regulation: the same way that when you’re developing a technology there’s a balance between benefits and harms that you’re trying to weigh, there are also positives and negatives to regulation. If you regulate too aggressively and too early, you may curtail the development of technology that could be unbelievably beneficial. I think, again, this is all about societal equilibrium.
We have to do what we’re doing in a way where we have enough societal trust, where we can develop the technology and let people lay their hands on it and evaluate for themselves whether it’s good or bad, so that we can inform the regulation that actually has to happen. If you don’t do that, you’re kind of regulating into a vacuum; you’re imagining what a future might be, and you don’t even know that it exists. You risk two things: regulating the wrong thing, where you pass something and it doesn’t produce the effect you intended, or inhibiting something that could be very, very valuable.
JEAN-PHILIPPE COURTOIS: I would say, as with any innovation, but really with this one in particular, it’s really a balancing act between policies at the highest level, technology innovation, processes, and people, people, people. We won’t dig into that here, but obviously people are critical to the way you build and develop AI.
KEVIN SCOTT: Yeah, people, people, people is the right thing. Honestly, that’s the most important thing. It’s more important than the regulation. It’s more important than the implementation detail. If you are not doing something that is valuable, that is serving the public interest, that is making everyone’s lives net better, then what are you doing? You’re honestly wasting your time. You don’t need regulation; you need to wake up and stop doing what you’re doing.
JEAN-PHILIPPE COURTOIS: That’s a foundational principle. Now I’d like to shift gears a little bit and use one of the favourite quotes, he has many, of our common manager, Satya. A poem; Satya likes poems, Kevin, as you know. It’s a quote from the Austrian poet Rainer Maria Rilke, who once wrote, “The future enters into us, in order to transform itself in us, long before it happens.” So, Kevin, is AI entering into us? Is AI entering into humans for a better world or for a dystopian world?
KEVIN SCOTT: Obviously, when a technology emerges it absolutely changes us, and changes us, I think, in pretty deep ways. Just imagine you and me, for instance: how we process the world and see the world is deeply influenced by a whole bunch of technologies that have emerged over the past 40 years. My great-grandmother, who passed away many years ago, lived to be 100 years old. She was born in the 19th century and lived through the first stages of the first internet bubble. She went from a world that had no electricity, no cars, no airplanes; she didn’t have indoor plumbing. She was in this barely industrialised Victorian world when she was born. And by the end, we had airplanes, space travel, satellites, the internet, mobile phones. Technology has an extraordinary impact on not just how we live our lives but how we perceive reality. Whether that’s good or bad is up to us. It’s about the choices we make about how we use the technology, what we decide to put it to. And I think, if you look over the long arc of history, it has been by and large positive. In a sense, it has to be, otherwise you have no progress over very long periods of time. If you choose the wrong path every time, you will collapse. We occasionally choose wrong things and then we course correct. We’ve been building, building, building to a world that is so much more prosperous than the world that you or I were born into, or certainly the world that my great-grandmother was born into.
JEAN-PHILIPPE COURTOIS: I’d love us to pursue that, because we’re almost coming to an end, Kevin, with more positive scenarios in mind. You’ve already mentioned a couple of great examples like ALS; I could refer as well to eyesight. I had a wonderful visit to India two or three weeks ago and saw this wonderful multilingual platform they built. Microsoft Research has actually been partnering with them to enable a farmer in a remote village (they have some 270 languages, 22 of them official) to speak his own language and have it not just translated to English and Hindi, but to connect, using a copilot-style ChatGPT-like interface, and get access for the first time ever to a public subsidy he was actually entitled to. It can apply to farming, education. What are you most excited about with regard to the application of generative AI when it comes to the most positive breakthroughs for society?
KEVIN SCOTT: There’s so much stuff. The one that you mentioned in particular, that pattern, I think is really interesting. One of the things that modern generative AI ought to be very good at is helping everyone navigate the complexity of the world. The example that you gave is a good one. We’ve been doing some work with an organization in the United States called Share Our Strength, whose mission is to feed kids. It’s just sort of amazing how much childhood hunger there is and what the knock-on effects of childhood hunger are. A tremendous amount of what gets diagnosed as ADHD is actually just kids being hungry. When you’re hungry, you can’t focus, you can’t learn, and it becomes a snowball effect in people’s lives. The thing that we’re doing with Share Our Strength is trying to figure out how to use generative AI to help people navigate the entitlement programs that already exist, that are already funded, where the money has been appropriated and is sitting there unspent because people either aren’t aware of it or can’t navigate the bureaucracy to sign themselves up for it. This ought to be a great thing for generative AI to help with. I think that, as a pattern, is really, really extraordinarily powerful. I think educational equity is another thing. My daughter goes to a very good school here in Silicon Valley with excellent teachers.
My school did the best it could do back in the ‘70s and ‘80s in rural Central Virginia, but if I look at my school versus my daughter’s school, not even a comparison. The thing that I think that AI could do is close that gap to try to make a higher quality of education and learning and enablement more equally available, not just across the United States but across the world.
I think there’s some truly exciting stuff happening right now where the fundamental pattern driving progress in AI, this notion of self-supervised learning, that you can transform compute and data, and increasingly just compute, into AI systems that can solve very complicated problems, is applicable to more than just language. We’re doing some really fascinating work at Microsoft Research on how you can apply these techniques to physics and chemistry and biology, to help with building the next electrolyte for energy storage or designing a therapeutic molecule that will cure a disease. I think it’s underappreciated because it’s more complicated for a lay person to perceive what the progress is, versus what’s happening with language agents or linguistically based agents. The progress is extraordinary. These are not little steps we’re making. They’re unbelievable, in some cases the biggest jumps forward in progress that we’ve ever seen. It’s an exciting, exciting, exciting time.
JEAN-PHILIPPE COURTOIS: I share that excitement with you. Of course, I’m lucky enough, being part of Microsoft, to see some amazing, mind-blowing developments going on. Let me finish with a last couple of questions, Kevin, with even more positive stories, because that’s the core, the spirit of my podcast, as you know. I know that with your wife, Shannon Hunt-Scott, you started the Scott Foundation back in 2014 with a desire to give back to the Silicon Valley community where you work and raise your family. I think the initial focus of the Foundation was on supporting leading-edge organizations addressing critical needs such as childhood hunger, early childhood education, and women and girls in technology. What is it that you and your wife are most proud of? And do you see yourself getting more and more involved in such foundation work, as our friend Bill has done for the last few decades of his life now?
KEVIN SCOTT: I think what Bill has done with the Gates Foundation is really extraordinary. The work that they’ve done in public health is just extraordinary. They have chosen acute problems that are not getting solved fast enough and that don’t have a natural mechanism to get solved; if something doesn’t change, they’re going to be just as bad or worse 50 years from now. The thing that we focus on at the Scott Foundation is trying to identify and relieve structural poverty cycles, things that lock people into intergenerational poverty. The reason that’s our focus is that both my wife and I grew up not terribly privileged. I mean, we had some privilege, because even though my dad went bankrupt a couple of times and we had a lot of financial hardship to deal with, we were more privileged than someone who was born in equatorial Africa, for instance, in the 1970s. I think you always have to appreciate what you have. And then both my wife and I had a sense of, “Look at ourselves.” And we’re like, “Wow, there are a handful of things that happened to us that, if they hadn’t happened, we would have had very, very, very different lives.” The point of the Foundation is: what can you do to engineer a helping hand, to engineer some of the good luck that my wife and I had, so that so much isn’t left to chance in getting people to break out of a poverty cycle? And we try to invest in organizations that are entrepreneurial, where they’re thinking about how you can take an investment and, with leverage, with tools like AI, for instance, go tackle a problem and get a big benefit.
JEAN-PHILIPPE COURTOIS: A big impact, yeah. In the positive leadership philosophy, we learn how to be self-aware and how to build our self-confidence, but also how to build our own positivity—which I think is super important, by the way—in the way we show up and in the way we connect with people. What are your daily routines, Kevin, or maybe habits you have from time to time, to grow your positive leadership with your colleagues, customers, and all the people you connect with in your life?
KEVIN SCOTT: I think it is really important for your own positivity and the positivity that you project in the world to be grateful, to just, no matter how crappy your day is, to ground yourself in, “What’s one thing that I can be grateful for today?” The reason I think it’s so important is because once you start feeling gratitude, there’s almost a snowball effect to it. Once you’ve felt the first thing you’re grateful for, it’s easy to see all of the other things that you should be grateful for. And it helps you be humbler. I think it’s one of the foundations of having compassion in your life, of being able to put yourself in someone else’s shoes, not feeling what they’re feeling but to just try to understand what their point of view is and why they might be doing what they’re doing, even if your knee-jerk reaction is, “This is irritating.” There’s always a reason for everything. Just a little bit of gratitude can go an awfully long way.
JEAN-PHILIPPE COURTOIS: Kevin is so right. A little bit of gratitude can go an enormous way. Appreciating the good things in our lives cultivates positive emotions such as joy, happiness, and optimism. There’s so much research that says a gratitude practice makes you enjoy life more. It’s such a good idea to incorporate it into our daily routines or rituals. For example, you could start and end each meal with a moment of gratitude, or you could place visual reminders of gratitude around your home or your workspace. This could be a gratitude jar where you drop in notes of appreciation; a gratitude board, where you pin up pictures or quotes that inspire gratitude. Or just simply a sticky note on your mirror reminding you to be thankful.
I think you are still a young man, at least relative to me; everything is relative in life, I think. It’s a bit early to be thinking about your legacy, but what story would you perhaps like people to tell about you, Kevin, in the future?
KEVIN SCOTT: I don’t know. Honestly, I’m deeply uncomfortable with anybody paying any attention at all to what I’m doing. What I would like is the opportunity to work with people who care about what it is that they’re doing. The thing I’m often proudest of: I just got a note this morning from a friend, someone I hired 19 years ago, actually. They sent me a note on the 19th anniversary of their start date at this thing we were both doing back then and thanked me. I should be thanking them, because the thing that makes me feel best about what I’ve done is having some tiny positive impact on someone’s career, and just seeing the things those people go on to do after we are no longer working closely together fills me with joy. I kind of don’t care what everybody thinks of me, but having the people I’ve worked with feel like, “Hey, it wasn’t a waste of time working with Kevin Scott,” that’s what I would like. Obviously, I care a lot about what my family thinks. I want to support them in their ambitions and dreams, and have my wife be successful and my children be successful and do good things in the world, and believe that they had a husband and a father who supported them in being their best selves.
JEAN-PHILIPPE COURTOIS: That was such a wonderfully enriching conversation and a great way to end a podcast. He’s so thoughtful and engaged with the world around him and with the innovation process. Used responsibly, AI can transform the world for good, driving much-needed social and environmental innovation at scale. As Kevin says, these are really exciting times. Thank you so much, Kevin. I have enjoyed tremendously, of course, our partnership in the company, but also having you as a friend on this podcast. You’ve been incredible. Thank you.
KEVIN SCOTT: Well, thank you very much for having me on. Thanks for putting positive energy out into the world.
JEAN-PHILIPPE COURTOIS: Thank you, Kevin. I’m Jean-Philippe Courtois. You’ve been listening to the Positive Leadership Podcast. If you’ve enjoyed this episode, then please leave us a comment or a five-star rating. If you’d like more practical tips on how to develop your positive leadership, then head over to my LinkedIn page and sign up for my free monthly newsletter, Positive Leadership & You. Thanks so much for listening. Goodbye.