Diane Tavenner and I sat down with Neerav Kingsland, a longtime education leader who is now at Anthropic, to explore the evolving intersection of artificial intelligence and education. Neerav shares his journey from working in New Orleans’ public school reform to his current role at one of the leading AI companies. Our conversation covers the promise of AI tutors and teacher support tools, the key role of application “wrappers” for safe and effective student interaction with AI, and the need for humility and caution, especially with young learners. The episode also delves into the broader societal impacts of AI, the future evolution of schools, and the increasing importance of experimentation and risk-taking for students navigating an uncertain, tech-driven landscape.
Linked References:
“Machines of Loving Grace,” Dario Amodei, October 2024
Michael Horn
Hi, it’s Michael. What you’re about to hear is a conversation that Diane Tavenner and I had with Neerav Kingsland, a longtime person in the education world who’s now at Anthropic, one of the major companies behind the large language models—of course Claude being theirs. And I had several takeaways from this conversation, but I just wanted to highlight a few for you. First was Neerav’s humility in constantly saying we don’t know the answer to the full impact of AI on education, let alone society, and just how honest that felt. Second, I was struck by how much he sees AI tutors as being a major use case for the technology, and he referenced things like Amira or Ello as perhaps examples of where this could be going. Third, teacher support was something he named, whether it be for efficiency gains or to help with facilitation and the like. Fourth, I was struck by how he repeatedly emphasized the importance of caution when it comes to young children interacting directly with AI, particularly the large language models themselves, and his belief as a result that wrappers—essentially applications, if you will, application layers—will be a critical part of how young people interact with AI, both to build in more content expertise and scaffolding, and also to provide protection from the AI itself.
And then finally, the last thing I’ll leave you with was when we asked him what perhaps would be most valued in the years ahead for schools, he said something that is perhaps undervalued today, and that is radical risk-taking. And that’s something that certainly landed for me. So I hope you enjoy this conversation with Neerav Kingsland, and we’ll talk to you soon on Class Disrupted.
AI’s Role in Education Trends
Diane Tavenner
Hey, Michael.
Michael Horn
Hey, Diane. It is good to see you, and I’m excited to get into this conversation that we’ve been teasing our audience with since the opening episode around AI. And then we had a few weeks to get our guests lined up. And I think, as today’s conversation will show, it has been well worth the wait, I suspect. But there are a lot of developments, obviously, with the large AI companies constantly making exciting updates, rolling out new applications and features and the like. And so you and I have been constantly updating our own thinking, emailing back and forth a lot, and I think today is going to be really exciting to continue to update our thinking.
Diane Tavenner
Yeah, I agree. I have conversations regularly with people who listen, who say, you know, this is the dialogue we want to have about AI and education. And honestly, I can’t think of a better person I’d like to be talking about this topic with. Our guest today is Neerav Kingsland. And Neerav is someone Michael and I have both known for many, many years. And the reason why is he worked in New Orleans in the post-Katrina days, helping to build the nation’s first public school system where over 80 to 90% of the students attend charter schools. He served as the CEO of New Schools for New Orleans and then in a variety of philanthropic roles with the Arnold Foundation and Reed Hastings, and as a managing partner at the City Fund. And then Neerav made this big jump a few years ago and joined Anthropic, which is of course one of the handful of leading foundational AI companies, known for its large language model Claude, and he leads strategy there. So with education and AI sort of covered, Neerav, it was hard for us to imagine someone better positioned to come and open this season and talk to us about the big picture of AI and education. And so welcome. We’re really happy to have you here.
Neerav Kingsland
So thrilled to be here. Thanks, Diane.
Michael Horn
No, well, so, Neerav, I want to start with this because I’d love to just understand your pathway from education to Anthropic. And I’ll say up front, Diane may already know some of this, but I don’t. On your LinkedIn, it looks like you effectively left education and moved hook, line and sinker, if you will, into one of the leaders in AI. So I would love to just understand, you know, what led to the move. What does your day job look like these days? Is education still present in it?
Just help us understand the pathway.
Neerav Kingsland
Yeah, totally. So I had been following and reading about AI since my time in New Orleans. The book that really hooked me was The Singularity Is Near, the Ray Kurzweil book, which is 20 years old now, but pretty prescient. I think he predicted AGI in like 2033 or something. And here we are. And so I think that opened my eyes to the possibility. I wasn’t technical enough to know how right he might be, but kind of big if true. After you read a book like that, you know, as a layperson, I just kept on reading, listening to podcasts, blogs and so forth. And then it was really when GPT-2 came out, so kind of, you know, maybe 2019.
Michael Horn
You were earlier than us.
Neerav Kingsland
Yeah, only because I was, like, trying to write poetry with it, and I was like, oh, my gosh, this is pretty good. Like, we might be knocking on the door. And so, you know, I just started thinking, like, these ideas and this technology could be the biggest thing to ever happen to humanity. And we might be getting pretty close. And so I started thinking very seriously about a career change there, and the transition was a little more gradual. I reached out to Open Philanthropy. I knew the leader there, a guy named Holden who ran that foundation—that’s Dustin Moskovitz’s foundation—and just asked if there was anything I could do. I knew they did a lot of AI safety work, and in a cool way, they had a lot of young founders, and I, at that point, was a little older and had scaled nonprofit and philanthropic work.
So I became an executive coach, just kind of an advisor to some AI safety founders, and did that on the side for about a year and a half. So I got to know the field, got to know a lot of amazing people, and eventually paths crossed with the Anthropic folks. And I was wowed by their mission and the team, and so joined about three years ago now. It was before ChatGPT, so it was really a small research org when I joined. And then, you know, the rest is history.
Michael Horn
It’s such an interesting trajectory. It’s such a cool example, frankly, of putting yourself in the middle of something. Right. To make that sort of a switch. How does it connect? Like, does it feel like you’re leaving education in some ways, or is there some other way of framing it in terms of, you know, your own purpose, life, work, the arc of the things that you’ve done in terms of impact on humanity? I’d just love to get that insight.
Neerav Kingsland
Yeah, I’m still very involved in education. I’m on the board at City Fund. There’s a new leader there, Marlon Marshall, who’s absolutely fantastic, so I stay connected through that. And then my first couple years at Anthropic, we were mostly just trying to stay alive. And I didn’t have much to contribute on research, so I was doing business work: sales, BD, fundraising. And I did that for about two and a half years. So I went from an education nonprofit to, like, SaaS salesperson for two or three years, which was great. I learned a lot, and, you know, it’s very important, obviously, for a company to succeed. And then about a year ago, our CEO, Dario, wrote this piece called Machines of Loving Grace, which I’d highly recommend, that set forth kind of a positive vision for AI and society.
And at that point, we were a little more stable on revenue, and so I and a couple others kind of raised our hands to go create an org within Anthropic called Mission Labs. And so that’s actually where I sit now, where we incubate projects that can help AI do good in the world. And so I’ve done some education work, helped get our life sciences kind of drug discovery work going, and I’m working on cyber defense now. I can go into more detail on any of that. But through that, I just feel insanely fortunate to sit both at Anthropic and within a part of the org whose mission is to incubate projects to do good with AI.
Michael Horn
That’s fascinating. That’s really neat, and great of Anthropic to create a division that’s focused on all those questions as this emerges. And we’ll make sure to link to that letter in the show notes as well, because I think that’s an important one for the audience to have as context. Just one more question before Diane, you can jump in there. But I’m curious. We’re getting all these hot takes right now: that AI is going to radically transform education, that AI is going to be the worst thing to ever hit education, that maybe it’s incremental at best, to, you know, that it actually obliterates the purpose of education itself in some pretty significant ways.
Give us sort of your headline of where you sit on that continuum, and you can provide the nuance. I just gave you the headlines to navigate.