Irina Jurenka, the research lead for AI in education at Google DeepMind, joined me and Diane to discuss the development and impact of AI tutors in learning. The conversation delved into how generative AI, specifically the Gemini model, is being shaped to support pedagogical principles and foster more effective learning experiences. Irina shared insights from her team’s foundational research, the evolution of AI models over the past three years, and the challenges of aligning AI tutoring with the learning sciences. She closed by reflecting on how these innovations may shape the future of education for the next generation, with a hope for a thoughtful blending of technology with the irreplaceable role of human teachers.
Diane Tavenner
Hey, this is Diane, and you’re about to hear a conversation that Michael and I had with Irina Jurenka from Google DeepMind. She’s the AI research lead for education there, and I think you’re going to love this conversation. It was fascinating for us to talk with someone who is literally working on the large language models from the education perspective, and at Google, no less, maker of some of the most ubiquitous ed tech products in the world at this point, and to hear her perspective on where AI is going, where her work is going, how she imagines it’s going to transform schools or not transform schools, and what’s important. It turns out to be a really interesting dialogue. I think you’re going to love it.
Diane Tavenner
Hey, Michael.
Michael Horn
Hey, Diane. It’s good to see you.
Diane Tavenner
It’s good to see you, Michael. I’m really excited for the conversation we’re going to have today. I find that while almost everyone is talking about AI, almost no one seems to know what they’re actually talking about, especially in the circles that I think we sometimes run in. And so I’ve always found that technology is a bit of a black box to many educators, and I think AI is exacerbating that. But today we get to talk with someone who works on and in the black box, if you will, and understands its intersection with learning. She understands that just about as well as anyone I know. And so bringing both of those together is Irina Jurenka, and she’s joining us on the show today. Welcome, Irina.
Irina Jurenka
Thank you.
Diane Tavenner
Irina is the research lead for AI in education at Google DeepMind, and we’ll unpack all that in a minute to help people understand what that means there. She’s exploring how generative AI can truly enhance teaching and learning, and not just by providing answers, but also by helping people learn more effectively and equitably. She recently led a landmark study called Towards Responsible Development of Generative AI for Education, which looks at what it takes to design AI tutors that are actually good teachers. Before DeepMind, Irina earned her doctorate in computational neuroscience at Oxford, studying how the brain processes speech and learning. Her work beautifully bridges neuroscience, machine learning, and education, all in the service of a simple but powerful goal: helping every learner reach their potential. We’re so excited to be in dialogue here with you, Irina. Welcome.
Irina Jurenka
Thank you. I’m really excited to be here.
AI for Equitable Education
Diane Tavenner
I thought we would just start with some really basic things to help people understand what you do. So let me start by asking, is it fair to say that you’re both a learning scientist and a technologist? Is that how you think of yourself? And will you explain to us what a research engineer does or is, and help us understand your team and what you do?
Irina Jurenka
Of course. So I actually don’t think of myself as a learning scientist. I would say maybe I’m a beginner learning scientist. I’m definitely just starting to learn about this field, but I’m very lucky to be working in a company where we do have learning scientists on the team, and we also work very closely with teachers. We actually just hired a teacher on the team, and there is another teacher who is consulting us, and we work closely with the academic field as well. Kim Collinger and others are advising us, and we’re very privileged to be in a position to have such amazing advisors. My role in education is relatively recent. I only started this project around three years ago.
Diane Tavenner
You know, we hear the term research engineer and you’re a research lead. What does that mean? You know, I think a lot of us are accustomed to the terms, you know, software engineer, but in the age of AI now we hear this term research engineer. So I’m wondering if you can help us understand.
Irina Jurenka
Of course. So I work at Google DeepMind, right. And DeepMind has always been effectively an academic lab. When I joined 10 years ago, it was a very small group, and it was incredibly academic. I joined as a research scientist, and essentially my job was to do foundational AI research and publish papers. That’s kind of where we’re coming from. And now we’re much more integrated within Google, but we continue on the same mission. So what DeepMind brings to Google is this research expertise.
So on my team we have scientists and engineers, but really the line between them is blurred. And our job is to really think about what the fundamental scientific problems are around language models, in our case at the intersection with education, where we really need to do this foundational scientific work to understand what the big problems are, how we find tractable solutions, and also to work out the solutions to these kinds of big scientific problems.
Diane Tavenner
So I guess one question I know, Michael, that has been coming up for you in some of the conversations you’ve been having is, do you engage with or interact with or directly influence the products at Google? So many of us in education are so familiar with so many Google products, so what is the intersection of your work and, for example, Google Classroom or many of the other products that we in the education field use?