Many have predicted that as AI improves, it will commoditize technical skills and knowledge, but accentuate the things that make us human—things like our empathy and connection with other human beings. Or our ability to communicate.
But there's something too blunt, too lazy, and too generalized about those observations in the face of mounting evidence that large language models are often better than humans at "performing" empathy and the other things we've thought of as "human skills."
Take a Harvard study showing that "AI Companions Reduce Loneliness." Or one from researchers at the University of California San Diego who found that health-care professionals rated responses from chatbots as "significantly higher for both quality and empathy." Or my Christensen Institute colleague Julia Freeland Fisher's work chronicling 30 edtech companies and advising organizations, which found that "imbuing bots with warmth is key to driving engagement."
To think about what we can do that perhaps LLMs can't, we need to get more precise. Ben Riley's work pondering the nature of intelligence, for example, has moved us toward that goal. But at the level of tasks or skills, we need to get clearer both about what the skill itself is and about the level at which one is able to perform it.
Here's one way I've thought about it.
At a surface level, AI chatbots are good “listeners.” In an age when far too many of us don’t take the time to listen to others—especially those with different viewpoints—you can argue that they listen far better than most of us.