One of the ideas I worked through last summer was why it is problematic to anthropomorphize large language models. Most of the concern seems to be about people becoming obsessed with chatbot companions. But people have been obsessing over fictional characters for as long as there have been fictional characters. No one is going to fall in love with Khanmigo. And the run of sad stories about chatbot love is as much an expression of our anxieties about digital technology’s claims on our attention as it is a new social phenomenon.
The problem is that treating large language models like people obscures how they work, not just in a technical sense but in how they might function in a classroom or a workplace. The essay below makes that argument as plainly as I can.
Speaking of anxieties about digital technology’s claims on our attention, I will soon publish a new essay about Walter Ong and Plato that speaks to my anxiety that the problem is not that Johnny can’t read, but that he won’t. It is based on a talk titled A Phaedrus Moment that I gave last week at Perusall Exchange 2025. You can see the notes and references for the talk here.
Let's Stop Treating LLMs like People
This is the second of three essays in a series arguing against anthropomorphizing LLMs in education. The other two are my exploration of questions about the educational value of historical chatbots and my review of Ethan Mollick’s Co-Intelligence.