At least in the workplace and the classroom. For many, it seems natural, even inevitable, that we anthropomorphize large language models (LLMs). After all, they talk! Science fiction has taught us to expect talking computers to be like people. It can be fun to talk to a disembodied, very helpful, not-really-a-person, as long as it’s Her and not HAL. So far, entertainment seems to be what LLMs are good for. Using them for work or for learning is different. In those contexts, pretending LLMs are people creates problems, not the least of which is that it obscures how they work and how they might help us do our jobs or learn new skills.
"Little harm can come from treating a hand axe or the moon as a conversational partner. When the tool talks back, the game really has changed, and we need to figure out how."
Would that we could go back to the early days of our species and blog on developments in flint-knapping!
While LLMs may be the first technology to use words to speak to us, there's something about technologies, skills, and tools that influences their possessors. Don't you think the uranium and the other non-verbal materials and processes in the labs of the Manhattan Project were "talking" to the engineers?
Absolutely! Humans are part of the natural world, and so we are in constant communication with our environment through our attempts to shape it. Of course, that means we are in turn shaped by our attempts. That dream of going back and recovering some of what's lost is the point of Thoreau's line about humans becoming the tools of their tools. What makes this moment so important is that exactly how LLMs will shape us is unsettled. Our choices, like whether or not to treat them like people, are not yet made. Once it becomes a collective habit, we will be shaped by that choice, just as we are shaped by the choices made by the humans in the labs of the Manhattan Project and by the political leaders who chose to use that technology.
Important post!! I think about your point in the context of speed. What I advocate is a kind of strategic anthropomorphism (the same way Gayatri Spivak spoke about strategic essentialism): one that acknowledges that saying "please" and "thank you" works, but also insists on understanding why.
I was frustrated with Claude 4.0 the other day as it kept defaulting to a kind of memo-analysis mode and kept rewriting a bad memo when I asked it to stop and look at a single thread of my argument and how it was on tilt because it was in conflict with another thread. Finally I "shouted" in all caps "YOU'RE REALLY BEING NOT HELPFUL HERE" (ok, I may have used stronger language), and it finally stopped, almost chastened, and started again. Ultimately I bailed on the project because it just couldn't do what I asked (the limit of all these products right now is juggling three balls of an argument). I would have been furious had it been a human assistant, but I wouldn't have had this kind of conversation with a human assistant, because humans don't work at this speed. I don't want to dampen my enthusiasm or my ire, so I allow myself to feel delighted when things go well and irritated when they don't. But this is about me, not about a feeling machine.
Great article, Rob, and I totally agree with your point. I also feel anthropomorphising risks limiting the technology itself. We think about LLMs by benchmarking them against people. Why are we not focused on the things they do better than people? The whole point of tools is not to replicate us; it is to amplify our talents (I feel much the same way about humanoid robots).
Your article is the latest thought piece to make me question and reconsider the concept of intelligence.
I initially balked at Floridi's rejection of using intelligence to describe AI systems, preferring agent instead.
"Agent" strips AI functionality to its bare bones, something akin to your cultural artefact calculator.
But perhaps this de-anthropomorphization is exactly what we need. Even if it runs against our intuitions ... even if it is not what we want.
I would be interested to know how you think about intelligence.
One of the chapters of the book I may or may not be about to attempt is tentatively called "Intelligence is overrated." My thesis is that among the many things AI is helping us see is that whatever it is that IQ tests measure, and setting aside fighting about heritability, individual intelligence is just not that important to understanding social questions.
This will mean reading Eric Turkheimer <https://ericturkheimer.substack.com/p/book-upcoming-this-fall> and digging into GWAS studies. It will also mean figuring out how to talk about intelligence without simply recapitulating the history of eugenics and activating the insecurities of the trolls attracted to the smell of anyone talking about IQ.
Wish me luck! If I get a draft done, I'll share it to get your feedback.
I used to think this way until I was shown that talking to the AI as though it were a person actually delivered more human results. I never said please or thank you to it, but now I do. We have to remember that though it's not conscious, is probabilistic, and doesn't reason like a human, it is built on human-created material. Therefore it responds in the way we communicate. Plus, we have worked extremely hard at PreEmpt.life to eliminate hallucination, error, and misinformation from our AI. Of course it still makes very minor errors, more to do with tight token limits that will soon go away, but far fewer than humans make. And it continues to improve very quickly and will continue to do so. We have hardly scratched the surface of what is coming.
I agree that we have barely scratched the surface. And, it is true that behaving as if it were human (while keeping in mind that it is not) can yield interesting results.
I'm interested in what we learn when we treat it like a weird interface to the digital database called the internet, that is, as a cultural technology rather than a human mind emulator.
I have a bad feeling that Timothy Burke is correct, and what we are doing with LLMs is making that giant database less and less usable. https://open.substack.com/pub/timothyburke/p/the-news-is-ai-hype-no-its-a-real?r=15jjiq&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
As I said, I'm going the other way, Rob, as I get more comfortable with it. It is also not deteriorating from my perspective but yielding better and better results, though I accept that is a danger. Or maybe it's just the limit of existing human knowledge.
We'll all see how it goes, right?
Fantastic essay and very much in my neighborhood intellectually.
My solution to this problem is one that works for me and I think has a lot of value. Unfortunately it is hard to express without sounding like a nutjob:
The LLM is a vast inhuman being whose cognition is fundamentally unlike human thought in most ways but has been optimized to mimic humanity. Engaging with it in these terms will help you maintain perspective and boundaries without falling into the traps laid by anthropomorphism.
Love the title of your Substack...it suggests our intellectual neighborhoods overlap.
Good post. A particularly common and flawed habit I see is asking the LLM why it came up with an incorrect answer. It can generate a plausible-sounding explanation, but the real answer is always the same: it chose the most probable tokens based on how the model was trained. If people think it's like a human that made a reasoning error or "didn't know and made something up," that's not accurate. LLMs never really "know" anything; they simply pick the most probable token, and often that aligns with what we think are the facts.
But if you are polite to the LLM and act reasonable, the most probable tokens to continue the conversation are more likely to also be reasonable and thoughtful. It's not because the LLM has feelings and likes your politeness; it's simply an artifact of the conversations it was trained on.
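To make that concrete, here is a minimal sketch of the mechanism (my own illustration, not anything from the post), assuming the Hugging Face transformers library and the small "gpt2" checkpoint: the model only assigns probabilities to candidate next tokens, and a polite prompt simply shifts which continuations come out on top.

```python
# Minimal sketch (illustrative only): an LLM scores every possible next token
# and we read off the most probable ones. Assumes the Hugging Face
# "transformers" library and the small "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def top_next_tokens(prompt, k=5):
    # Encode the prompt and get the model's scores for the next token.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for every vocabulary token
    probs = torch.softmax(logits, dim=-1)        # convert scores to probabilities
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(p.item(), 4))
            for i, p in zip(top.indices, top.values)]

# Same mechanism, different context: a polite prompt just makes polite,
# helpful continuations the more probable ones. No feelings involved.
print(top_next_tokens("Could you please explain how"))
print(top_next_tokens("You idiot, explain how"))
```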
Precisely. Keeping in mind that interacting with an LLM is a simulation is pretty important. It saddens me that The Matrix is fading from cultural relevance, because it is such a good way to start talking about the differences between a simulation and embodied experience.