Disanthropomorphize Chat
Language without mind and the ends of OpenAI

It wasn't sleeping, but where was it, within itself? But then it didn't, as he understood it, possess a self to be within. Not sentient, yet as Lowbeer had pointed out, effortlessly anthropomorphized. An anthropomorph, really, to be disanthropomorphized.
–William Gibson, The Peripheral
In the summer of 2022, Blake Lemoine became famous for telling reporters that a technology he worked on at Google, called Language Model for Dialogue Applications (LaMDA), was sentient. This was a sign, a glimmer, of what was to come. Not sentient AI. Rather, the experience, and stories about the experience, of playing language games with a generative pre-trained transformer, its outputs refined using reinforcement learning from human feedback. ChatGPT offered millions this experience, and the feeling that there must be sentience behind the sentences appearing on their computer screens.
Joseph Weizenbaum named this feeling after ELIZA, a computer program he designed and built in the 1960s to role-play as a therapist. The ELIZA effect is the effortless anthropomorphizing of a tool that talks with you. As Weizenbaum warned, this can be harmful. It was for Lemoine. Google fired him, calling his claims "wholly unfounded."1
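Weizenbaum's program was, at heart, a short stack of pattern-matching rules. Here is a toy sketch in Python of that mechanism; the rules are invented for illustration rather than taken from his DOCTOR script, but the trick is the same: match a pattern, reflect the user's own words back as a question.

```python
import re

# A toy, illustrative version of ELIZA-style pattern matching. These rules are
# invented for this sketch; Weizenbaum's DOCTOR script was larger and also
# swapped pronouns ("my" -> "your"), but the mechanism was the same.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # catch-all keeps the conversation moving
]

def respond(utterance: str) -> str:
    """Return a canned reflection of the user's input."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # unreachable given the catch-all rule above

if __name__ == "__main__":
    # Prints: "How long have you been worried about the chatbot?"
    print(respond("I am worried about the chatbot"))
```

That is the whole trick, and it was enough to make people confide in a teletype. The effect lives in us, not in the rules.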
In the spring of 2023, Sam Altman became famous for telling reporters that something sort of like sentient AI was nearly here and that OpenAI would ship it soon. Many were skeptical. They pointed out all the ways GPT-4 was not that smart and that artificial general intelligence (AGI), Altman's preferred term, was pretty vague. This skepticism was widespread, but journalists were more interested in the experience of chatting with ChatGPT and worries about students using it to do homework. And they really liked writing about AI startup founders and scientists confidently predicting what would happen next.
Three years in, that's changed. There is broad agreement that new models are better, but not super smart. The fact that ChatGPT is a homework machine is just one of many problems facing educators. Journalists are filing more stories about the AI bubble than about AGI. And lately, when they tell stories about the chatbot experience, it is in the context of increasing evidence that it makes mental health problems worse.
Seemingly Socially Conscious Language about AI
Recently, Microsoft's Mustafa Suleyman argued against what he calls Seemingly Conscious AI. Treating machines that generate language as potentially sentient or conscious
is both premature, and frankly dangerous. All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.
No doubt Suleyman is sincere in his social concerns. Still, it is worth considering the split in Silicon Valley between those who rely on the ELIZA effect to sell their products and those who worry about its harms. While the founders touted general intelligence, the sober-minded executives running some of the world's largest technology corporations kept their distance from the AGI bandwagon, even as they joined the AI parade.
Asked about AGI in early 2023, Google CEO Sundar Pichai sidestepped, saying "it almost doesn't matter [whether we call it AGI] because it is so clear to me that these systems are going to be very, very capable." What they are capable of was, and remains, the question. Along with Amazon and Facebook, Google has been swept along in the excitement about AGI and the capabilities of language machines, investing huge amounts of capital out of fear of being disrupted as they once disrupted. Rather than building their own large foundation models, Apple and Microsoft are approaching large language models by experimenting with models of different sizes and purposes, orchestrating and integrating them into their existing products and services.2
When it comes to the education market, Microsoft and Google matter the most. They sell the vast majority of system- and institution-wide information systems and services (they like to call them platforms), along with the cloud infrastructure to run them. Neither company cares about making chatbots entertaining, nor do the managers who sign those enterprise contracts. Having people chat the day away with a digital buddy is not the path to automating tasks efficiently and optimizing workflow. Copilot is less a smarter Clippy and more a strategy to make sure that asking Chat to do your work for you does not replace Microsoft's existing business. Gemini is the least chatty of chatbots, in part so it will help you get your work done. It is designed to be integrated into Google Workspace and Google Classroom.3
Calling this language-as-a-service helps distinguish upselling existing customers on language machines from selling people on spending time with a chatbot. Microsoft is not selling AI; it is protecting BI, its data analytics and visualization tool, along with its other brands like Excel, PowerPoint, and Word. It has rebranded Azure Cognitive Services as AI Foundry, imagining software developers as industrial workers forging new tools and products for the twenty-first-century knowledge factory. Microsoft wants your organization's IT staff to build natural language integrations with their tools and your data stores. It doesn't care which models provide the integrations, as long as Microsoft sells the service.
Google and Anthropic are similarly focused on selling the services of language machines to organizations, but they also compete in the chatbot arena. Currently, their models are on top. Yet it is OpenAI, with its brand-name advantage, that is in the spotlight as people begin to question the value of anthropomorphic chatbots. Understanding what ChatGPT does and does not do reveals the problems with OpenAI: both its path to profitability and the harms to its customers.
Riley was early to the psychological dangers of LLM-based chatbots, writing six months ago about a family member who has bipolar disorder. Like Lemoine's story three summers earlier, Riley's story was a sign of things to come. As families file lawsuits and tell their stories to the New York Times, OpenAI flounders, pointing to its terms of service while talking up its newly added parental controls. Moving fast and breaking things works until it doesn't. When you misunderstand the technology you are building, then it really doesn't.
Distinguishing language from cognition
Riley recently published an essay in The Verge making the case for this misunderstanding in terms of cognitive science, writing that the theory behind large language models as a path to superintelligence
is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.
The notion about thinking machines that underpinned this theory was made famous by Alan Turing when, in 1950, he "proposed to consider the question, can machines think?" His thought experiment became known as the Turing Test. When ChatGPT passed it, those who imagined Turing's musings as a simple science experiment were confused, believing it must be a sign of human-like intelligence. It wasn't.4
Generating meaningful language through computational processes and successfully applying those processes to word problems is a genuine accomplishment, but passing tests does not make a thinking machine. With its impressive outputs and failures to make sense of experience, ChatGPT makes clear that manipulating language is not the same as cognition. Riley calls upon the work of Alison Gopnik to remind us that humans demonstrate this truth as well. Babies engage in all sorts of cognitive processes prior to using language.5
This uncontroversial view goes back to the early days of psychology. In Principles of Psychology, William James describes cognition as a stream of thought, or, more famously, as a stream of consciousness.
Consciousness, then, does not appear to itself chopped up in bits. Such words as "chain" or "train" do not describe it fitly as it presents itself in the first instance. It is nothing jointed; it flows.
James explains the confusion that arises when language makes it appear that cognition happens through chaining or training discrete bits or tokens together.
The confusion is between the thoughts themselves, taken as subjective facts, and the things of which they are aware. It is natural to make this confusion, but easy to avoid it when once put on one's guard.
"Language works against our perception of the truth," he says, pointing in the same direction as Riley: an understanding of language and cognition, not as equivalent, but as relations in the stream of experience.6
When a person experiences the ELIZA effect, they mistake the words extruded from a language machine for "subjective facts" when the words are merely things about which the person is aware. The human-like outputs of language machines make the already confusing relations between thoughts and things more so. Chatbot makers and enthusiasts exploit this confusion, some more than others.
Why buy what OpenAI is selling?
Educators should, as Suleyman says, buy from companies that build technology for people, not to be people. And, there should be consequences for companies that harm people. Given its track record, OpenAI's models should be kept away from people, especially children. If effective regulation is too difficult or distant a goal, perhaps market competition can address the problem.7
There is great writing by educators on how to use (or not!) large language models in teaching. But I'd like to suggest more attention be directed to which companies to do business with (or not!). Purchasing decisions about language machines, including whether to use products offered for "free," are crucial to making digital tools serve students and teachers. An emphatic no to ChatGPT for Teachers does not necessarily mean yes to Gemini integrations in Google Classroom. There are other options.
Shaping markets in educational technology means teachers, technologists, and students mobilizing as citizens, consumers, designers, engineers, and educators to make, sell, and buy things for the better. This work started long before ChatGPT and will continue long after OpenAI is a footnote in the stories historians tell about the 2020s. As those stories play out, let's agree to disanthropomorphize the chatbots.
AI Log, LLC. © 2025. All rights reserved.
For more on competitive markets as a fix for technology that seems to delight us but actually makes us miserable, check out Cory Doctorow's new book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It.
Ben Recht has some ideas
Disanthropomorphizing chatbots is one way to correct for the mass ELIZA effect precipitated by ChatGPT. Another is demystifying Reinforcement Learning from Human Feedback (RLHF), the technique that made generative pre-trained transformers so effective at playing language games.
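Recht can demystify RLHF properly; as a rough illustration only, here is a toy numerical sketch of its two stages, with the "replies," preference data, and hyperparameters all invented for the example: a Bradley-Terry-style reward model fit to pairwise human preferences, then a KL-regularized nudge of the policy toward replies that score well.

```python
import numpy as np

# Toy setup: one prompt, four candidate replies. A real system works over token
# sequences with a transformer policy; here a "reply" is just an index.
REPLIES = ["curt answer", "helpful answer", "rambling answer", "made-up answer"]

# --- Stage 1: fit a reward model to pairwise human preferences -------------
# Each pair (a, b) means "annotators preferred reply a over reply b".
preferences = [(1, 0), (1, 2), (1, 3), (0, 3), (2, 3)]

reward = np.zeros(len(REPLIES))  # one scalar score per reply (Bradley-Terry)
lr_reward = 0.5
for _ in range(200):
    for a, b in preferences:
        # Probability the current scores assign to the observed preference.
        p = 1.0 / (1.0 + np.exp(-(reward[a] - reward[b])))
        # Gradient ascent on the log-likelihood of the preference data.
        reward[a] += lr_reward * (1.0 - p)
        reward[b] -= lr_reward * (1.0 - p)

# --- Stage 2: KL-regularized policy improvement against the reward model ---
def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

logits_ref = np.zeros(len(REPLIES))  # pretrained "reference" policy: uniform
logits = logits_ref.copy()           # the policy being fine-tuned
beta, lr_policy = 1.0, 0.1           # beta sets the strength of the KL penalty

for _ in range(500):
    pi, pi_ref = softmax(logits), softmax(logits_ref)
    # Objective: E[reward] - beta * KL(pi || pi_ref); ascend its gradient.
    advantage = reward - pi @ reward
    kl_grad = np.log(pi / pi_ref) - pi @ np.log(pi / pi_ref)
    logits += lr_policy * pi * (advantage - beta * kl_grad)

print({r: round(float(p), 3) for r, p in zip(REPLIES, softmax(logits))})
# The tuned policy shifts most of its probability onto the reply annotators
# preferred; the KL term limits how far it drifts from the reference policy.
```

Nothing in either stage is a mind at work: a scoring function fit to votes, and a probability distribution tugged toward higher scores. That is the demystification in miniature.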
If that sounds interesting, check out Ben Recht writing at arg min. He has been live-blogging his machine learning class, "Patterns, Predictions, and Actions." He also co-wrote the book of the same name with Moritz Hardt. His new book, The Irrational Decision, is available for pre-order. I cannot wait to get my hands on it.
Curious about AI Log and what I do around here? Check out this brief guide.
Google says they fired Lemoine because he "chose to persistently violate clear employment and data security policies that include the need to safeguard product information." According to this 2024 interview, Lemoine agrees Google had reasons to dismiss him unrelated to his claims about LaMDA's sentience.
Brad DeLong, writing at DeLong's Grasping Reality: Economy in the 2000s & Before, is very good at explaining the behaviors of the tech giants as they respond to the disruptions threatened by OpenAI and Anthropic. See here and here.
Stephen Fitzpatrick, writing at Teaching in the Age of AI, and Wess Trabelsi, writing at AI ⊠K12 = Wess, are very good on the battle between OpenAI and Google taking place in our classrooms. See here and here.
For an exploration of why "language machine" is a better term than "thinking machine" to describe transformer-based large language models, see Language Machines as Antimeme; or, even better, buy and read the book I review in that essay: Language Machines: Cultural AI and the End of Remainder Humanism (2025) by Leif Weatherby.
Angie Wang's illustrated story "Is My Toddler a Stochastic Parrot?" offers a wonderful meditation on this theme. See more Angie Wang here, including this marvelous piece of motion design that would make a good illustration for this essay.
James's example of how words work against our perception of truth is thunder. It signifies an event that is experienced bodily and understood within a stream of thought. If you read the passage below and compare it to what we know about how large language models work, you'll understand crucial differences between how language machines process inputs and how humans think.
A silence may be broken by a thunder-clap, and we may be so stunned and confused for a moment by the shock as to give no instant account to ourselves of what has happened. But that very confusion is a mental state, and a state that passes us straight over from the silence to the sound. The transition between the thought of one object and the thought of another is no more a break in the thought than a joint in a bamboo is a break in the wood. It is a part of the consciousness as much as the joint is a part of the bamboo.
The superficial introspective view is the overlooking, even when the things are contrasted with each other most violently, of the large amount of affinity that may still remain between the thoughts by whose means they are cognized. Into the awareness of the thunder itself the awareness of the previous silence creeps and continues; for what we hear when the thunder crashes is not thunder pure, but thunder-breaking-upon-silence-and-contrasting-with-it. Our feeling of the same objective thunder, coming in this way, is quite different from what it would be were the thunder a continuation of previous thunder. The thunder itself we believe to abolish and exclude the silence; but the feeling of the thunder is also a feeling of the silence as just gone; and it would be difficult to find in the actual concrete consciousness of man a feeling so limited to the present as not to have an inkling of anything that went before. Here, again, language works against our perception of the truth. We name our thoughts simply, each after its thing, as if each knew its own thing and nothing else. What each really knows is clearly the thing it is named for, with dimly perhaps a thousand other things. It ought to be named after all of them, but it never is. Some of them are always things known a moment ago more clearly; others are things to be known more clearly a moment hence.
A language machine names thoughts simply, each after its thing. It maps and chains, through predictive algorithms, some of the "thousand other things" using word vectors. But it has no stream of subjective experience that allows it to know these things as humans do. Much of what we call hallucination or confabulation is the inevitable product of computation acting on data without sense, meaning both inputs from sensory organs and a sense of time.
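To make that contrast concrete, here is a minimal sketch, with tiny three-dimensional vectors invented for illustration (real embeddings have hundreds or thousands of learned dimensions), of how a word-vector space relates James's thunder to its neighbors: by geometric similarity alone, with no silence just gone and no shock felt.

```python
import numpy as np

# Toy word vectors, invented for this sketch. In a real model these are
# learned from co-occurrence statistics in text, not from experience.
vectors = {
    "thunder": np.array([0.9, 0.1, 0.3]),
    "lightning": np.array([0.85, 0.15, 0.35]),
    "silence": np.array([0.1, 0.9, 0.2]),
    "bamboo": np.array([0.2, 0.3, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "thunder" sits near "lightning" only because their vectors align; nothing
# in the geometry encodes the bodily shock or the silence James describes.
for word in ("lightning", "silence", "bamboo"):
    print(f"thunder ~ {word}: {cosine(vectors['thunder'], vectors[word]):.2f}")
```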
OpenAI may be the biggest rotten apple in the barrel of AI startups, but it is not the only one. Marc Watkins writing at Rhetorica makes clear that Perplexity is just as much in need of removal.


Thanks for this Rob. I think you're absolutely right that we need to pay more attention to what companies are involved in the AI push. There are some interesting efforts to try to develop AI models that are made with data from the public commons, and are open source, though I can't say I have high hopes they'll be able to compete with Big Tech.
I would like a Picard Mode in my AI models so that they would address me as the captain explicitly as a computer, without all the frippery or any "would you like something else, sir."