On the use of historical analogies in writing about AI
Something big is happening, but how do we describe it?
Paradise
Is exactly like
Where you are right now
Only much much
Better.
—Laurie Anderson

Something big is happening, no doubt, but be careful when reading what Matt Schumer wrote about it, and not just because he used AI to write it. Writing down words generates meaning, but those meanings adapt; they change over time. Language is a virus, remember? Using words to describe something can focus our understanding of it, or obscure it.
A good analogy works like the aperture on a camera lens, brightening and sharpening the subject of a photograph while blurring the context around it. The f-stop is the setting that controls how much light passes through the lens; together with shutter speed, it determines exposure and depth of field. Comparing a process, feeling, or idea in the past to an aspect of the present brings light to your subject, making it vivid and clear.
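For readers who want the arithmetic behind the metaphor: the f-number is the focal length of the lens divided by the diameter of the aperture, so a lower f-stop means a wider opening, more light on the subject, and a shallower depth of field. A quick worked example (the 50 mm lens is just an illustrative choice):

$$N = \frac{f}{D} \quad\Longrightarrow\quad D = \frac{50\,\text{mm}}{2} = 25\,\text{mm at } f/2, \qquad D = \frac{50\,\text{mm}}{8} \approx 6\,\text{mm at } f/8$$

Wide open at f/2, the subject is bright and the background blurs; stopped down to f/8, more of the scene comes into focus.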
Our great poet of AI hype cycles, Max Read, writes that we are passing through the “Wuhan phase,” so named because it has become common to compare how people feel about using the latest AI coding assistants to how they felt when they realized that Covid-19 was a real, 1918-level, shelter-in-place kind of pandemic. This latest wave of hype has reinforced my sense that large language models, what I call transformer-based language machines, are writing technology. They order signs, or, if you prefer, tokens, into meaningful sequences.
Here’s how writing works for me. A piece of writing gets my attention. I read it. Then I start generating words, which form into sentences, sentences into paragraphs, and paragraphs into an essay. The whole time, I go back through and reorder words until I get something that works. Claude Code does something analogous: it reads something that provides a context and starts generating tokens, which form into statements, statements into functions, functions into modules, and modules into applications, going back over the process again and again to make sure it works.
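To make the mechanical core of that analogy visible, here is a minimal sketch of the loop every transformer-based language machine runs: read a context, score the possible next tokens, append one, repeat. It assumes the Hugging Face transformers library, with the small GPT-2 model standing in for any such machine; it illustrates the idea, not how any particular product works.

```python
# A minimal sketch of autoregressive generation: the machine reads a
# context, then orders tokens into a sequence one at a time.
# Assumes the Hugging Face `transformers` library; GPT-2 is a stand-in
# for any transformer-based language machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "A piece of writing gets my attention. I read it. Then"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate twenty tokens
        logits = model(input_ids).logits[:, -1, :]   # score every candidate next token
        probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities
        next_id = torch.multinomial(probs, 1)        # sample one token
        input_ids = torch.cat([input_ids, next_id], dim=-1)  # append it and repeat

print(tokenizer.decode(input_ids[0]))
```

Note that the loop only ever appends. The going back over, in a tool like Claude Code, comes from scaffolding wrapped around a loop like this: output is fed back in as new context, reread, and regenerated.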
Is this a bad analogy? You tell me.
Stories about machines past
We learned a few years ago that a machine that writes essays is something big. Now, apparently, we’re learning how big a something it is that machines can write software applications. These discoveries generate thoughts and feelings, which generate discursive cycles of excitement and dread about AI. This is big for people who write. And so, they grab pens and paper, keyboards and speech-to-text apps, their favorite AI chatbot, and get to writing.
That last example, using a transformer-based language machine to write, is best understood in the context of the history of automation. Of course, as a historian, I would think that, and then write an essay about it. One social function of writing is to make sense of change in relation to what the writer finds important. Kenneth Burke calls this a frame of acceptance, the more or less organized system of meanings by which a thinking person “gauges the historical situation and adopts a role with relation to it.”
Writers of history have a habit of insisting that the past is important to whatever big is going on. What’s worse is they treat whatever history they read about as if it explains what is happening now. I read a lot about education in the United States just before and during the nineteenth century, so I can tell you all about how changes in universities and schools during this period were an inflection point, a fancy term for something big in the development of modern society. And, of course, I write and tell stories about how this history is relevant today. Here is one.1
It starts with the way the automation of labor changed around 1750 with the steam engine and new factory methods of production. Over the next two hundred years, manufacturing and transportation were increasingly automated mechanically through steam, electricity, and internal combustion.
Knowledge production and dissemination were industrialized as well, I would argue, based on reading books by Adam Nelson. The modern university and the common school are knowledge factories that developed alongside factories used to manufacture goods. This process of industrialization has not yet ended, but, thanks to the invention of the electronic computer, the automation of intellectual labor began to intensify starting around 1950, leading to another inflection point.
Inflection points are not instantaneous. They are dramatic constellations of events, not singularities, at least in the stories historians tell. They unfold over time, encompassing many social and technological changes, but also the persistence and recontextualizing of social practices and institutions from before. Think of how handwriting and monasteries have persisted even as the world around them changed.
Until 2022, writing itself, the ordering and reordering of words to create meaning, remained in human hands even as those hands were augmented and trained by mechanical and information technologies. Writers in the twentieth century had complicated feelings about how tools like the typewriter and word processor made writing feel automatic. On one hand, there’s the famous Truman Capote burn of Jack Kerouac and the Beats: “That’s not writing, that’s just typing.” On the other, T. S. Eliot and Hemingway both understood typing as important to their writing process even though, like most of their peers, they started off using pencil and paper.
For writers like me and John Warner who learned to write in the 1980s as typewriters gave way to word processors, typing is writing, so much so that writing an essay by hand feels impossible. Warner, one of the most trenchant critics of twenty-first-century language machines, describes the speed of machine interfaces introduced during the last century as freeing: “learning to type was a kind of liberation for my thoughts as I could finally capture them at something close to the speed with which they happened.”
You can describe the intrusion of ChatGPT into writing as a sharp break with the past, or as the gradual acceleration of innovations like autocomplete, spellcheck, and grammar checking that personal computers brought to writing. These functions began to blur the line between augmentation and automation, but it is clear that transformer-based language machines automate much more of the writing process than word processing programs did. Yet, just as the most advanced manufacturing technologies need humans to configure and manage the machinery, even the most advanced language machines will need input and oversight from humans to produce useful language.
Stories about machines present and future
This means we can treat complex machinery that automates the process of writing as analogous to the machinery that automates the production of an automobile or an airplane, or maybe better, the process of driving or flying one. Cars and airplanes operate more autonomously today than they did ten, twenty, and fifty years ago. That trend seems likely to continue, and to generate feelings as humans become less important to their immediate operation.
When Matt Schumer used an AI chatbot to write about how it feels to be “no longer needed for the actual technical work of my job,” he very likely spent time reviewing the output, reordering the words to make sure they cohered into something that worked as he intended. The same is true of software engineers who have to check the output of Claude Code, and of the drivers of Teslas and pilots of Boeing 737s who have to pay attention to the operations of their vehicles. Don’t let the changing meaning of “autopilot” confuse you. Left unattended, any complex machinery will, to use a nineteenth-century phrase, run off the rails, or, to use a twentieth-century phrase, crash and burn.
Why do so many assume that language models, unlike cars and airplanes, will suddenly start operating without human attendants and operators? I think the blame falls on science fiction, that marvelous alternative to thinking about the past, one that projects our experiences into an imagined future. Many readers of science fiction experience the language machines in use today and analogize them to fictional machines like the Maschinenmensch from Metropolis and the operating system from Her.
To believe OpenAI and Anthropic are building one of these entirely fictional, entirely autonomous entities requires believing that language models do not just model language but model the world through language. You have to believe that they will gain powers of perception and cognition through machine reading and writing alone. You have to believe that ordering and reordering tokens will lead to something analogous to consciousness.
No matter how complex they become, I think it is unlikely that language machines will become completely autonomous anytime soon. That’s because I insist that analogies to the past are more useful to understanding the present than stories about the future. It is also because transformer-based language machines have led me to believe that relations among language, writing, and thinking are far more complex than I had understood.
Language Machines: Cultural AI and the End of Remainder Humanism (2025) by Leif Weatherby helped me think more carefully about language, writing, and thinking in the context of AI. Here is my review.

Predictions and analogies
The fact that machines order and reorder signs in ways that are meaningful is something big. Yet the past suggests that the change these dramatic events bring will unfold more slowly than the feelings of excitement and dread imply. That’s because the historical analogy to the pandemic is a bad one. So is the analogy of language to a virus from outer space, even though in the hands of William S. Burroughs and the writers of Pluribus it has made for some very interesting stories.
The idea that ChatGPT and Claude are alien intelligences obscures our understanding of how they work. These machines are not natural phenomena like an infectious disease or an imagined alien superintelligence. Transformer-based language machines are better understood as normal technology or as cultural and social technology, information systems made out of collective human intelligence, which John Dewey calls our pragmatic intelligence and Brad DeLong calls our anthology intelligence.
The speed with which these machines will shape our society and our lives is hard to predict and only loosely related to what the technology itself is capable of. That’s because humans are capable of exercising control over how machines are put to use. We could have completely autonomous vehicles driving and flying around today if we didn’t mind the death and destruction that would follow.
Those who like to make predictions about superintelligent machines can point to stories about how heavier-than-air flight was treated skeptically by experts right up until the Wright Brothers made it happen. The past is filled with stories about scientists who confidently predicted events that did not and could not happen, just as there are plenty of science fiction writers who have told stories featuring magical inventions that turned out to be accurate predictions. This is why Charles Peirce urges us to adopt a doctrine of contrite fallibilism, to recognize that “our knowledge is never absolute but swims, as it were, in a continuum of uncertainty and of indeterminacy.”
I wouldn’t say uncertainty has increased since Peirce wrote those words, but the waters around writing seem much rougher these days. It’s hard for me to blame a writer of code who grabs hold of a prediction that the machine replacing them is a sign that superintelligence is almost here, or a writer of history who analogizes the adoption of AI in universities to the centuries-long process of factories adopting the steam engine and the electrical generator.
Prediction is hard, especially about the future, ha ha. Analogies are, too, especially when they are about the past. That’s because words are not like code, even if the word “writing” describes how both are produced. The experience of reading an essay is quite different from that of using software. Analogies are tricky like that.
One last analogy, this one returning to the blurry image of Laurie Anderson’s performance of “Language Is a Virus” that I placed at the beginning of this essay. In it, Anderson uses her left hand to intrude upon the words on the screen, moving it between the letter F and the word STOP, gesturing so that the sign briefly becomes F-STOP.

The conventions of AI discourse, heck, of writing period, demand I conclude this essay with an explanation of how this image illuminates the themes I explore and tell you exactly where I stand on writing with AI. Instead, let me admit that I am at sea, swimming in my own uncertainty and feelings of indeterminacy. If I know anything, it is the dangers of thinking I know something. As Peirce put it, “no blight can so surely arrest all intellectual growth as the blight of cocksureness.” So, rather than a call to action, let me leave you with questions about the gesture Anderson makes, which I take to be a metaphor for writing, the ordering and reordering of signs.
What if one of the performance’s robotic dancers made the gesture? Or a robot walked on stage and made it? What if the machine generating the sign F STOP had been programmed to place an image of a hand making the gesture? Or a complex machine had generated and controlled the gesture stochastically?
1. This story is what is known as a grand narrative, a polite term for the lies historians tell. Most historians and philosophers of history these days treat such sweeping stories about the past as speculation that explains much by ignoring most of what actually happened. Such stories are, like the science fiction written by Charlie Stross and his colleagues, lies told for money and status.
On a completely unrelated note, please hire me to give a talk on your campus or at your company’s next user conference. You can support AI Log by sharing this essay with others who might like it.

