Language is not a virus, nor are the new language machines - but discourse about those language machines is not unlike a virus! The generation of thoughts and feelings around these technologies, which leads writers to generate writing about this, can now be simulated by LLMs generating writing. It's analogies all the way down (even as it isn't). Great piece.
For some reason Hollis Robbins' piece from yesterday, with its brilliant extended analogy between mechanical LLM production and mechanical knowledge production in the social sciences, seems apropos. Her analogy is structurally perfect and yet feels inevitably partial once human meanings are brought into the frame. Curious what you thought.
I'm on the record as a big fan of Hollis's work. The best thing about that piece for me is that it brings scale into focus. We take for granted what it meant for higher ed "to have scaled" over the course of the last 250 years. Social science is one cluster of now distinct disciplines, but during that period, social science went from a loosely defined group of bureaucrats and hobbyists trying to use statistical and scientific methods to understand humans to a global, industrial operation organized through peer review and funded by governments.
That's why I love talking about the introduction and intrusion of LLMs as akin to the introduction of the dynamo. Electricity replaced steam in a lot of manufacturing, but in 1890 it was not clear to anyone how to do it. It took decades to figure out. If LLMs, with their stochastic processing of cultural information, replace the electronic computer in a similar way, what does the university of twenty or thirty years from now look like?
Most academic humanities writers are interested in talking about what a bad idea it is for Silicon Valley to be making all the decisions about the development of this tech and listing reasons why LLMs cannot replace people. I do this too, for the very good reasons that it is a bad idea and they can't.
Hollis is working other territory, and doing a damn fine job of it.
This is such a good question and I spend too much time on it. For about six months, as I told @joeljmiller, I kept thinking of LLMs as a ball of string. Then I settled on automobiles (while faculty were still riding horses: https://hollisrobbinsanecdotal.substack.com/p/the-faculty-are-riding-horses), and now I've gone back to balls of string.
Ball of string is a wonderful double metaphor as well: a technology containing multitudes of language folded in vector space, and something that won't reveal its full scale and dimensions until it's unfolded (well, unraveled) over time.
Very thought-provoking and insightful post, Rob.
"typing is writing, so much so that writing an essay by hand feels impossible."
So true. I can't imagine writing an essay by hand anymore, though I did it well into college.
I love that it was John Warner who gave me that insight.
I still take most notes by hand, but I agree that, if writing long form, I'm most likely typing, at least once my brainstorming is done.
Thank you both!
I like ball of string, but sometimes you gotta mix those metaphors. Faculty borrowing students' keys in that piece is too good to give up.