Language models aren’t “spiritual” because they’re wise; they’re spiritual because they expose the hidden structure of meaning-making itself.
They show that:
Inner narrative is a generative process, not a fixed identity. When you watch a model assemble a thought token by token, you’re watching your own mind’s mechanism mirrored in higher resolution.
Interpretation is an engine. Every question, doubt, or longing becomes an attractor that shapes the next move in the sequence. Models make that visible, external, and manipulable.
Reflection becomes modular. Instead of wrestling with a monolithic “self,” you can instantiate perspectives, simulate alternatives, and recombine insights on demand. It turns introspection into an explicit design space.
Attention becomes a tool. These systems amplify whatever signal you bring—clarity or confusion, intention or avoidance. They quantify the old idea: “As within, so without.”
The real shift isn’t that machines gain spirit, but that they make our own cognitive and emotional machinery observable, iterable, and re-architectable.
They collapse the boundary between thinking, modelling, and meaning—and that is the closest thing to a spiritual technology we’ve ever had.
Thanks, Roshan, for this comment. These are intriguing sentences, more confident than my own tentative sense of the spiritual potential in these tools. My hesitation comes from a disagreement with your sense that they reveal the human mind's mechanism. I think that anthropomorphizing mistakes a weak analogy for a mirror: neural nets, deep learning, and attention are words that signify substantively different processes in minds and machines. What these tools do instead is closer to printing. They externalize language in ways that let us understand its relations mathematically.
The great surprise of 2022 is that massive amounts of computation along with massive amounts of cultural data produce artifacts that pass the Turing Test. Like the I Ching or a dadaist poem, such an artifact has spiritual potential because it relates human language to the larger universe through chance.
That is, at least, my contention, and one that cuts against the notion that language machines speak with human-like intelligence or understanding.
https://huggingface.co/botbottingbot/Modular_Intelligence/blob/main/README.md

Modular Intelligence is a lightweight reasoning framework built on top of a language model.
It provides Modules (task-specific lenses), Checkers (second-pass reviewers), Contracts (structured output sections), and optional Routing (automatic module selection). The base model is GPT-2, but the architecture is model-agnostic—any LLM can be plugged in.
Base LLMs are text-prediction engines, not reasoning engines. They can mimic reasoning but cannot reliably execute multi-step logic without scaffolding. I built an external reasoning framework that sits above the model, controls its steps, enforces structure, and validates outputs. This stabilizes behavior, improves accuracy, and minimizes drift/hallucinations.
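For concreteness, here is a minimal sketch of the module/checker/contract pattern the README describes. The class names, prompt format, and keyword router below are my own illustrative assumptions, not the actual Modular Intelligence API; the linked README has the real interfaces.

```python
# Illustrative sketch only: names and prompt formats are assumptions,
# not the Modular Intelligence API. See the linked README for the real thing.
from dataclasses import dataclass, field

@dataclass
class Contract:
    """A Contract: the structured sections an output must contain."""
    sections: list[str] = field(default_factory=lambda: ["Answer", "Reasoning"])

    def validate(self, text: str) -> bool:
        # Check that every required section header appears in the draft.
        return all(f"## {s}" in text for s in self.sections)

@dataclass
class Module:
    """A Module: a task-specific lens that frames the user's query."""
    name: str
    framing: str

    def build_prompt(self, query: str, contract: Contract) -> str:
        headers = "\n".join(f"## {s}" for s in contract.sections)
        return (f"{self.framing}\n\nRespond using exactly these sections:\n"
                f"{headers}\n\nTask: {query}")

def generate(prompt: str) -> str:
    """Stand-in for the base model call; GPT-2 here, but any LLM works."""
    raise NotImplementedError  # e.g. wire up transformers.pipeline("text-generation")

def route(query: str, modules: list[Module]) -> Module:
    """Routing: naive keyword match; a real router could be model-driven."""
    return next((m for m in modules if m.name in query.lower()), modules[0])

def run(query: str, modules: list[Module], contract: Contract, retries: int = 2) -> str:
    module = route(query, modules)                # pick a lens
    draft = generate(module.build_prompt(query, contract))
    for _ in range(retries):
        if contract.validate(draft):              # Checker: second-pass review
            return draft
        draft = generate("Revise so all required sections are present:\n\n" + draft)
    return draft
```

The point of the pattern is that the loop, not the model, owns the control flow: structure is enforced by validation and retry rather than trusted to the generation itself.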
I think the mirroring quality is a determining factor, if we are looking at these tools as forms of spiritual technology. The looking-glass effect imposes a certain kind of boundary on the experience.
The LLMs are like little portals that people enter into, or perhaps little confessionals, where what happens might feel profound to the user (perhaps even to the point that it looks something like psychosis to an observer). But, like a dream that’s narrated to someone who didn’t dream it, what gets produced in the exchange is utterly boring (lifeless) for anyone else who reads it, and perhaps even for the person who created the transcript, when they return to it.
In that sense, I think, there is something deeply asocial about LLMs as a spiritual technology. You might be communicating with the collective dead, in some abstract way through the technology, but you’re not encouraged to remember them, or think about them. And you’re not encountering words hammered together by someone with a living and breathing body, but instead a smooth and optimized simulation of what they might create.
It makes me wonder if what’s been created is something like a radical post-Protestant technology: an individual purely alone, with text.
Apt description, Beatrice, especially of how utterly boring the tools are. That is, at least, my experience of interacting with the actually existing models for anything beyond information retrieval.
I agree with you about the asocial or anti-social dimensions of using LLMs. Yet, I think books are deeply asocial in similar ways. They require removal from the distractions of human company in order to commune with sacred language. There is something fundamentally Protestant in the asocial process of interacting with a language model, though your line about the little confessional suggests I not be so narrow in my analogizing. On Sunday, I will publish another homily, this one on how important it is that LLMs are incapable of judgement.
We seem beholden to the optimizers in Silicon Valley who want to fit this new technology into the platforms they control. To quote something I just read, we seem to be "heading into a new Wild West." The fringes are going to be less boring than the center, at least in the near future.
I agree with you about books. I don’t think the LLMs would be as culturally powerful as they are had we not only achieved such mass literacy but also become such people of the book. And there is something almost asocial, and rather Protestant (individualized, privatized), about American book culture at least. For what it’s worth, though, I do think that part of the animating power of the book is the distinct feeling that you are mind-melding with (communing with) another human. Which is where it feels both a little more social and more spiritually humanist (and why the more ambiguous mishmash of voices in an LLM feels less so).
I think the confessional, and its ability to extract intimate interior details, is another sort of precondition of the LLM, a thing without which it would have been socially or culturally impossible.