
Note: Logpodge posts bundle several short essays, so that your inbox is not overwhelmed and I get to shorten the list of "essays I'm working on."
In this Logpodge, I briefly consider two ideas: complex adaptive systems and social intelligence. Both are important to the conceptual frameworks I've been spending time with lately: thinking about large AI models as a cultural and social technology, AI as a normal technology, and the revival of management cybernetics.
Had Nature any supple Face
Or could she one contemn -
Had Nature an Apostate -
That Mushroom - it is Him!
—Emily Dickinson, "The Mushroom is the Elf of Plants"
That mushroom
My favorite description of complex adaptive systems is from Merlin Sheldrake's marvelous book about fungi, Entangled Life. Sheldrake explains in language both literary and scientific how mycelial networks—the root-like (or neural network-like) structure of a fungus—function in relation to other life forms. Mushrooms are usually described as the fruiting bodies of these networks because these above-ground manifestations of the mycelium serve as a mechanism for the organism's reproduction.
Sheldrake introduces complex adaptive systems in his discussion of "wood wide webs." He explains why forests are far more than a collection of trees. They are a complex ecosystem comprising many living things constantly interacting, including mammals, insects, and fungi; forests are "dynamic systems in shimmering, unceasing turnover." He explains:
Entities that behave in these ways are loosely termed "complex adaptive systems": complex, because their behavior is difficult to predict from a knowledge of their constituent parts alone; adaptive, because they organize into new forms or behaviors in response to their circumstances. You—like all organisms—are a complex adaptive system. So is the world wide web. So are brains, termite colonies, swarming bees, cities, and financial markets—to name a few. Within complex adaptive systems, small changes can bring about large effects that can only be observed in the system as a whole. Rarely can a neat arrow be plotted between cause and effect. Stimuli—which may be unremarkable gestures in themselves—swirl into often surprising responses. Financial crashes are a good example of this type of dynamic nonlinear process. So are sneezes, and orgasms.
Are large AI models complex adaptive systems? Sure, why not? I like this label because it helps us conceive of large AI models as a social and cultural technology analogous to other complex adaptive systems, like markets and democracies. It lets us describe surprising behaviors of AI systems in terms of other complex systems rather than as magic or as a vaguely defined but hitherto unknown form of cognitive power.
If large AI models are analogous to mycelial networks, their outputs are analogous to mushrooms. This frame avoids two common metaphors: AIs as brains and AIs as alien intelligences. Mycelia are non-human organisms, yet they produce uncanny and strangely human-like outputs. We share an environment, and yet we don't understand in any great detail how mycelia work. Sound familiar?
Large AI models are mycelia that produce strangely familiar artifacts out of the digital substrate of the decomposing human culture of the internet. These artifacts appear on our screens in disturbing, delightful, or mundane forms. Are they safe to consume? How can they appear in such familiar forms when we understand the processes that construct them so incompletely?
These are important questions, and asking them through new metaphors, lightly held, will get us further than continuing to fight about defining intelligence.
Social and individual intelligence
Despite our tendency to think of intelligence as a property of individuals, humans are social organisms, so no matter how it is defined, intelligence is as much social as individual. Keeping this in mind helps us apply the concept of complex adaptive systems to questions about humans. For example, science can be understood as a long-running project that applies intelligence to understanding the world. We celebrate individual achievement within that project, but every Nobel Prize winner has a long list of collaborators, teachers, and predecessors who were part of whatever breakthrough or insight is recognized in the award.
If humans are complex adaptive systems, then scientific and social institutions are complex adaptive systems made out of complex adaptive systems. To talk of intelligence in laboratories and conferences, board meetings and classrooms, or, for that matter, families and governments, means thinking about intelligence in its social dimension.
If we want to think of large AI models as a form of intelligence, we need to see them as operating within human collective intelligence. We may use them in our individual work—as an individual co-intelligence, a personal tutor, or a personal assistant—but if we focus on such uses without attending to the larger social dimensions, we miss the "wood wide webs" for the trees.
Collective intelligence is essential to Alison Gopnik's argument that large AI models are a cultural technology. This is the main point of her retelling of "Stone Soup" as a parable about AI. Social intelligence is central to other thinkers writing about large AI models as a social and cultural technology, for example, Cosma Shalizi's collective cognition and Brad DeLong's anthology intelligence.
The more we are able to understand intelligence as both individual and social, the better we will be at thinking about the potential social value of large AI models and other forms of machine learning.
Here is John Dewey in Chapter 6 of The Public and Its Problems on the general problem of thinking in terms of the social versus the individual:
One of the obstructions in the path is the seemingly engrained notion that the first and the last problem which must be solved is the relation of the individual and the social:—or that the outstanding question is to determine the relative merits of individualism and collective or of some compromise between them. In fact, both words, individual and social, are hopelessly ambiguous, and the ambiguity will never cease as long as we think in terms of an antithesis.
Where do I start?
A few readers have asked me where they might learn more about large AI models as a social and cultural technology. I'm working on a guide to the key thinkers and texts. If you want that delivered to your inbox, hit the button below.
What I do for money these days (alert: shameless self-promotion)
Some readers tell me they are interested in how my educational consulting relates to my writing, so I occasionally write about my attempts to earn a living while writing on the internet for free. In my review essay of John Warner's More Than Words, I used my favorite Memphis Minnie song to characterize AI Log as the "gravy I give away for free" while "selling my pork chops" in the form of consulting and public speaking. Think of this as a Pork Chop Update.
Meet me in Montreal or Kansas City
I'll be keynoting Explorance World 2025 in Montreal in early June with a talk called "The Boring Revolution: How AI Will Change Education and the Workplace, and Why You Might Not Even Notice." My plan (still drafting at this point) is to translate Arvind Narayanan and Sayash Kapoor's AI as a Normal Technology and Henry Farrell's The Management Singularity for an audience of middle managers and technology administrators from higher education and business. I will throw in a little of Peter Senge's The Fifth Discipline to help me make the case that all large organizations are learning organizations. The big question is whether I use the word cybernetics when recommending The Unaccountability Machine to this crowd.
I'll be in Kansas City, Missouri, on May 16 for the Midwest Symposium on Social Entrepreneurship, which has a track dedicated to the effects of AI on civic and social innovation. We're still working out the details, but it looks like I'll be participating in panels and talks throughout the day-long event. The Symposium is open to the public, and you can register here.
As a presenter, I can get a few guests in for free or at a reduced price for either event. Subscribers should email me at rob.nelson at hey.com if interested.
Watch me do my thing on a screen
I turn down most invitations to present online for exposure because I get so little value from talking at a screen. I make exceptions for educational technology companies I admire or when the conference is extraordinarily good.
Perusall Exchange hits both marks. This year, the online-only conference will run May 19–23. I'll have a 15-minute recorded talk, followed by a Q&A. My talk is titled A Phaedrus Moment. I argue that this is not the first time a new cultural technology has forced us to question what we value and to consider whether and how a new tool has educational value. I use Socrates's famous lines about writing in the Phaedrus to talk about large language models. By happy accident, I titled my talk in a way that gets me top billing on the event's program. The event is free, and there are some fabulous speakers, many of whom will be talking about AI.
I've written before about my work with Leepfrog Technologies, the people who build the CourseLeaf products to support academic operations. They invited me to present a webinar to their clients and were kind enough to let me post the slides for those interested. The presentation introduces ways to think critically about using artificial intelligence in academic operations. For those at institutions using CourseLeaf, you should be able to get access through your Registrar's office. Contact me if you would like more information.
Thanks for reading these advertisements for myself. If you want to buy some of my pork chops, you can find more information here.
Note: In the interest of transparency, I disclose the companies I work with, so readers know about my work for pay and can evaluate how it influences my writing.
AI Log, LLC. ©2025 All rights reserved.