These are notes, images, and links for my talk with colleagues gathered for the Explorance French Europe Summit. I will also be speaking in Edinburgh on November 19-21 at the Explorance Europe Summit 2025.
I pulled together my recent thoughts on chatbots and use of large AI models in higher education to talk about generative AI as analogous to other technologies that once felt dangerous and exciting, but today feel ordinary, normal, banal, even boring.
One weird thing about transformer-based AI models is that most of us access them through an interface that is over fifty years old. This new-wine-in-old-bottles aspect of ChatGPT makes it difficult to understand the potential of the technology and use it for new purposes. We anthropomorphize it as a companion, a co-intelligence, a helpful intern, when in fact it is a powerful new cultural technology that we are only beginning to understand. That understanding will come slowly, but it is coming.
To call AI banal or normal is to call attention to the historical processes shaping its development and diffusion, countering the hype created by those trying to sell it at a profit and speculating wildly about its potential. Like all inventions, including the dynamo, the internal combustion engine, and the open-stack library, AI technology will take a while to be incorporated meaningfully into our work. Autonomous vehicles are an example of an AI technology that seems about to pass a threshold that will soon make them feel as ordinary as your public library or the streetlight outside your house.
Will that happen with generative AI? Of course it will, though we cannot know with any certainty how things will play out. But doesn't it seem like ChatGPT is already beginning to feel like la révolution banale, the banal revolution?

I begin by asking the audience to apply the duck/rabbit image made famous by Ludwig Wittgenstein to the question of how we see generative AI.

Then we talk about the fact that the people who released ChatGPT in late 2022 did so as an experiment, a way to gather user data. They were as surprised as everyone else at the result: the creation of what is often described as the fastest-growing computer application in history.
Then we compare the interfaces of early chatbots like ELIZA…

to ChatGPT’s interface.
The key points of the talk are:
The natural language capabilities of generative AI have a much greater range of applications than simulating conversation.
Generative AI may be a general-purpose technology, but it is being used for specific purposes and narrowly defined tasks, including search within platforms and analysis of unstructured datasets of natural language.
Orchestrating various AI models built for different purposes and re-engineering academic and business processes (workflows) to use AI will be harder and more important than figuring out how chatty companions can help individual knowledge workers.
I use this article from The Markup to explain how organizations are using natural language search to manage information. Neutron Enterprise, a large language model developed by Atomic Canyon, is not deciding anything. It is not even summarizing the millions of pages of regulations and compliance guidance employees are reviewing in order to extend the operation of Diablo Canyon Nuclear Power Plant through 2029. All the AI model is doing is finding the relevant regulations and guidance for each decision the workers have to make, so the workers themselves can interpret the rules.
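To make the retrieval-only pattern concrete, here is a minimal sketch of how this kind of natural language search works under the hood, using the open-source sentence-transformers library. The model choice, document snippets, and query are all my own illustrative stand-ins, not anything from Atomic Canyon's actual system.

```python
# A minimal sketch of retrieval-only search: the model ranks documents,
# humans interpret them. All content here is invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-ins for millions of pages of regulations and guidance.
documents = [
    "Requirements for monitoring the effectiveness of maintenance at nuclear plants.",
    "Guidance on environmental qualification of electric equipment.",
    "Guidance on license renewal inspections for aging reactor components.",
]
doc_vectors = model.encode(documents, convert_to_tensor=True)

query = "What maintenance monitoring rules apply to aging equipment?"
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks the documents. The model decides nothing;
# it only surfaces what a human reviewer should read next.
scores = util.cos_sim(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```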

My favorite example from higher education is a natural language search feature developed by Courseleaf, a platform that manages academic processes like course rostering, registration, and academic planning. One of their modules is Curriculum Manager. This is a system faculty members use to propose new courses, and administrators use to manage the curriculum review and approval process. Natural language search gives faculty proposing new courses and programs, and deans reviewing those proposals, a better understanding of similar courses and programs already being offered.

The event where I am speaking is sponsored by Explorance, so the audience will see demonstrations of MLY, their AI model, in action. It is a purpose-built model for analyzing student comment data gathered through course evaluations. For more on MLY, see this section of my essay On Beyond Chatbots. Since Explorance paid for my trip and I have worked with them for years, let me point you to my disclosures notice. Thanks to friends at Explorance for the opportunity to spend time with colleagues from around Europe talking about higher education and technology.
In all my talks, I plug my favorite purpose-built technology for understanding complex ideas: books.
Specifically, I recommend the book AI Snake Oil. The authors, Arvind Narayanan and Sayash Kapoor, write one of the best AI newsletters. Their new book project, AI as Normal Technology, and their relaunched newsletter of the same name offer an alternative to the hype about imminent superintelligence and predictions of an immediate transformation of work and education. Treating generative AI as normal technology focuses our attention on the social processes of adopting this new technology rather than on speculation about the unknowable future it may bring.

I talk about how we focus on the invention of technologies and downplay the historical processes of innovation and diffusion. There are speed limits in each part of the cycle. Pay particular attention to diffusion, the peach-colored area.
Progress happens at the speed of humans and institutions. Electricity, the internal combustion engine, and the electronic computer all took decades to understand, to innovate with, and to commercialize. Those technologies didn't feel normal when they were invented; they became normal through social processes of adaptation and adoption. Generative AI will be no different.
This leads me to my favorite “law,” which is about AI but applies to so much more in software development and in life.
Think about all the boring tools we take for granted that are enabled by electrification: indoor lighting, toaster ovens, household machines that clean our dishes and our clothes...
All of that is based on the wild and crazy idea that we could generate electricity, the scary naturally occurring phenomenon we know as lightning, and direct it into our houses using wires.
Think about how dangerous and revolutionary that sounded to any normal person in 1880.

You can tell the story of electrification as a fight between big personalities, with George Westinghouse and Thomas Edison as the Sam Altman and Elon Musk of the 1890s. You can also tell it as a boring story of patent lawsuits and corporate mergers.
Here is my version, and the protagonists are not inventors, entrepreneurs, or corporate lawyers.
My heroes are the innovators and diffusers. The people who worked for those early electric companies, the people who tinkered with light bulbs and lighting systems in their workshop or backyard. The local planners and residents who worked to light their cities and towns. The regulators and government officials who made it work on a national scale. The engineers who decided where to build power plants and the clerks who figured out how to bill customers. The people who solved all the little challenges of stringing electrical wires across the continent and connecting them to homes and businesses.
They slowed things down by asking questions, proposing alternatives, and listening to each other. To be sure, there were mistakes and problems, fires and accidental deaths. It took years of innovation to make sure we could safely light city streets and send power directly into homes.
An analogous process is playing out today with autonomous vehicles. And the heroes are not CEOs who make false predictions in order to generate attention.
The story that runs from Google's first experiments with autonomous vehicles in 2009 to next year's boring ride to the airport in London or Atlanta in a Waymo features thousands of people working to solve millions of large and small problems.

Just like electrification, it has taken teams of engineers, designers, bureaucrats, planners, and consumers working together to figure out how to make this form of AI work.
An analogous revolution has been happening with information. We have gone from this:

To this:

To this:

In four decades.
We now talk with our data using AI models built out of nearly all the machine readable information in the world, some linear algebra, and computational algorithms powered by electricity. That’s a big deal, and one we are only beginning to understand.
The challenge is less about making effective chatbots and more about reengineering or reimagining organizational processes, what Paul Bascobert, president of Reuters, calls orchestration in this Decoder interview.
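A toy sketch can make the idea of orchestration concrete: several narrow components, each doing one job, chained into a workflow that ends with a person. Everything below is invented for illustration; it is not Reuters' system or any vendor's actual pipeline.

```python
# A toy sketch of orchestration: routing one piece of work through several
# purpose-built steps, with a human decision at the end. All names here
# are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    language: str = "unknown"
    relevant_passages: list = field(default_factory=list)

def detect_language(doc: Document) -> Document:
    # Stand-in for a small language-classification model.
    doc.language = "en" if doc.text.isascii() else "other"
    return doc

def retrieve_passages(doc: Document) -> Document:
    # Stand-in for a retrieval model like the one sketched earlier.
    doc.relevant_passages = [
        s for s in doc.text.split(".") if "regulation" in s.lower()
    ]
    return doc

def human_review(doc: Document) -> str:
    # The workflow ends with a person, not a model, making the call.
    return f"Send {len(doc.relevant_passages)} passage(s) ({doc.language}) to a reviewer."

# The orchestration itself is ordinary control flow: each model does one
# narrow job, and the sequence is the re-engineered workflow.
pipeline = [detect_language, retrieve_passages]
doc = Document("This regulation covers maintenance. Unrelated sentence.")
for step in pipeline:
    doc = step(doc)
print(human_review(doc))
```

The point of the sketch is that the hard part is designing the workflow and the handoffs, not any single model call.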
In closing, I talk about the contradictions in how I am teaching a course on AI this term. In order to learn about technology, we put technology aside. I invite my students to turn off all their electronic devices during class, while I require them to use AI and other digital technology to prepare for class.
I believe we should inhabit these contradictions and follow impulses that lead us to strengthen human social connections... put our screens away, and talk about what new technology means for ourselves and our society. Face-to-face conversation is not boring. It defines us as a species. Talking about the new tool, the latest thing, is thrilling.
I enjoy talking with people about how AI is changing higher education and what we should do about it.