Reader note: A few kind readers have helpfully pointed out that I could generate subscribers and likes in greater numbers were I to post these shorter pieces individually.
And they are right! Posting often makes the numbers go up on Substack. Plus, seeing “10 min read” on a piece discourages some readers from clicking on it. I appreciate the advice, and it prompts me to clarify why I collect shorter pieces into what I call Logpodges and limit myself to posting weekly.
Most forms of digital social media encourage habits of surfing for short bursts of insight and analysis. I’m aiming for an audience and world that wants to spend the time to read long form writing.
That sounds all highfalutin, I know, but I figure a reading audience is more likely to buy books. Since I read and review books and am writing one, this approach makes sense to me. I know this swims against the currents around here, but let’s call it product differentiation.
I’ll explore my thinking on this a bit more in an upcoming review of John Warner’s More Than Words. A book! You should buy it!
Gift horses
Earlier this week, OpenAI announced “a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education.” As with last month’s press release announcing its partnership with California State University, I see this as an act of desperation by a company that has yet to figure out a profitable strategy for selling what it hopes is an educational technology. If it does turn out to be one (and, boy howdy, there are a lot of people hyping that idea), OpenAI wants to sell its product directly to institutions.
This is the context for its announcement last May of ChatGPT Edu, “an affordable offering for universities to responsibly bring AI to campus.” It was a pitch to higher education to buy institution-wide licenses so everyone could use OpenAI’s magic educational tool. Back then, Wharton bought licenses for its faculty and MBA students, and then Arizona State University announced it would provision a limited number of OpenAI licenses to faculty and staff through a grants competition. No one mentioned any dollars changing hands in the press releases. Then, for seven months, nothing happened. Nobody bought ChatGPT Edu.
OpenAI raised $6.6 billion last October, and it decided to spend money to make money in the education market. The one-two punch of signing CSU as a customer at a deep discount, followed by a $50 million gift to some big-name research institutions, is expensive advertising.
For the institutions involved, the difference is significant. The CSU system may have gotten a deal on pricing—my back-of-napkin calculation suggests they are paying per year what regular customers are paying per month—but that assumes people actually use the product for something related to education. Worse, they now have costs associated with provisioning the accounts and teaching people how to use the tools responsibly. As I have said before, I hope journalists covering this story and faculty governance groups insist on getting data about the actual use of the products. How bad a deal this was for CSU depends on those costs and the renewal terms.
As far as I can tell, the “consortium” of 15 institutions that just issued joint press releases with OpenAI didn’t commit to much of anything except taking the money and saying thanks. On the face of it, NextGenAI is just a brand name for 15 donations. I doubt the term consortium means more than the list of institutions that agreed to accept the gift. They probably just needed a better word than recipients or beneficiaries.
In addition to the gift, the universities get to signal to the AI enthusiasts among their board members and alumni that they are making solid progress on this whole AI thing. For them, the deal is free use of OpenAI’s experimental technology, a few million dollars to offset the costs of some experiments, and a little press-release theater. Seems like a good deal.
Was a $50 million press release a good deal for OpenAI? I don’t know, but compared to blowing its money on a Super Bowl ad and a new logo, it’s nice to see the cash Altman and team are incinerating at an impressive rate go to a worthy cause. Research universities can use every dollar these days.
One more note: in addition to press releases, OpenAI bought access to 15 campuses. The consortium members should keep in mind that OpenAI’s products may not be around in a year or two. They should look that gift horse in the mouth before deciding to harness it to anything important. That goes for faculty deciding to use those products in their classrooms and research projects, too.
Speaking of horses, I feel the need to continue to beat the horse-hide drum that educational institutions should avoid getting swept up in the hype about AI. Investors counting on the AI revolution had a rough week. Given the increasing chance of a recession and the underwhelming release of GPT-4.5, the timetable for anyone getting their money back seems as clear as the timetable for AGI. That, as I like to say, is Silicon Valley’s problem. Higher education has enough problems of its own.
One last horse-related note: OpenAI’s generosity is born of self-interest. In addition to headlines and nice quotes from vice presidents and vice chancellors, the company bought access to people on campus. When it tries to leverage the relationships it purchased to bring the sales team to campus, those 15 institutions should be prepared. Accept the gift horse, but keep it outside the campus gates.
AI Fight Club—History Edition
Last week, I read two fabulous essays by the historians Cate Denial and Benjamin Breen about their use of AI.
In Why I’m Saying No to Generative AI, published on The Important Work, Cate Denial describes how she arrived at her decision to ban the use of AI in her classroom.
My own ethical wrangling with the material and environmental costs of generative AI, and the harm it causes to other humans, continued. I now feel comfortable banning students from using generative AI for class, not because I distrust them, but because there is no value-add to using generative AI for historical work.
This is not my approach to AI, as you can read in my series What is an LLM doing in my classroom?, where I describe how my students and I explored the educational value of generative AI in my history class in the fall term. Denial and I arrive at our different approaches from a similar set of values about engaging students in determining rules and norms for our classes. For example, she says, “I had my students write their own generative AI policies, to which I would hold them for the rest of term.” I asked my students to work with me to determine a shared set of norms for the use of AI that we would revisit over the term. Interestingly, the outcomes of this engagement with students were complete opposites: her students “rejected AI,” while mine embraced it with the goal of exploring its limits and determining what, if any, value it might offer us.
Denial’s experience led her to prohibit the use of AI in future classes. My students’ experience led them to recommend that, when I teach the class again, the entire class should agree to limit the use of all digital technology during class. In other words, they think I should allow the use of AI and any other digital technology while preparing for class, but that the students and I should use only older edtech like paper, pens, chalk, and slate (we agreed that color markers and whiteboards count as old tech) for our structured class activities. In the final essay in the series, to be published later this month, I will write about why I’m strongly considering this idea.
My philosophy of teaching, to the extent that I have one, is that there is no one right answer, no single best approach to teaching. The worst feature of contemporary educational research is the fight clubs that develop around specific “evidence-based” approaches, where a single best method is “proven” to be more effective than the alternatives. The initial wave of research about the use of AI in the classroom will, I’m afraid, inspire the founding of AI-in-education fight club chapters on campuses and in school systems everywhere.
That’s why I loved Denial’s closing paragraph:
My position may change again in the future. Teaching is a wonderfully iterative process, and as new information becomes available it makes new decisions possible. Perhaps the accuracy of transcription software will increase; perhaps funding for the digitization of sources will do likewise. What will remain the same is my commitment to supporting students in becoming excellent and ethical historians, whatever the tools—pencil, manuscript, tapestry, pottery shard, oral tradition, painting, laptop—we have at hand.
Exactly. I have decided that LLMs should be included on the list of tools as long as the goal is to use them well. Whether and how they can be used well is a question we can’t yet answer. And even when we think we have figured it out, we should stay open to other possibilities and approaches.
On Res Obscura, Ben Breen explores how generative AI might open the door to new kinds of educational experiences grounded in historical research. His writing over the past two years about developing simulations has been eye-opening for me. Lately, Breen has been a bit more skeptical of generative AI as he, like all of us, has begun to confront what it means when students use LLMs to replace their own thinking and writing.
In AI legibility, physical archives, and the future of research, Breen helps clarify the limits of the latest deep research models, which are quite good at generating lumps of text that look like literature reviews produced by graduate students. The usual critique is that these outputs contain false information, what I prefer to call confabulations, and that we shouldn’t trust them. That’s true, but it is more an argument for double-checking their outputs carefully than for not using them at all. It also misses a more fundamental limit of LLM outputs. Generative AI models are summarizing and synthesizing machines that work over massive amounts of digital data. They do not understand social context and human perspective because they have no way to access them.
Humans can understand and inhabit multiple perspectives as they think and write. They construct context from their own lived experience. Part of human consciousness is to reflect on and imagine the consciousness of other humans. This is an essential difference: LLMs lack the ability to understand a question or experience from different points of view, though they can, if prompted, generate a simulacrum of differing perspectives. Here’s Breen:
The issue is that generative AI systems don’t want messy perspective jumps. They want the median, the average, the most widely-approved of viewpoint on an issue, a kind of soft-focus perspective that is the exact opposite of how a good historian should be thinking.
To be in an archive is to be confronted with the contradictions, controversies, secrets, and unspoken facts of a person’s life or of a time and place.
To encounter these things via a tool like Deep Research is a surreal experience, because you get an approximation of the historical reality, but with all the illegible data smoothed away.
In fact, it isn’t even being excluded: it just isn’t being noticed.
As Alison Gopnik says, generative AI is a cultural technology in that it “allows individuals to take advantage of collective knowledge, skills, and information accumulated through human history.” But these machines cannot meaningfully reconstruct human experience out of an archive. Even when an LLM synthesizes and summarizes data, there is a significant limitation. As Breen points out, only a small portion of the data we have about the past is online, so most of history is inaccessible to LLMs. And even as the digital portion of the archived past grows, understanding its variety of meanings is not something LLMs can do for us.
Making the case to students that this work is worth doing is the purpose of teaching history. How we structure our classes and guide our students in how, or even whether, to use LLMs in this work should be up to us and our students. That’s why much of my own historical work these days focuses on writers like William James, Anna Julia Cooper, and John Dewey, who took pluralism as the basis of their educational philosophy.
I share an interest in James with Breen, who is writing a book about William James and the machine age “because James, more than anyone, I think, recognized both the irreducibility of even scientific knowledge to a single set of data or axioms, and also understood that consciousness is rooted not just in abstract reasoning but in a physicality, a sensation of being a person in the world.”
A greater awareness of the limits of what a “single set of data or axioms” can tell us about how to teach, and a greater appreciation of consciousness as the experience of being in the world, would go a long way toward avoiding many of the fights we seem to be having over the use of AI in education.