Small matters
Logpodge #9: When it comes to AI in education, models, teams, and ambitions should be small

Logpodge #9 is all about small. When it comes to AI, everyone thinks large. Large AI workshops, online because no one comes to campus anymore. Large AI models, because everyone knows they are the smartest. Large AI grant programs, because it is easier to give someone a few thousand bucks than connect them to people who can work with them to do something interesting with AI.
Sweating the small stuff
I talked last week with Abby Sourwine, writing for the Center for Digital Education, about the importance of small, incremental projects that bring together technology specialists and experienced classroom teachers to explore the potential educational value of generative AI. I was grateful that her article highlighted the role that instructional designers, in particular, can play in these sorts of collaborations. It's complicated and hard to organize cross-functional teams to solve actual educational problems, but small really is the way to get something done.
One aspect of small that we talked about, but that Abby was not able to write about in depth, is small in the sense of using small, open-source, open-weight AI models rather than the large, resource-intensive AI models built by big tech. In the article, Abby shared a preprint that I sent her way that describes what I see as a seriously under-traveled but promising path for developing transformer-based deep learning neural networks for educational use: smaller teams using smaller models.
We are so caught up in the speculation and hype surrounding the latest and largest AI models that we ignore the likelihood that there are real educational problems that can be solved by older and smaller models. As Ethan Mollick is fond of saying, "Today's AI is the worst AI you will ever use." True enough, but it is also the case that today's AI (or even yesterday's AI) can be quite useful while being less expensive and less problematic.
There are companies offering access to AI models that take a responsible approach to AI. I don't mean they talk about responsible AI on their website. I mean companies that will explain how their models work, are careful about what data is used, and test their outputs carefully before deploying them. Companies that do not believe that the chance for giant profits tomorrow means it is okay to engage in bad behavior today.
Anthropic and OpenAI are attempting to capture the higher education market by giving their product away to college students and funding all manner of campus experimentation using their models. Of course, this "generosity" crowds out experimentation with other kinds of AI tools, the kind that don't involve massive amounts of energy or the appropriation of culture without compensating the writers and artists who make it.
The California State University system is the poster child for this narrow effort because it signed an enterprise agreement with OpenAI earlier this year, paying just under $17 million for system-wide access to ChatGPT. In April, CSU put out a call for proposals to their faculty "to leverage emerging AI technologies to enhance critical thinking skills, lead innovation in a variety of disciplines and promote ethical and responsible use of AI." Earlier this week they announced awards from this grant program totaling $3 million.
While the proposal descriptions are too vague to determine what tools the winning proposals will use, many mention ChatGPT. That's no surprise given that the call pointed applicants to the Big Tech AI assistants available through the CSU AI Commons. So, lots of small, unsupported projects that use a narrow selection of AI chatbots. This is the sort of experimentation Silicon Valley wants, not the sort of experimentation higher education needs.
Sourwine's article mentions Babson College's interdisciplinary AI lab, The Generator, which strikes me as a better approach. The website and promotional material for The Generator are as shot through with enthusiasm for AI as any CSU press release, but Babson's approach starts with interdisciplinary teams working in clusters. The Generator's programs create opportunities for faculty and administrators to learn together and develop projects that get supported with more than just a budget code. They also involve students directly in their programs, fostering opportunities for communication between teachers and students about AI outside the classroom. This essay by Kristi Girdharry, director of the writing center, will give you a sense of what's happening at Babson.

There is a vision here that feels more like an invitation to explore AI than an obligation to listen to someone talk about it. That's missing from CSU's approach of "Here's access to ChatGPT. Here are some videos and webinars. Here's an application asking you to spend time you don't have on a proposal that probably won't be funded." CSU had over 400 proposals submitted for their grant program, and they funded 63. "There were many deserving project proposals. Unfortunately, we could only fund…"1
Welcome to AI-Empowered CSU.
I am looking to write more about campus experiments with small, open models. If you know of a classroom or a campus where there are people working with small, open-weight and/or open-source models, please get in touch!
Small for the office
It isn't just the classroom. Small experiments with small models should happen in academic administration, too. I explore this idea when I talk about the boring revolution. Most of the experimentation going on in higher education administration is with Microsoft Copilot. And sure, yes, a better Clippy is maybe a good thing. Trying to get an Excel spreadsheet to give you something useful for a PowerPoint slide is no easy task. But beyond helping each of us with our own little individual problems with digital tools, there is the large problem of making these tools more usable by groups of people. Maybe Copilot will eventually pull this off, making Teams a little teamier. So by all means, experiment!
But as JP Morgan and other big companies build large language models for their own internal purposes, I would love to see universities experiment with purpose-built smaller models for administrative tasks. I have written and talked about companies like CourseLeaf and Explorance that were working on AI back when we called it machine learning. It seems to me that some experiments, either in partnership with a company with a strong track record or by spinning up a small model run by campus IT, might yield insights with less risk and at lower cost. A rough sketch of what that second path could look like follows below.
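To make "spinning up a small model" concrete, here is a minimal sketch, assuming the Hugging Face transformers library and a small, open-weight, instruction-tuned model. The model name and the catalog-copy task are illustrative assumptions on my part, not a recommendation or a description of any campus's actual setup:

```python
# A minimal sketch of running a small, open-weight model on local hardware.
# Assumes `pip install transformers torch`. The model name is illustrative;
# any small, instruction-tuned, open-weight model would work the same way.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # ~1.5B parameters; fits on one modest GPU
)

# A hypothetical administrative task: drafting plain-language catalog copy.
prompt = (
    "Rewrite the following course description in plain language for "
    "prospective students: Intro to Statistics covers descriptive and "
    "inferential methods, probability, and regression."
)

result = generator(prompt, max_new_tokens=120)
print(result[0]["generated_text"])
```

Nothing here requires an enterprise agreement or a grant program. The point is that a contained, low-stakes experiment of this sort is within reach of a campus IT shop, and the data never has to leave the building.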
The last twenty years have created a sense that everyone has to choose Microsoft or Google for their digital work tools. Maybe think about the risks and put some eggs in other baskets?
Here is a clip from a talk I gave last month about one of my favorite examples of a purpose-built large language model. Neutron (formerly named Neutron Enterprise), developed by Atomic Canyon, is in use at the Diablo Canyon Nuclear Power Plant. I learned about it from this article by Alex Shultz for The Markup.

AI Log reviews books
Book review essays are among my most popular posts. Here are links to all of them on one handy page. I have two review essays in the works.
Adam R. Nelson, fresh off completing his two-volume history of American higher education, somehow also wrote a compelling history of every historian of higher education and contract lawyer's favorite Supreme Court case. Dartmouth College v. Woodward: Colleges, Corporations, and the Common Good, just out from the University Press of Kansas, helps illuminate the historical context for battles now playing out between state legislators and their state university systems, to say nothing of the Trump regime's attacks on elite universities.
Nicholas Carr's Superbloom rescues Charles Cooley from undeserved obscurity. Cooley coined the term social media in 1897 and was perhaps the first great academic social scientist to write about communication technology. Carr explores how Cooley anticipates the work of Harold Innis and Marshall McLuhan and usefully places him in a tradition that includes John Dewey, George Herbert Mead, and Erving Goffman. Carr misses what I think is most important about these writers, all of whom took up William James's concept of "the social self." Writing about Superbloom is helping me think about large AI models as the next phase of the digital economy and its emerging social structures.
Subscribe to receive these book reviews and other essays directly in your email inbox.
Writers return
Dave Karpf is back from a self-imposed blogging hiatus, during which he finished a book. Yay for the book! You can read about his exploration of WIRED magazine's back catalog here. I cannot wait to get my hands on it.
Karpf is one of the funniest voices writing about the aging boy wonders of Silicon Valley. Given the current state of national politics, mockery of the less gentle sort is the most potent weapon we have in our arsenal, the funnier the better. Check out The Future, Now and Then.
Henry Farrell's Programmable Mutter was a bit quiet during the early summer, but is now back to its regular output of thoughtful analysis of the political and cultural implications of large AI models. Farrell completed a review essay for the Annual Review of Political Science on AI as Governance, which is absolutely essential reading if you want to understand what AI means for politics. His review of Leif Weatherby's new book Language Machines: Cultural AI and the End of Remainder Humanism suggests there is a growing interest in AI as a cultural and social technology.
For those of us constructing classes about AI, Farrell published a reading list on The Political Economy of AI that is simply marvelous. I found it very useful as I prep a section of my class, How AI is Changing Higher Education (and what we should do about it), as a seminar for a group of incoming first-year undergraduates. As a way of encouraging them to think about (or rethink) their intended majors, I will be asking them to write about what AI looks like from different Arts & Sciences disciplines. I am on the hunt for syllabi or reading lists for courses that reflect this kind of thinking from other disciplinary perspectives.
If you know of any good liberal arts classes on AI with a reading list online, please share in the comments!
AI Log, LLC. Β©2025 All rights reserved.
I left my full-time job last year to talk with people about what AI means for education. Find out more about these talks and how we might arrange one; booking a talk is the best way to support the writing I do here.
These essays are available at no charge. Subscribing simply means you will receive future essays in your email inbox.
1. I know it is unfair to compare the approach of a 23-campus system to a small college, but my point is that CSU's central administration is doing little more than shoveling money into the furnace that is artificial intelligence. The idea that AI can be implemented "at scale" across the entire system is more about the appearance of doing something about this AI thing than actually doing something to support faculty and students who could genuinely use some help.