Pluralism in practice
Teaching with and without AI
Last week, I published a review essay about Michael Pollan’s new book, A World Appears: A Journey into Consciousness, arguing for pluralism when it comes to thinking about consciousness in humans and machines.
Here I argue for pluralism when it comes to teaching with and about large language models. These are remarks prepared for a panel titled “The Future of AI and What it Means for Higher Education” at the capstone event for the American Association of Colleges and Universities’ 2025-2026 Institute on AI, Pedagogy, and the Curriculum. Applications for next year’s Institute are now open.
I’m grateful to Bryan Alexander, our moderator, and the other participants on the panel for their ideas as we prepared. The panel was fabulous, with a lively discussion happening in chat alongside the conversation on screen.

If you think it’s important to teach students how to use AI, that’s great. Go for it. If you want to teach students how not to use it, I think that’s great too. Again, go for it. Students are well served by a diversity of approaches to using AI (or not!). Encouraging discussion among those with different ideas is a better institutional approach to the intrusion of technologies we call artificial intelligence than mandatory AI literacy workshops and one-size-fits-none policies.
This plea for pluralism comes from a sense that battles between enthusiasts and critics of AI are not that important relative to larger problems. I’m talking about the sense that something is off in the functioning of institutions of higher education, and has been for a while now… Since COVID? Since iPhones? Since I was a student and life was good? The intrusions of AI intensify these feelings, providing a focus for unease or outrage. But most teachers and students feel overwhelmed, not by AI, but by all of it: the grind of the end of term as classroom joys fade into grading and being graded; the headlines describing how systems of higher education are under attack from the governments that have created and funded them; the depressing emails from administrators about belt-tightening.
Nothing I say here is meant to undermine a sense of solidarity in the face of external threats. Those who work in higher education and believe in its value should unify around academic freedom, public support for research, and the safety and success of students. But the need for unity should not extend to whether and how teachers use the technology.
Uniformity of practice is not possible with the tools we use to teach because what we do as scholars and teachers is so various, and so is what AI technologies offer. The choices we make as educators about AI lead to conflict because AI is contentious. Some of this conflict is moral: objections to technology so transparently aimed at replacing humans. Some is political: objections to building data centers or the disregard for the rights of working artists and writers. Some is educational: objections to asking students to use these tools because of their potential harms or worries about deskilling.
Those conflicts are compounded by methodological diversity. Each of us teaches within different disciplines, sharing knowledge and methods of knowledge-making with our students. Scientists I know speak gratefully about how easy it is now to write up lab results. Writing teachers I know are despondent over how easy it is now to generate a first draft. Put these folks in a room together and sparks fly. Those examples cross disciplines, but conflicts within departments and programs are often more intense; they are also where the important work happens.
I may view reading and writing as fundamental to my history seminar and so ban the use of AI models for any purpose. My colleague down the hall may use AI models to develop simulations and games in an effort to make history feel participatory and alive. Those different practices will conflict when we meet to make curricular and educational decisions for our department and programs. These are not fights to be won; they are differences to be valued and worked through within each department and program.
I don’t mean to dismiss the need for some institutional coordination. But beyond the evergreen goal of getting teachers to be transparent about their expectations for students and establishing clear institutional guidelines to protect student data, there is little that institutions must do about AI. The conflict and confusion that come from teachers and scholars finding their way with or without AI are fundamental to free inquiry.
The pluralist approach to AI means that figuring out when and how to use AI models is embedded in the collaborative work of scholarship and teaching. This is as it should be, but it will likely fail in two ways.
First, the goal of managerial efficiency will combine with delusions that technology will reduce costs. Such delusions are fundamental to modern educational bureaucracies, even though spending on technology never saves institutions money, and will not in the case of AI. Such hopes are entangled with a trained incapacity, what Thorstein Veblen called “an habitual, and conventionally righteous disregard of other than pecuniary considerations.” Those managing colleges and universities are unable to understand why replacing teaching assistants and contingent faculty with AI products cannot possibly work when their spreadsheets say that it will.
Second, the difficult work of updating departmental curricula will lead departments and programs to ignore AI, leaving it to each individual teacher to figure out. Many of these teachers are contingent faculty and so will have to muddle through on their own, with little or no support. AI is just one more straw placed on the back of those who carry the weight of teaching without extra support or resources to update their practice.
I wish there were easy answers to mitigating those two points of failure, but AI is a complication that arrives within a cluster of complications. If there is one idea that should unite faculty, it is to resist the demand to automate teaching…not to save jobs, but to protect the essential human purposes of learning.
Let me close with one last thought on the value of pluralism. It has to do with fallibilism, the idea that what we know is never certain and should be revised in light of new evidence. Fallibilism pairs well with pluralism as a guide to action. Bringing both values to bear on questions of curriculum and AI will help set the stage for collaboration and productive disagreement about using (and not using!) these weird new language machines to teach.
Nonacademic higher learning?
I’ve written before about Slime Mold Time Mold and other non-institutional forms of higher learning. Here is a new effort that has me intrigued.
AI Log, LLC. ©2026 All rights reserved.