10 Comments
Sheila Hayman

So very cheering to read a simple and lucid account of how to deploy this tech in a positive way. Having been privileged to find spelling and grammar enjoyable and easy, I never suffered from the shame of the red pen, but I've learned something about that, too. Thank you.

Rob Nelson

Thanks, Sheila. Glad the piece spoke to you. "Lucid" is especially nice to hear, as it describes what I've been aiming for in this piece and last week's about spiritual tools.

James Mustich

I admired both of the last two pieces; more in this vein would be appreciated in this quarter.

Rob Nelson

Thanks, James. It was fun to follow up the book review essay with something looser, so glad it hit the mark for you!

Bette A. Ludwig, PhD 🌱

Your point about students being able to receive feedback a little more easily from an LLM than from a person is spot on. I also think it's another reason a lot of people are more willing to believe ChatGPT over another person: it takes the emotion and the personal attachment out of it.

Rob Nelson

"Personal attachment" is a good phrase. Consumer tech is desperate for us to form attachments to these models. The more we can resist this and understand LLMs as tools and interfaces to data stores, the better we'll use them.

Bette A. Ludwig, PhD 🌱

You're absolutely right. The more attachment we form to them, the harder it is for us to do without them, which means later on the companies will really be able to raise the cost. If people think they'll be paying $20/month forever, they're sadly mistaken. These prices are going to start going up.

TD

Really enjoyed this! Especially reframing outputs as "opportunities for skeptical analysis." Speaking of outputs, I've been experimenting with something (Discourse Depot) that comes at this from a complementary angle: using LLMs as instruments for analyzing AI discourse itself. I work with students to write elaborate prompts that try to operationalize critical frameworks (metaphor analysis, explanation slippage), then run texts about generative AI through them. The outputs become data for examining how the "thing" gets framed, where "processes" becomes "understands," etc. Your point about confabulation as a human phenomenon reflected in outputs is spot on. Check out some of the outputs from the metaphor/anthropomorphism/explanation audits: https://xmousia.github.io/discoursedepot/

Rob Nelson

Great stuff at the Discourse Depot. Thanks for sharing it.

The best thing about writing about this stuff online is discovering how many educators and writers are working the actual problems, even as the discourse frames everything in terms of superintelligence and stock prices. I keep seeing signs of an inflection point, as "normal technology" and "cultural technology" break the surface, but then, the moment passes, and I think it's just my wishful thinking.

I hope we can move from a "pedagogy of disappointment" with the outputs of language machines to a pedagogy of excitement, as in "Wow! This is really interesting. What can we do with it that leads to human growth and better institutions?"

TD

Yes! And that seems to have some relationship to "expectations." I've found that the most interesting parts of the outputs at Discourse Depot come from getting the prompt to operationalize Brown, R. (1963), Explanation and Prediction in Social Science, surfacing the very subtle slippage between the mechanistic "how" and the intentional "why" explanation, which often happens mid-sentence in the discourse.
