Discussion about this post

TD

Really enjoyed this! Especially reframing outputs as “opportunities for skeptical analysis.” Speaking of outputs, I’ve been experimenting with something (Discourse Depot) that comes at this from a complementary angle: using LLMs as instruments for analyzing AI discourse itself. I'm working with students to write elaborate prompts that try to operationalize critical frameworks (metaphor analysis, explanation slippage), then running texts about generative AI through them. The outputs become data for examining how the "thing" gets framed — where "processes" becomes "understands," etc. Your point about confabulation as a human phenomenon reflected in outputs is spot on. Check out some of the metaphor/anthropomorphism/explanation audit outputs: https://xmousia.github.io/discoursedepot/

Bette A. Ludwig, PhD 🌱

Your point about students finding it a little easier to receive feedback from an LLM than from a person is spot on. I also think it's another reason a lot of people are more willing to believe ChatGPT over another person: it takes the emotion and the personal attachment out of it.

8 more comments...
