3 Comments
Kenny Fraser:

The first point about small is so important. Tbh I am not wholly convinced by small models, but small, focused experiments for sure. This is how to build AI for business as well - properly solve a real-world problem now and worry about scale later.

Rob Nelson:

Sundar Pichai has been saying AI is like electricity for a while now. That analogy helps explain why I think small, open-source, open-weights models are worth experimenting with in educational contexts. Noah Smith likes to talk about the lesson the electrification of factories holds for AI. Factories initially swapped out their steam engine, with its central drive shaft, for a single large electric motor, which ended up being more expensive than steam. It wasn't until they figured out that electric outlets could be placed anywhere and everywhere on the factory floor, with small motors distributed to individual workstations, that electrifying factories paid off.

Something similar may be true of AI models. Maybe a giant simulation of Socrates is the wrong approach. If LLMs end up powering educational simulations and games where the goal is not an intelligence that surpasses the student's, but a kind of limited problem-creator for the student to interact with in a constrained, goal-specific experience, then smaller, under-powered AI models may be just the ticket.

At any rate, I think that hypothesis is worth exploring.

Kenny Fraser:

Definitely - it's unproven in my mind; not certain whether it will work out or not.
