11 Comments
Hollis Robbins

This is really excellent

Rob Nelson

Thanks, Hollis. It took me a while to figure out why I was resisting Weatherby's argument that structuralism is an answer, but I got there, and then it got interesting.

James Mustich

Truly. First, second, and third time through.

Rob Nelson

Thanks, James. Glad it spoke to you.

Jim Amos

Interesting intellectual conjecture aside, I don't think it's inaccurate to think of LLM outputs as alien language. Of course the words themselves originate with humans, but the way they are non-deterministically and mathematically determined in the transformer and arranged according to probability rather than actual meaning and intelligent purpose is definitely not human. It's the gaslighting false equivalence between these algorithms and human thought that I often find most jarring in the behavior of your typical AI apologist.

As for: "As language as a service grows, the work of coding, front-line customer service, and marketing becomes ensuring the quality of machine outputs and intervening when necessary." This is another stark reason why I oppose and assume the worst intent behind the corporate push for genAI: these new AI janitor and AI babysitter jobs are dehumanizing and deskilling. Where's the economic silver lining? I don't see it.

Rob Nelson

You and I are on the same page on the important stuff, so I hesitate to wrangle over the distinction you want to make between probability (nonhuman) and actual meaning and purpose (human). To my way of thinking, as soon as humans harness something natural, let's say the power of the sun, through techne, it crosses the fuzzy line between nonhuman and human, and becomes a human instrument, and its product is human-made. A solar panel is a human instrument, and the sun is not.

The transformer is weird because it is both techne and poiesis. That is worth investigating, no matter what distinctions we use to describe it. For me, language machines are complex human instruments that we don't quite understand. But the list of human instruments we don't quite understand is long!

James Evans, whose work I very much admire, is fond of calling this social technology alien, I think because we don't understand it. But we do understand probability and the stochastic machines that use it, at least to some degree! That is why I think it is better to say "new" technology rather than "alien" technology.

I am perfectly willing to agree to disagree with you both on the terminology, and turn our attention to making sure that as we use these things we do so in ways that lead to human flourishing.

lucille robbins

“And there were shepherds in the fields, tending their sheep “…

Tugomil Copcic - Tugi

That reminds me of a person I met as a young boy at a village party. He took the eye of a slaughtered animal, looked curiously through its retina, and invited me to do the same, arguing, "This is how animals see the world." Similarly, contemporary data classes often overlook the fact that a machine-generated interpretant requires a human interpretant to complete the process of semiosis.

In this relation, I invite you to read my latest article:

https://tugilevy.substack.com/p/does-ai-dream-of-plato

Rob Nelson

We are ploughing the same fields, it appears, though you are more ambitious in the ground you cover. Thanks for the pointer.

Rainbow Roxy

This article comes at the perfect time, really. It builds so well on your previous work on AI ethics. Your concept of remainder humanism is a brilliant way to cut through the noise in the human-machine debate. So insightful!

Rob Nelson

Thanks! Glad the piece spoke to you.
