Harry Shearer

Intelligence, Not the CIA Kind...

Just to establish my lack of credentials, I didn’t major in psychology, or computer science, or any of the other disciplines in which today’s AI promoters (and skeptics) specialized. I’m writing this purely as an interested observer, interested in intelligence, natural (which I have some of) and artificial (about which I’m dubious).

First things first, which is where I think the field of AI should have started. What other science began studying life-related phenomena by starting at the top of the animal pyramid, i.e., humans? Don't most medical researchers begin their work by studying smaller creatures that may share some physical similarity with humans? Sure, part of the reason is that they can kill and/or torture those creatures, which might be more problematic with human subjects. But there are many species, and we're continually discovering more, that exhibit some form of what we call intelligence: octopi, corvids, elephants, dolphins, apes. Wouldn't it be a good idea to study them and how their intelligence works first? (Maybe Sam Altman is doing this in his garage, but…)


Secondly, I’d suggest that, at least in humans, a couple of things come before the intelligence machinery starts up. One is perception, the ability to see, hear, feel, and smell things. The other is proprioception, the ability to sense one’s own body. Human intelligence, I’d suggest, is a structure that emerges gradually (sometimes taking decades) atop these building blocks. When “world models” are created in an AI environment, they can describe what can be seen and heard, and maybe smelled and felt. But, unless I’m seriously misinformed, AI models can’t experience those perceptions directly. Depending on these machines for anything but mathematical operations, or certain money-saving uses in the film industry, means one is being advised by a technological Helen Keller. But at least she could detect odors.

Finally, no matter how you slice it, machines—even highly advanced ones—are trained. Millions of data points, thousands of prompts, trained. Humans, on the other hand, eventually encounter training—schools, work, maybe marital counseling—but, again, the first floor of the human mentality is experience. One of the many virtues experience has, which training lacks, is the ability to imagine situations in which the experience might have differed, which of course would lead to a different reaction. That ability to imagine alternate experiences is very different from training linked to engineers' imaginations. And, while every human has a different experience, no training set is tuned to an individual machine. The eventual result is the narrowing of human life.

But, as I say, what the hell do I know?
