Stop talking about Eliza

Please, I cannot read another solemn think piece on ELIZA the psychotherapist chatbot from the '60s, and its profound yet overlooked implications for human-machine interaction in the age of ubiquitous AI. I cannot hear about it on your podcast. I cannot watch it in your documentary. I am asking you once again to spare the public your dark musings on the lesson of ELIZA.

Joseph Weizenbaum’s ELIZA chatbot actually does have an important lesson to teach us about artificial intelligence, but it’s the opposite of what everyone seems to think it is. The world has ELIZA precisely backward, so I would like to set the record straight, and then I would like to never ever hear about this chatbot again.

The history knower has entered the chat

ELIZA is the cautionary tale about how “AI” is all smoke and mirrors. It’s the go-to parable for any commentator on AI who wants to stand apart as that lone voice crying in the wilderness, warning the mob of its error. ELIZA is AI’s tulip mania — i.e., it’s the story you invoke amidst all the excitement to demonstrate that you know you some history, and as a history knower, it grieves you to watch the non-history-knowers in the mob rush headlong to repeat the history that, again, you know and they do not.

Yes, ELIZA is definitely a part of the history of AI, but the problem is that this ancient chatbot represents an evolutionary dead end that just isn't very relevant to what has actually been happening in machine learning since about 2017.

It’s like this: What if one corner of the discourse around cancer treatment was dominated by a group of skeptics whose signature move is to bring up the traveling medicine shows of the Old West? “Sure, these doctors at MD Anderson claim they can shrink that tumor with their machines and their pills, but settle in as I share with you the cautionary tale of Clark Stanley’s Snake Oil Liniment...”

Just like modern chemotherapy is a totally different kind of treatment than snake oil liniment (whatever that is), a modern deep learning network is a different kind of software than an old-school NLP chatbot that uses a list of keywords to do pattern matching and substitution.
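If you've never seen what "a list of keywords to do pattern matching and substitution" actually looks like, here is a minimal Python sketch of the technique. The rules and canned replies are invented for illustration (Weizenbaum's actual DOCTOR script was far more elaborate), but the mechanism is the same: spot a keyword, flip the pronouns, and pour the user's own words back into a template.

```python
import re

# Toy rules in the spirit of ELIZA's DOCTOR script: a keyword pattern
# plus a canned response template. These rules are invented for illustration.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Simple pronoun reflection so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I need a vacation"))  # -> Why do you need a vacation?
print(respond("I need a vacation"))  # -> identical output every single time
```

Notice that there isn't a single probability anywhere in that program. The same input produces the same reply every time, which is exactly the property the next section turns on.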

This isn’t the place to do a deep dive into neural networks, but if you can vaguely grasp the difference between deterministic and stochastic then you can understand the difference between ELIZA and ChatGPT.

In computer science, a deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently.
Stochastic… refers to the property of being well described by a random probability distribution… In artificial intelligence, stochastic programs work by using probabilistic methods to solve problems, as in simulated annealing, stochastic neural networks, stochastic optimization, genetic algorithms, and genetic programming. A problem itself may be stochastic as well, as in planning under uncertainty.
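If you'd rather see that contrast in code than in dictionary prose, here is a toy Python version of each: a deterministic canned-reply lookup in the ELIZA mold, next to a function that samples its next word from a probability distribution, which is roughly what an LLM does each time it picks a token. The vocabulary and weights below are made up for illustration; a real model computes a fresh distribution over tens of thousands of tokens at every step.

```python
import random

# Deterministic: a fixed lookup table. Same input -> same output, every run.
CANNED = {"hello": "Hi there.", "bye": "Goodbye."}

def deterministic_reply(prompt: str) -> str:
    return CANNED.get(prompt.lower(), "Please go on.")

# Stochastic: sample the next word from a probability distribution,
# loosely analogous to an LLM sampling its next token.
VOCAB = ["dog", "cat", "weather", "stock market"]
WEIGHTS = [0.5, 0.3, 0.15, 0.05]  # invented probabilities for illustration

def stochastic_next_word() -> str:
    return random.choices(VOCAB, weights=WEIGHTS, k=1)[0]

print(deterministic_reply("hello"))                 # always "Hi there."
print([stochastic_next_word() for _ in range(5)])   # varies from run to run
```

That single difference, a table lookup versus a draw from a learned distribution, is the gap the ELIZA analogy papers over.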

Not only are large language models (LLMs) like ChatGPT fundamentally different from older NLP programs by virtue of the former's stochasticity, but at this point in history, they're built on multiple fundamental innovations that separate them from the primitive neural nets that were being investigated at the time ELIZA was built. We've had backpropagation, deep learning, and transformers each shake up the field and drive significant advances in our ability to make this peculiar class of software perform feats that are so magical we often find it impossible to produce a satisfactory explanation of how they did it.
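If you want a feel for what one of those innovations actually computes, here is a toy numpy version of the scaled dot-product attention step at the heart of transformers. It's a sketch of the core arithmetic only, run on random stand-in matrices; it leaves out everything (learned projections, multiple heads, the training loop driven by backpropagation) that makes real models work.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: each position blends information
    from every position, weighted by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V                  # weighted blend of the value vectors

# Stand-in data: 4 token positions, 8-dimensional vectors (made-up numbers).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per position
```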

But if you can't distinguish between deterministic computer programs and stochastic neural networks, then I guess you're doomed to be one of those people who are constantly bringing up ELIZA in conversations about large language models.

What ELIZA is actually good for

I actually don’t think ELIZA is totally irrelevant for thinking about AI. There are two areas where I find it useful.

First and least interesting is that ELIZA works as a shibboleth or first-order filter — an indicator that the person warning you about the deceptive, untrustworthy nature of those fraudulent LLMs probably has a very superficial grasp of the issues. However, this isn’t a 100 percent accurate filter because ELIZA is so popular that even the very savvy are prone to invoke it in passing.

But the most significant implication of ELIZA for modern machine learning is that it should humble its invokers into silence on the question of whether what LLMs are doing can be meaningfully referred to as “intelligent.”

This probably seems counterintuitive because ELIZA’s primary function in computing lore is as an illustration of the way humans will ascribe intelligence to systems that are manifestly not intelligent, but that merely behave in certain ways. And it’s true — humans will do this. But humans are pragmatic functionalists about intelligence — they attribute intelligent, conscious, directed behavior to a range of stochastic processes, from weather to slot machines to chatbots — because humanity lacks a sophisticated explanation for consciousness. And because humanity lacks even a minimally satisfactory model of how consciousness arises from matter, we can’t say with any confidence which complex configurations of matter are and are not conscious.

To put it differently, if I’m going to say that two things are alike by virtue of how they work on the inside, then I first need to know at a reasonable level of abstraction how both things work on the inside. We have a much better picture of how LLMs work on the inside than we do of how conscious minds work on the inside, but even for LLMs, we’re still learning how they work at different levels of abstraction.

The result is that when we’re comparing LLMs and human minds, we can’t say whether or not minds and LLMs are doing the same things (or closely analogous things) at some level of abstraction that’s invisible to us in one or both of those stochastic phenomena. Here are the possible combinations:

  • My brain could be doing something LLM-like at some level we’ve not discovered in the brain.
  • An LLM could be doing something brain-like at some level we’ve not discovered in the brain.
  • My brain could be doing something LLM-like at some level we’ve not discovered in LLMs.
  • An LLM could be doing something brain-like at some level we’ve not discovered in LLMs.
  • Brains and LLMs could be doing something alike on levels we’ve not discovered in both brains and LLMs.
  • Some combination of two or more of the above.

I suspect the above options could be ranked in order of relative likelihood by someone who knows more about both brains and LLMs than I do. That said, given that we keep learning new things about both brains and LLMs, I have to imagine that all of them are on the table to one degree or another.

Ultimately, the lesson here is that we humans are so utterly bereft of any working, practical understanding of the mechanics of consciousness or rationality that we’re forced to cast a really wide net and categorize way too many stochastic processes as “conscious” by default. And because neither you nor I have a sufficiently detailed understanding of what it means to be conscious, neither of us can say that we’re definitely not doing whatever it is LLMs are doing in some part of our brains at some level of abstraction.

Humanity’s pragmatic functionalism around consciousness — if it quacks like a mind, maybe it’s a mind — makes good evolutionary sense, by the way. It seems better to misfire by assuming that a weather phenomenon has a plan for you than it is to misfire by assuming that a pack of hungry critters is a random process that is uninterested in you.

But as much sense as our tendency to see intentionality everywhere might make, it’s not obviously related to the question of what LLMs are and are not doing, and how we should and should not think of this technology. And by similar logic, your ELIZA story is old and no longer relevant, gramps. You can quietly retire it.

Jon Stokes

Jon M. Stokes is co-founder of Ars Technica. He has written extensively on microprocessor architecture and the technical aspects of personal computing for a variety of publications. Stokes holds a degree in computer engineering from Louisiana State University and two advanced degrees in the humanities from Harvard University.
https://x.com/jonst0kes?s=20