
In Saturday’s Q&A at EV.com, one of the questions dealt with so-called artificial intelligence. I thought AI was nonsense when I first heard the term forty or more years ago, and I still think it’s nonsense.
Anyway, the question was answered by (V), the pseudonym of Andrew S. Tanenbaum, a U.S. citizen living in the Netherlands and a professor of computer science at the Vrije Universiteit. I liked his answer, so I’m reprinting it in full here.
As usual, many of the other answered questions are equally interesting; go have a look.
M.R. in New Brighton, MN, asks: What thoughts do you have about how the term “Artificial Intelligence” is used by the non-technical public? Do you find instances when it is misused? In those instances, what terminology would you use instead?
(V) answers: In 1961, a Ph.D. student at MIT, Jim Slagle (who was blind), wrote a program to do symbolic integrals of the type asked on the MIT freshman calculus exam. Integrals come in categories like polynomials, trig functions, square roots, etc., each with methods for solving them. Slagle’s program figured out the type, looked up the rules, and applied them. It passed the exam. Does it take intelligence to pass the MIT freshman calculus exam? MIT thinks so, but the program follows straightforward rules.
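To make the idea concrete, here is a minimal sketch of that classify-then-apply-the-rule approach, assuming a toy tagged-tuple representation of expressions; this is illustrative code, not Slagle’s actual program:

```python
# Toy rule-based integrator (illustrative; not Slagle's program).
# Expressions are tagged tuples, e.g. ("pow", 2) for x^2, ("sin",) for sin x.

def integrate(expr):
    kind = expr[0]
    if kind == "pow":                        # rule: ∫ x^n dx
        n = expr[1]
        return "ln|x|" if n == -1 else f"x^{n + 1}/{n + 1}"
    if kind == "sin":                        # rule: ∫ sin x dx = -cos x
        return "-cos x"
    if kind == "cos":                        # rule: ∫ cos x dx = sin x
        return "sin x"
    if kind == "exp":                        # rule: ∫ e^x dx = e^x
        return "e^x"
    if kind == "sum":                        # linearity: integrate term by term
        return " + ".join(integrate(term) for term in expr[1:])
    raise ValueError(f"no rule for {kind!r}")

print(integrate(("sum", ("pow", 2), ("cos",))))   # x^3/3 + sin x
```

Each branch is just a looked-up rule; nothing in the code “understands” calculus.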
In 1966, Joe Weizenbaum wrote a program called ELIZA that pretended to be a psychiatrist. I was intrigued by this program and wrote a better version that worked with templates. If the user typed “I am sad,” that matched the template “I am [X]” and the program (randomly) responded: “How long have you been sad?”, “Do you like being sad?”, “Do you have friends who are sad?”, “Would you like me to help you stop being sad?”, etc. If you typed “I am president of the moon,” it might reply “Does your mother know you are president of the moon?” The program had no idea what it was saying. It just matched templates and gave canned responses that turned the input into a question. I wrote dozens and dozens of templates and hundreds of responses. The pattern-matching engine was simple, but the program could carry on a long conversation. People who tried it thought it was intelligent. Was this AI?
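A toy reconstruction of the template trick, using made-up patterns and responses (the original program isn’t reproduced here), might look like this:

```python
import random
import re

# Toy ELIZA-style responder: match the input against a template,
# capture the [X] part, and splice it into a canned question.
TEMPLATES = [
    (re.compile(r"i am (.+)", re.IGNORECASE), [
        "How long have you been {0}?",
        "Do you like being {0}?",
        "Do you have friends who are {0}?",
        "Does your mother know you are {0}?",
    ]),
    (re.compile(r"i feel (.+)", re.IGNORECASE), [
        "Why do you feel {0}?",
        "Do you often feel {0}?",
    ]),
]

def respond(line):
    for pattern, responses in TEMPLATES:
        match = pattern.match(line.strip().rstrip("."))
        if match:
            return random.choice(responses).format(match.group(1))
    return "Tell me more."   # fallback when no template matches

print(respond("I am president of the moon"))
```

The program never parses meaning; it only rearranges the user’s own words into a question.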
In the late 1960s, a word game called Jotto was popular. Each of two players picks a secret five-letter English word. Then player 1 asks player 2: “How many letters in your secret word match [SOME FIVE-LETTER TEST WORD]?” If the test word is, for example, “horse,” and the number of matches (jots) is zero, player 1 now knows the secret word does not contain any of “h,” “o,” “r,” “s,” or “e.” The players take turns until one of them guesses the other’s secret word. I wrote a program to play Jotto. It beat almost everyone almost all the time. Does beating people at a word game require intelligence? People thought so. But the program had a simple algorithm. Later, I even published a journal paper about the program and how it worked. Apparently, something that seems “intelligent” loses its “intelligence” as soon as you explain how it works.
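The answer doesn’t say what the simple algorithm was, but one plausible version is pure elimination: keep a word list and discard every candidate inconsistent with the jot counts observed so far. A sketch:

```python
# One plausible "simple algorithm" for Jotto (a guess; the actual
# program's method isn't described above): filter candidate words by
# consistency with each observed jot count.

def jots(word, test):
    """Count the letters the two words share, respecting multiplicity."""
    count, remaining = 0, list(test)
    for letter in word:
        if letter in remaining:
            remaining.remove(letter)
            count += 1
    return count

def prune(candidates, test_word, observed_jots):
    """Keep only candidates consistent with the observed jot count."""
    return [w for w in candidates if jots(w, test_word) == observed_jots]

words = ["horse", "mound", "pluck", "blunt"]
# Opponent reports that "horse" matched 0 letters of the secret word:
print(prune(words, "horse", 0))   # ['pluck', 'blunt']
```

With a large dictionary, each answer eliminates most of the remaining candidates, which is how such a program can beat people without any “understanding” of words.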
So whether a program is “intelligent” apparently depends not on what the program has achieved, but on whether the person using it understands how it works. Modern large language models like ChatGPT use neural networks with billions of parameters. In short, they try to guess the best next word in a sequence. To their designers, they are clearly not intelligent, but to users, they appear to be, just like the three examples from the 1960s above. AI is probably as good a description as any for programs that seem “intelligent,” but at least thus far, all of them are just following some (complex) algorithm.
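To illustrate how un-mysterious “guess the next word” can be, here is a drastically scaled-down stand-in: a bigram model built from raw counts rather than billions of learned parameters. A real LLM works very differently inside, but the outer loop, predicting one word at a time, is the same:

```python
from collections import Counter, defaultdict

# Tiny bigram "language model": counted word pairs, not a neural network.
corpus = "the cat sat on the mat the cat ate the rat".split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def generate(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]  # greedily pick likeliest
        out.append(word)
    return " ".join(out)

print(generate("the"))   # "the cat sat on the cat"
```

Replace the counting with a trained neural network and scale it up enormously, and the output starts to look intelligent, while remaining, as (V) says, an algorithm.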