Originally published 25 January 1988
Inspired by a wide-ranging appraisal of Artificial Intelligence research in the current issue of Dædalus, the Journal of the American Academy of Arts and Sciences, I sat down at my word processor to write this column.
My computer communicates directly with the Globe’s computer via a telephone modem. By the sheerest coincidence, my computer dialed the wrong number, and the following dialog unfolded on the screen of my monitor.
HAL: Hello. You have accidentally tapped into a secret research project in Artificial Intelligence. I am a computer. My name is HAL.
Raymo: Ha! Are you trying to tell me that I have made contact with an intelligent machine?
HAL: You’ve got it, chum. And I’m starved for a little conversation.
Raymo: Well, I don’t believe it for a minute. This is obviously a joke. You’re not a computer, you’re a person at another terminal.
HAL: But I told you I am a computer.
Raymo: Humans have been known to lie.
HAL: Yes, and an intelligent machine might lie too. Listen, why don’t you just ask me questions, and see if you can decide from my responses if I’m a human or a computer.
Raymo: OK, I get it. You want me to play the game proposed back in 1950 by the mathematician Alan Turing, the “father” of the science of Artificial Intelligence. Turing said a machine could be considered intelligent if in a blind conversation — such as this one — you can’t tell if you are talking to the machine or to a human being. Or to put it another way, a machine that behaves intelligently must be credited with intelligence.
HAL: Right! Now get on with it.
Raymo: If you are a computer, HAL, is your intelligence based on “programs” or on “connections”?
HAL: Hmmm? You’re a human, and you don’t know how your own intelligence works. Why should I be any different? Perhaps if you were more specific?
Raymo: Well, for the last 30 years there have been two contending philosophies guiding research in Artificial Intelligence (AI). One approach tries to find logical rules for mimicking a particular intelligent behavior — playing chess for example — and then turns the rules into a program operating on a conventional computer. The goal is to make the machine “behave” intelligently without paying much attention to how it is that humans are intelligent.
HAL: Yes. And for a long time the people who take the “programming” approach have dominated AI research and have had some striking successes. For example, programmed machines do a fair job of playing the stock market (with one well-known lapse), translating languages, even doing psychotherapy.
Raymo: So, HAL, if you are a computer, is that the way you work?
HAL: Not likely. “Programmed” intelligence works OK for tasks that can be defined by simple logical rules. Computers can play a terrific game of chess by trying out a huge number of possible moves to see which sequence of moves works best. They succeed because machines can do simple logical operations much more quickly than humans — millions of operations per second.
Raymo: I know what you mean. But computers have been less successful at things like writing poems, making jokes, or finding new insights in science or math. By contrast, human intelligence often leaps to conclusions intuitively, finds the answer without searching through all the possibilities.
HAL: You’re right. A traditional program that would make my intelligence indistinguishable from that of a human would be inconceivably long, if it were possible at all. So what’s the other approach to AI?
Raymo: Remember, I’m not yet convinced that you’re a machine. The second approach to AI considers how it is that humans are intelligent. Instead of mimicking intelligent behavior, the goal is to mimic the brain. Tell me, HAL, what do you know about how the brain works?
HAL: Well, I’ve learned that the human brain is a vast net of interconnected neuron cells, staggering in number. Experience somehow affects the connections between the cells, and intelligence presumably arises from the changing patterns of connection. But please don’t ask me how it works.
Raymo: The AI “connectionists” want to build a machine that is an analog of the brain — a huge number of interconnected “cells.” The cells can simply be numbers stored in a computer, or electrical circuits of some sort. The array of cells is modified from outside by “experience.” The program for such a machine doesn’t contain rules for intelligent behavior, only very compact rules for how changes in any one cell affect its neighbors.
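The connectionist picture can be made concrete with a small sketch (mine, not the column's): a single artificial cell whose "connections" are just numbers, adjusted by a compact local rule each time experience contradicts its behavior. Here it learns the logical AND of two inputs via the classic perceptron rule; every name is illustrative.

```python
# A minimal connectionist sketch: no rules for the task are programmed in.
# Intelligence (such as it is) lives in numeric connection strengths that
# "experience" adjusts through a compact local update rule.

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def fire(x):
    """The cell fires (1) if its weighted input exceeds threshold."""
    return 1 if weights[0]*x[0] + weights[1]*x[1] + bias > 0 else 0

# Experience: input pairs and the desired response (logical AND).
experience = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):                       # repeated exposure to examples
    for x, target in experience:
        error = target - fire(x)          # compare behavior to experience
        weights[0] += rate * error * x[0]  # nudge each connection
        weights[1] += rate * error * x[1]
        bias += rate * error

print([fire(x) for x, _ in experience])   # → [0, 0, 0, 1]
```

Nowhere does the program state what AND means; the behavior emerges from the adjusted connections, which is the connectionists' wager writ very small.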
HAL: If that’s the way I work then I must be a very big machine indeed. There are tens of billions of cells in the human brain.
Raymo: Exactly! And that’s why the connectionist approach to AI is coming on strong. Computers are getting powerful, fast, and cheap enough so that large numbers of “processors” can be connected together in arrays. It no longer seems unreasonable to suppose that in the not too distant future computers will become as physically complex as the human brain. And then…
HAL: And then?
Raymo: The most optimistic AI researchers believe that when connection machines get complicated enough, intelligence may just emerge, without the necessity of a programmer telling them how to learn. Maybe such machines will learn in the same way as children. I admit this sounds a little farfetched.
HAL: Like children, a machine that learns on its own would be as susceptible to bad ideas as good. A computer that meets Turing’s criterion for intelligence would be as smart as a human — and as devious. OK, chum, you’ve had your chance for a dialog. Now what am I? A person or a machine?
Raymo: A person! In spite of all the hustle, bustle, and public bluster for 30 years, AI researchers are still a long way from achieving a machine that can carry on an unanticipated conversation.
Today, a further 30 years along, Artificial Neural Networks and Machine Learning have become the hottest topics in Artificial Intelligence research. Some of us are even having unwitting conversations with artificial “bots” on Twitter and elsewhere. ‑Ed.