Dr. Rudd Canaday is Entefy’s Software Architecture Fellow. He is a co-inventor of the UNIX operating system and was a graduate student at MIT’s pioneering Computer Science & Artificial Intelligence Lab. Rudd shares some of his early AI experiences below.
After graduating cum laude in Physics from Harvard University, I went on to MIT for my graduate work. I started there in 1959, spending the first year and a half completing coursework for my Ph.D. qualifying exams. With that milestone behind me it was time to choose my Master’s thesis topic. It was then that I decided to focus my thesis on the area of artificial intelligence. I didn’t know then that the decision would place me right in the thick of historic advancements in computer science.
In 1961, artificial intelligence was just coming into its own, and MIT was the leader. Two men, Marvin Minsky and John McCarthy, both born in 1927, founded the AI research group that grew into the MIT Computer Science and Artificial Intelligence Laboratory in the same year I entered MIT. These men were pioneers in the field and today are acknowledged as two of the founding fathers of artificial intelligence.
Minsky was a scientist, inventor, and author who co-wrote (with Seymour Papert) the book Perceptrons, a foundational work in the analysis of artificial neural networks. He won the Turing Award in 1969 and was inducted into the IEEE Intelligent Systems’ AI Hall of Fame in 2011. Minsky remained at MIT until his death in 2016.
McCarthy coined the term “artificial intelligence” in 1955 and organized the groundbreaking Dartmouth Conference in 1956 that launched AI as a field. In 1958 McCarthy invented LISP, the programming language that soon became the go-to language for AI applications. McCarthy left MIT for Stanford, where he founded Stanford’s Artificial Intelligence Laboratory in 1963. He remained at Stanford until his death in 2011.
Back in the day, MIT’s CS & AI Lab was a wild place. Many of the AI graduate students had their desks in the same room, and it seemed to me noisy and chaotic. Darts were always being thrown (at a dartboard) as were wadded up pieces of paper (at each other). All of this amidst lively discussions and arguments about machine intelligence. I don’t know how anyone got anything done in that atmosphere, but many of the earliest groundbreaking advances in AI happened there.
In 1950, the British mathematician and computer scientist Alan Turing had proposed a test, now called the “Turing test,” to determine whether a machine was intelligent. In the Turing test, you sit at a teletypewriter and converse with a person—or a machine—out of sight at another teletypewriter. If a machine can fool you into thinking it is a person, then the machine is intelligent. Unfortunately, Turing introduced the idea by way of a party game in which you try to determine whether the unseen person is a man or a woman; bringing gender into the explanation complicated it and, for a while, obscured the simplicity of his test.
The common belief in the CS & AI Lab at that time was that we would achieve true machine intelligence, a machine that could pass the Turing test, probably within five years, certainly within ten years. Many others believed it also.
There were early signs that we were on the right path. At MIT from 1964 to 1966, Joseph Weizenbaum wrote a program called ELIZA to analyze natural English sentences. One of the scripts Weizenbaum wrote for ELIZA, named DOCTOR, simulated a Rogerian psychotherapist, who typically works with patients through a series of questions. ELIZA was not at all intelligent, as Weizenbaum was focusing only on analyzing English sentences. However, many people, including many psychotherapists, focused more on ELIZA’s potential than on its very limited capabilities. They thought that machines could one day revolutionize the field of psychotherapy.
For AI history enthusiasts, ELIZA can still be found on the Internet. If it does not understand your input, it typically replies with something like “Tell me more about your father.” Given the advances in cognitive AI since then, it’s hard to believe that anyone could have considered it intelligent back then.
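Weizenbaum’s original ELIZA was not written in Python, and its actual script was far richer than this, but the essence of the DOCTOR approach can be conveyed in a few lines: match the input against keyword patterns, “reflect” first-person words into second-person ones, and fall back to a stock prompt when nothing matches. The rules, reflections, and fallback line below are illustrative assumptions, not Weizenbaum’s script.

```python
import re

# Swap first-person words for second-person ones, so that a captured
# fragment like "my job" can be echoed back as "your job".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny illustrative rule set: each rule is a regex with one capture
# group plus a response template that reuses the reflected capture.
RULES = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Stock reply when no rule matches, echoing the famous fallback.
DEFAULT_REPLY = "Tell me more about your father."

def reflect(fragment: str) -> str:
    """Rewrite a fragment from the speaker's perspective to ELIZA's."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return the first matching rule's response, or the stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT_REPLY
```

For example, `respond("I feel sad about my job")` yields “Why do you feel sad about your job?”, while an input no rule covers falls through to the stock prompt. The trick, as Weizenbaum himself stressed, is that no understanding is involved at all; it is pure pattern substitution.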
I’ve often thought about why machine intelligence, which we have yet to achieve, is so much harder than we thought 50 years ago. I think that a central issue is worldview, which seems to drive so much of human communication and understanding. Since we share a vast amount of common information with other people, we communicate in shorthand, taking for granted all of that commonality. That can explain why communicating across cultures is sometimes difficult.
Today, developments in artificial intelligence are happening very fast. IBM’s Watson, the AI system that beat two human champions at Jeopardy! in 2011, was interesting in large part because of its grasp of colloquial English. Jeopardy! clues are rich in puns, red herrings, and wordplay.
With advanced resources (from accelerated computation to efficient architectures to big datasets) now available to many AI systems in development, quality machine intelligence—that once upon a time at MIT we were sure was just 5 or 10 years away—may finally be on the horizon.