Marvin Minsky and Artificial Neural Networks

Marvin Minsky at OLPC (1927-2016)

On August 9, 1927, American mathematician and computer scientist Marvin Minsky, co-founder of the MIT Artificial Intelligence Project, was born. He is also known for his foundational work in the analysis of artificial neural networks.

“Once the computers got control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”
– Marvin Minsky, Life Magazine (20 November 1970), p. 68

Neural Networks and Marvin Minsky

Actually, I did my diploma thesis in computer science on multi-layer neural networks, a special variant of artificial neural networks. Therefore, Marvin Minsky definitely deserves an entry in this blog, simply because he is, among other achievements, one of the fathers of artificial neural networks. Marvin Lee Minsky was born in New York City into a Jewish family in 1927. His father was an eye surgeon, and Marvin attended The Fieldston School and the Bronx High School of Science. He later attended Phillips Academy in Andover, Massachusetts, and served in the US Navy from 1944 to 1945. He received a BA in mathematics from Harvard (1950) and a PhD in mathematics from Princeton (1954). He taught at Harvard before moving to the Massachusetts Institute of Technology in 1957 as professor of mathematics, a post he occupied until 1962, when he became professor of electrical engineering.

The Dartmouth Conference

In the summer of 1956 Minsky attended a conference on Artificial Intelligence (AI) at Dartmouth College in Hanover, New Hampshire. There, it was generally agreed that powerful modern computers would soon be able to simulate all aspects of human learning and intelligence. Much of Minsky’s later career was spent testing this claim. In 1959 Minsky and John McCarthy founded what is now known as the MIT Computer Science and Artificial Intelligence Laboratory.[3] Under Minsky’s direction a number of AI programs were developed at MIT. One of the earliest, a program to solve problems in calculus, showed that most such problems could be solved by careful application of about 100 rules; the computer actually received a grade A in an MIT calculus exam. Other programs explored topics such as reasoning by analogy, handling information expressed in English, and how to catch a bouncing ball with a robotic arm [2].

“What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”
– Marvin Minsky, The Society of Mind (1986)

A First Neural Network

As a student, already in 1951, Minsky built the first randomly wired neural network learning machine, SNARC. SNARC is a randomly connected network of Hebb synapses built in hardware using vacuum tubes, and was possibly the first artificial self-learning machine. In general, artificial neural networks are computational models inspired by an animal’s central nervous system (i.e. the brain) that are capable of machine learning as well as pattern recognition. They are generally presented as systems of interconnected “neurons” which compute values from inputs. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve with ordinary rule-based programming, including computer vision and speech recognition.
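To give an idea of what such an artificial “neuron” computes, here is a minimal software sketch (SNARC itself was analog vacuum-tube hardware, so this is only an illustrative analogy): a single threshold unit trained with the classic perceptron rule on the linearly separable logical AND function.

```python
# A single artificial neuron: weighted sum of inputs plus bias,
# passed through a hard threshold. Trained here with the perceptron
# learning rule on logical AND -- an illustrative toy, not SNARC.
def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

w, b = [0.0, 0.0], 0.0
for _ in range(10):                  # a few passes over the data suffice
    for x, target in samples:
        error = target - predict(w, b, x)
        w[0] += error * x[0]         # perceptron rule: nudge weights
        w[1] += error * x[1]         # in the direction of the error
        b += error

print([predict(w, b, x) for x, _ in samples])  # learns AND: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a correct set of weights; as discussed further below, the same is not true for every logic function.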

Head-Mounted Displays and Logo

Minsky’s inventions include the first head-mounted graphical display (1963) and the confocal microscope (1957), a predecessor to today’s widely used confocal laser scanning microscope. Together with Seymour Papert he developed the first Logo “turtle”. Logo is an educational programming language, and a “turtle” is a simple robot used in computer science and mechanical engineering training.[4] Turtles specifically designed for use with Logo systems often come with pen mechanisms that allow the programmer to create a design on a large sheet of paper.
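The turtle idea is simple enough to sketch in a few lines of Python (an illustrative toy, not actual Logo): the turtle keeps only a position and a heading, and commands like FORWARD and RIGHT move it around. Here it traces the classic beginner’s square, Logo’s REPEAT 4 [FORWARD 100 RIGHT 90].

```python
import math

# A toy turtle in the spirit of Logo's FORWARD/RIGHT commands.
# It only tracks position and heading; a real Logo turtle would
# also drive a pen or draw on screen.
class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0            # degrees; 0 = facing east

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

# REPEAT 4 [FORWARD 100 RIGHT 90] -- the turtle walks a square
# and ends up exactly where it started.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
```

The charm of the turtle is exactly this body-centered geometry: the child reasons “go forward, turn right” from the turtle’s point of view rather than in absolute coordinates.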

“Computer languages of the future will be more concerned with goals and less with procedures specified by the programmer.”
– Marvin Minsky, Turing Award Lecture “Form and Content in Computer Science” (1969), in Journal of the Association for Computing Machinery 17 (2) (April 1970)

Towards Artificial Intelligence

His seminal 1961 paper, “Steps Toward Artificial Intelligence”, surveyed and analyzed what had been done before, and outlined many major problems that the infant discipline would later need to face. The 1963 paper, “Matter, Mind, and Models”, addressed the problem of making self-aware machines [1]. In 1969 Minsky wrote the book Perceptrons (with Seymour Papert), which became the foundational work in the analysis of artificial neural networks. The book is the center of a long-standing controversy in the study of artificial intelligence: it has been claimed that the pessimistic predictions made by the authors were responsible for a change in the direction of AI research, concentrating efforts on so-called “symbolic” systems and contributing to the so-called AI winter, a period of reduced funding and interest in artificial intelligence research. This turn, supposedly, proved unfortunate in the 1980s, when new discoveries showed that the prognoses in the book were too pessimistic.

The Perceptron Problem

Actually, the main problem of perceptrons, i.e. single-layer neural networks, is that they can only learn or classify linearly separable problems. Even a simple logic function like the exclusive or (XOR) is not solvable with a perceptron and requires a multilayer network. With the advent of multilayer neural networks and the introduction of the backpropagation learning algorithm for them in the 1980s, machine learning with neural networks became a popular subject again in artificial intelligence, and thereby also ended up as the toolset for my diploma thesis on stock market prediction. Since I did not become insanely rich, the educated reader might have already realized that stock market prediction based on neural networks does not really work well.
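Why a second layer fixes the XOR problem can be seen with hand-picked weights (an illustrative sketch, not a learned solution): a hidden OR unit and a hidden NAND unit, combined by an AND unit, compute exactly XOR, because XOR is true precisely when the inputs satisfy OR but not AND.

```python
# XOR is not linearly separable, so no single threshold unit can
# compute it. Two layers suffice: hidden units for OR and NAND,
# then an AND unit on top. Weights are hand-crafted for illustration.
def step(x):
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # hidden unit 1: OR
    h2 = step(-x1 - x2 + 1.5)       # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)      # output unit: AND(h1, h2) = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 1 1 0 on the four inputs
```

In a real multilayer network these weights would be found by backpropagation with a differentiable activation instead of the hard step, but the structural point is the same: the hidden layer carves the input space into regions that a single linear boundary cannot.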

“For generations, scientists and philosophers have tried to explain ordinary reasoning in terms of logical principles — with virtually no success. I suspect this enterprise failed because it was looking in the wrong direction: common sense works so well not because it is an approximation of logic; logic is only a small part of our great accumulation of different, useful ways to chain things together.”
– Marvin Minsky, The Society of Mind (1986)

Symbolic Knowledge Representation

Besides neural networks, Minsky also originated several other famous AI models. His paper “A Framework for Representing Knowledge” created a new paradigm in programming. While his Perceptrons is now more a historical than a practical book, the theory of frames is still a working concept in knowledge representation. Minsky also wrote on the possibility that extraterrestrial life may think like humans, permitting communication. He was an adviser on Stanley Kubrick’s movie 2001: A Space Odyssey [5] and is referred to in the movie as well as in Arthur C. Clarke’s original book. In the early 1970s at the MIT Artificial Intelligence Lab, Minsky and Seymour Papert started developing what came to be called The Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky said that the biggest source of ideas for the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children’s blocks. In 1986, Minsky published The Society of Mind, a comprehensive book on the theory which, unlike most of his previously published work, was written for a general audience.
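The core idea of frames — bundles of named slots with default values, where a more specific frame inherits and can override the defaults of a more general one — can be sketched in a few lines of Python. This is a toy illustration of the inheritance mechanism, not Minsky’s full formalism.

```python
# A frame is a set of slots plus an optional parent frame.
# Slot lookup falls back to ancestor frames, so defaults are
# inherited unless a more specific frame overrides them.
def frame(parent=None, **slots):
    return {"parent": parent, "slots": slots}

def get_slot(f, name):
    while f is not None:
        if name in f["slots"]:
            return f["slots"][name]
        f = f["parent"]            # climb the inheritance chain
    raise KeyError(name)

bird = frame(can_fly=True, legs=2)
penguin = frame(parent=bird, can_fly=False)   # override one default

print(get_slot(penguin, "can_fly"))   # False: overridden locally
print(get_slot(penguin, "legs"))      # 2: inherited from bird
```

The same default-plus-override pattern later resurfaced in object-oriented programming and in description logics, which is one reason frames remain a working concept in knowledge representation.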

The Turing Award

“We’ll show you that you can build a mind from many little parts, each mindless by itself.”
– Marvin Minsky, The Society of Mind, Prologue (1986)

Minsky won the Turing Award in 1969, the most prestigious award in computer science, and was the Toshiba Professor of Media Arts and Sciences and professor of electrical engineering and computer science at MIT. The science fiction author Isaac Asimov described Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan.

On January 24, 2016, Marvin Minsky died of a cerebral hemorrhage at the age of 88.


Marvin Minsky, The Society of Mind, Introduction, [7]

References and Further Reading:
