Pondering the differences between minds and machines
by Kitta MacPherson
Princeton neuroscientist Asif Ghazanfar is asking students in his freshman seminar to do nothing less than rethink the way the brain and body interact and to deeply consider a new science underlying what he calls “the biology of thought.”
The subject is dense, almost intimidating in its complexity. Known as “the embodiment hypothesis,” it represents a radical shift in thinking about the brain and combines elements of neuroscience, artificial intelligence, psychology and philosophy in its approach.
But what the students were doing under Ghazanfar’s supervision on a golden autumn afternoon looked suspiciously like fun.
Armed with the 577-piece Lego MINDSTORMS kits they had received at the semester’s start, five teams of three students each were showing off the fruits of their labors — boxy and bulbous robotic creations — one by one. These pint-size machines, some called “Teddy” and “Fluffy” and others nameless, spun, cavorted and sidled up to light sources when prompted. The students, clumped in a corner of a classroom in Chancellor Green, collectively “ooh”-ed and “aah”-ed as their programmed creations worked their way over a plastic grid. They coaxed them on when they hesitated. They cheered when they hit their mark.
Ghazanfar, an assistant professor in the Department of Psychology and the Princeton Neuroscience Institute, quietly took it all in, making few remarks. The demonstrations — in which the freshmen had to build and program robots to respond to light — were designed to test the students’ problem-solving abilities. They also were intended to probe the students’ fledgling understanding of new concepts of how the mind adapts to a changing environment.
When they were finished, Ghazanfar shuttled the group to a seminar room deep in the confines of Firestone Library. Here he moved them through a different journey. Over the span of two hours, the students traversed a trail offering intellectual rather than physical challenges. Can machines ever be considered “intelligent” or “alive”? Is the human brain somehow special? How do our minds work, and why do we solve problems so differently from each other? They laughed, gasped, debated and drank it in as each asked a question in turn, based on readings in assigned texts.
“You’re exhausted by the end of the class,” said Robert Klein, a freshman who is considering studying electrical engineering. The robot he designed with his team, which he tested in his dorm room, spoke with a British accent, saying, “Light!” and “Yes, yes, yes!” as it raced toward a lit bulb. Keeping in mind what they were reading about the interplay between the body and the mind, Klein and his team tried to keep their robot simple, modeling it on an insect brain.
In the class, titled “How the Body Shapes the Way the Brain Works,” Ghazanfar is giving the students a new conceptual toolbox for viewing reality. Traditional ideas about how people interact with the world assume that behavior is controlled by the brain and ignore how body states — postures, emotions, fatigue — may affect mental processes. The new hypothesis suggests that behaviors are best explained by constant feedback among the brain, the body and the environment.
“Most of the time we think of our behaviors as emanating from the brain, and that it controls everything,” said Ghazanfar, who studies primate communication and its neurobiology. “But it is so much more complicated than that. The bodies with which we operate in the world influence our brains and vice versa. The environments we operate in provide, in turn, scaffolds for such brain-body interactions.”
He has asked the students to work with robots to show them how much the field of artificial intelligence has changed. In the past, scientists trying to simulate the human brain imagined it as a disembodied computer calculating rational thought. Those systems failed, however, at solving even the simplest of tasks performed by the human brain, such as recognizing faces. Now, scientists working to recreate an electronic version of the human mind are building autonomous robots that can operate in the real, unpredictable world. “They’ve realized you cannot separate the brain from the body to be successful in an environment,” he said.
The readings helped focus the discussion. One student asked: If experience is so central to learning, as some adherents of the new hypothesis suggest, how is it that someone like Albert Einstein could develop theories about events he couldn’t possibly have experienced, such as light moving at high speeds? After a few students chimed in, Ghazanfar suggested that Einstein employed mental scaffolding, imagining himself straddling a beam of light as it hurtled through space. Another asked which of two subjects would be a preferable companion — a brain-damaged human described in one of their texts or a moderately gifted robot. The student who asked the question said she would choose the robot. The class was divided on that one.
Acting more as a mediator than a lecturer, Ghazanfar lets the students discover on their own how the brain works. “Many times these questions are unanswerable or currently under investigation,” said Sean Sketch, a freshman who hopes to study mechanical engineering. “In this case, everyone’s thoughts are purely speculation or opinion, and Professor Ghazanfar doesn’t get in the way of our debate. It is this open class style that fosters our genuine curiosity about the subject.”
The seminar, which is designated as the Shelly and Michael Kassen ’76 Freshman Seminar in the Life Sciences, has exceeded all of Sketch’s expectations. “I’ll never forget this course, simply because I’ve never had anything like it,” he said. “For me, it epitomizes what the learning process should be — self-motivated and, for lack of a better word, fun.”