Vision in Art and Neuroscience at MIT
Sarah Schwettmann (PhD candidate, Department of Brain and Cognitive Sciences, MIT), guest contributor
From limited and noisy sensory data with infinite potential interpretations, the brain builds a rich world of experience and expectation. Creating visual art throws that world of experience back to the outside, and in it we find reflected some mechanisms of the constructive process of vision, giving clues to its underlying framework. That framework is the focus of Vision in Art and Neuroscience, a new MIT course that I developed and co-teach with Seth Riskin, SM ’89, who manages the MIT Museum Studio and Compton Gallery, and Pawan Sinha, Professor of Vision and Computational Neuroscience in the Department of Brain and Cognitive Sciences.
While designing the course, Pawan, Seth, and I found that each of us, through our own research and practice, was addressing a similar set of questions: the same questions that would motivate the class. In parallel to computational vision research, Pawan leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? In the MIT Museum Studio, Seth works with articulated light to sculpt a structured world from darkness. I also live on this interface where the brain meets the world – my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. The course is an experiment in synthesis.
Discussions around the intersection of art and neuroscience often consider what each field has to offer the other. We take a different approach, one I refer to as “occupying the gap,” or positioning ourselves between the two fields and asking what fundamental questions underlie them both. One such question addresses the nature of the human relationship to the world. The course suggests one answer: this relationship is deeply creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.
Neuroscience and art each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a richer understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer specific types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively evolve it forward.
The course material and structure are informed by cutting-edge research in computational vision and neuroscience at MIT and beyond. Broadly, perception research seeks to discover how the brain finds more meaning in incoming data than is explained by the signal alone. The work being done at MIT around generative models, for instance in the labs of Josh Tenenbaum and Josh McDermott, addresses this. Researchers present an ambiguous stimulus and, by probing someone’s perceptual interpretation, get a handle on the structures that the mind generates to interpret incoming data; they can then begin to build computational models of the process. In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver's experience of that structure-generating process – perceiving perception itself. So we face the pedagogical question: what exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting-edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: how does one create experiential domains where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself.
Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, in which the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling, over and over again, to fit unstructured input to models of the world – an interpretation process that often goes unnoticed when input structure is expected by visual processing architecture. The progression of the course follows, in spirit, the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs: edges, then depth, color, and recognizable form.
Our students first encounter those concepts in the seminar component of the course, at the beginning of each week. Then later in the week, we translate findings as well as experimental approaches into the studio, with instruction from Seth. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, first in small groups, then individually, culminating in final projects for exhibition.
We’ve put together two such exhibitions so far, in the MIT Museum Studio’s Compton Gallery. The first was "Perceiving Perception," a collection of 14 individual installations that suspended the viewer in the moment of visual creation, allowing them to experience the constructive nature of their own perception. You can find the artworks and accompanying descriptions in the online catalog for the show. Our second exhibition, "Dessert of the Real," revisits the theme of constructive perception, and suggests that the experience we’re conscious of mentally synthesizing is deliciously immediate, more real than representation. The show is on display this spring in the Compton Gallery.
The spirit of Vision in Art and Neuroscience has resonated across disciplines on campus – in addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (Art, Culture, and Technology). The course is growing into something larger: a community of practice interested in applying the scientific methodology we develop to study the world, probing experience and articulating models for its generation and replication. Students from the course have remained members of the Studio community, and have traveled with us for offsite installations, symposia, and collaborations with museums including the Metropolitan Museum of Art and the Peabody Essex Museum. We’re partners with the Zero Group in Germany, and ARTMATR in New York. With ARTMATR, we brought a 6-axis robotic arm into the Studio, and ran a short course this winter where students collaborated with the robot to explore the intersection of articulated motion and perception. We’re eager to open this conversation to a broad network of collaborators, both human and machine.
It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.