The brain is an organ that generates actions based on an organism’s goals, past experiences, and current sensory information. How does this evolutionarily designed machine work? What algorithms does it use to produce successful behavior? These questions are difficult because each of these elements (goals, experiences, and sensory stimuli) comes in many different varieties. One simplifying approach is to reformulate behavior as a series of choices between possible options.
In mammals, and especially in primates, one particular choice is made all the time, at a rate of about 16,000 choices per hour: the choice of where to look next. Our eyes explore the visual world mostly by making rapid, stereotyped movements, called saccades, from one fixation location to another. This is because our visual system analyzes the information around the fixation point with much higher resolution than information in the periphery. Although we typically generate saccades without much thought, each one is preceded by a brief yet complex competition in which various potentially informative locations in space are considered until the next target to look at is selected. This competition takes place in areas of the brain dedicated to oculomotor control.
The goal of our research is to understand how saccadic choices are made, and because this process varies according to the visual information that is available at each point in space as well as the individual’s current goals and past experiences, we think this is likely to provide valuable mechanistic intuition about how brains operate in general. In our lab, we use novel choice tasks, electrophysiological recordings from oculomotor circuits, and computer models and simulations to determine how the activity of single neurons relates to the visually guided choices made by a participant.
For example, in a typical task, two gray circles appear on a computer monitor; one turns green and the other red, and the participant is instructed to look at the red one, so the red circle is the target and the green one the distracter. Across trials, the location of the target varies unpredictably, and we measure the outcome (correct or incorrect) and how rapid the response is (i.e., the reaction time). Although similar tasks have been and are used by many labs, what makes our approach unique is that our tasks are urgent: participants must respond very quickly, often before knowing which circle is the target and which the distracter. This mimics what often happens in real life: relevant visual information arrives spontaneously (say, a stoplight turns yellow) while the oculomotor system is busy planning the next saccade (say, to a car in an adjacent lane). This dynamic allows us to determine exactly when the relevant visual event (the circles turning red and green) informs the participant’s choice, and whether the eye movement was driven by the initial, ongoing plan or by the new visual information.
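To make the logic concrete, here is a minimal sketch of how a single urgent trial might be scored. This is hypothetical code, not our task software, and all names and numbers are illustrative; the one assumption it encodes is that the instruction to respond can precede the color cue by a variable delay, so the informative quantity is the reaction time minus that delay, i.e., how long the color information was actually available before the eyes moved.

```python
# Hypothetical sketch of scoring one urgent trial; names and values are illustrative.

def score_trial(rt_ms, cue_delay_ms, chosen, target):
    """Score a single urgent trial.

    rt_ms        -- reaction time, measured from the signal to respond
    cue_delay_ms -- delay between that signal and the color cue
    chosen       -- location of the saccade ('left' or 'right')
    target       -- location of the red (target) circle
    """
    # Raw processing time: how long the color information was available
    # before the saccade was triggered. A small or negative value means the
    # movement must have been driven by the ongoing plan, not by the cue.
    rpt_ms = rt_ms - cue_delay_ms
    return {"correct": chosen == target, "rpt_ms": rpt_ms}

print(score_trial(rt_ms=250, cue_delay_ms=150, chosen="left", target="left"))
# -> {'correct': True, 'rpt_ms': 100}
```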
In this way, depending on when and how a particular neuron responds during the task, we can circumscribe its potential contribution to the observed behavior. For instance, some neurons may respond to the colored circles early enough to inform the target-selection process, whereas others may respond too late to causally influence the choice. Finally, by simulating the activity of populations of neurons, we can test whether their hypothesized contributions explain the participant's behavior in detail, and we can make testable predictions about how those neurons interact.
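The population-level idea can also be sketched in a few lines. Below is a deliberately simplified race-to-threshold simulation, an illustration rather than our published model, with all parameters made up for the example: two motor plans build up toward a threshold, and when the cue information arrives, the plan toward the target accelerates while the plan toward the distracter decelerates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(cue_delay_ms, dt=1.0):
    """One trial of a minimal race-to-threshold model; all numbers are made up."""
    threshold = 1000.0                    # arbitrary activity units
    x = np.zeros(2)                       # motor plans: [target, distracter]
    rates = rng.normal(3.0, 1.0, size=2)  # initial, uninformed build-up rates
    cue_seen = False
    t = 0.0
    while x.max() < threshold:
        if not cue_seen and t >= cue_delay_ms:
            rates[0] += 10.0              # plan toward the target accelerates
            rates[1] -= 10.0              # plan toward the distracter decelerates
            cue_seen = True
        x = np.maximum(x + rates * dt, 0.0)
        t += dt
    return t + 30.0, bool(np.argmax(x) == 0)  # RT (with efferent delay), correct?

delays = rng.uniform(50.0, 250.0, size=2000)  # cue arrives at a variable time
rts, correct = zip(*(simulate_trial(d) for d in delays))
print(f"mean RT = {np.mean(rts):.0f} ms, accuracy = {np.mean(correct):.2f}")
```

In a model like this, trials in which the threshold is reached before the cue takes effect are guesses (correct only half the time), whereas later responses are informed; this is the kind of detailed, testable behavioral pattern that such simulations produce.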
Current Projects
Oculomotor areas contain three classically defined neuronal types:
- Visual
- Visuomotor
- Motor
Although their properties are well understood in simple oculomotor tasks, their roles in saccadic choices are less clear. For instance, we recently discovered that the responses of visual neurons depend critically on the saliency of the visual stimuli: when the choice is between potential targets that are equally salient, visual neurons do not contribute to the target-selection process at all. These cells can only help select objects that stand out. One aim of our research is to further characterize the functional properties of the different oculomotor cell types, as well as of the different oculomotor areas.
Spatial attention refers to the capacity to preferentially process information from a specific region of space while filtering out information from other regions. Attention can be deployed either overtly, by moving the eyes, or covertly, while keeping the eyes fixed (e.g., when you are looking straight ahead and suddenly notice something happening in the periphery).
Although it is well established that spatial attention is controlled by the same oculomotor circuits that generate saccades, how it is implemented remains uncertain. For instance, attention can be directed either voluntarily (at will) or involuntarily (reflexively), but what is the neuronal basis of this distinction? Our current hypothesis is that, regardless of how attention is directed, the degree to which it is focused on a particular location in space corresponds directly to the amount of motor activity favoring a saccade to that point. An important aim of our research is to test and refine this hypothesis.
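For concreteness, one simple way to formalize this hypothesis (an illustration only, not a fitted model) is to read out attention at each location from the relative strength of the ongoing motor plans:

```python
import numpy as np

# Illustrative formalization: attention at each location is the normalized
# strength of the motor plan favoring a saccade to that location.
motor_activity = np.array([12.0, 3.0, 5.0])   # hypothetical plan strengths
attention = motor_activity / motor_activity.sum()
print(attention)  # -> [0.6 0.15 0.25]; the strongest plan gets the most attention
```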
Our behavioral techniques allow us to accurately measure the time course of fundamental perceptual processes. Against a dark background, detecting that a light has turned on takes about 100 ms, finding a ripe tomato among many lemons takes about 130 ms, and distinguishing a coin from a Scrabble™ tile takes about 160 ms. This is very fast, considering that a significant portion of these times (at least 70 ms) is consumed simply by transmission delays, that is, the time it takes for the visual information to travel from our eyes to the areas of the brain that actually analyze it. (No wonder that, under natural viewing conditions, most eye movements are preceded by a deliberation period of just 200–250 ms.)
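A quick back-of-the-envelope calculation makes the point: subtracting the (at least) 70 ms of transmission delays from the measured times bounds how little time is left for the analysis itself.

```python
# Upper bounds on analysis time, given at least 70 ms of transmission delays.
transmission_ms = 70
for task, total_ms in [("detect light onset", 100),
                       ("tomato among lemons", 130),
                       ("coin vs. Scrabble tile", 160)]:
    print(f"{task}: at most {total_ms - transmission_ms} ms of actual analysis")
```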
Importantly, these numbers depend not only on the content of the visual scene, current goals and past experiences, but also on the participant. For example, the time required to look away from a bright light may vary by 30 ms or more from one individual to another. Because such idiosyncratic differences are highly reliable, we are exploring whether they can serve as measures of mental capacity more generally. Such "cognitive fingerprints" could work as sensitive diagnostic markers of mental dysfunction for conditions such as ADHD, in which attention and perceptual processing are significantly compromised.
In natural environments, vision and audition are often engaged together, such as when we try to identify a bird. Similarly, a cucumber may be distinguished from a tomato on the basis of both shape and color. Stimuli that engage multiple sensory modalities (multisensory integration) or multiple features within a given modality (multifeature integration) are typically analyzed with much higher sensitivity, because they combine independent pieces of information.
Another line of research in our laboratory uses the timing of urgent, perceptually guided saccadic choices to quantify the corresponding advantage in sensitivity relative to that of single-modality or single-feature discriminations. In this way, we can study and characterize the underlying multisensory/multifeature integration mechanisms.
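For reference, a standard signal-detection benchmark (stated here as an assumption for illustration, not necessarily the model we fit) predicts that two statistically independent cues with sensitivities d1 and d2 combine optimally into a joint sensitivity of sqrt(d1² + d2²):

```python
import numpy as np

# Optimal combination of two independent cues (standard signal-detection
# benchmark): the predicted joint sensitivity is sqrt(d1**2 + d2**2).
def predicted_combined_dprime(d1, d2):
    return np.hypot(d1, d2)

# e.g., shape alone d' = 1.0 and color alone d' = 1.5 would predict:
print(predicted_combined_dprime(1.0, 1.5))  # -> ~1.80, better than either cue alone
```

Comparing measured sensitivities against such a benchmark is one way to tell whether the underlying integration mechanism is close to optimal.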