The brain is an organ that generates actions based on an organism’s goals, past experiences, and current sensory information. How does this evolved machine work? What algorithms does it use to produce successful behavior? These questions are difficult because each of these elements (goals, experiences, and sensory stimuli) comes in many varieties. One simplifying approach is to reformulate behavior as a series of choices between possible options. 

In mammals—especially primates—one particular choice is made all the time, at a rate of about 16,000 choices per hour: the choice of where to look next. Our eyes explore the visual world mostly by making rapid, stereotyped movements, called saccades, from one fixation location to another. This is because our visual system analyzes the information around the fixation point with much higher resolution than that in the periphery. Although we typically generate saccades without much thought, each one is preceded by a brief yet complex competition in which various potentially informative locations in space are considered until the next target is selected. This competition unfolds in brain areas dedicated to oculomotor control. 

The goal of our research is to understand how saccadic choices are made. Because this process depends on the visual information available at each point in space, as well as on the individual’s current goals and past experiences, we think it is likely to provide valuable mechanistic intuition about how brains operate in general. In our lab, we combine novel choice tasks, electrophysiological recordings from oculomotor circuits, and computer models and simulations to determine how the activity of single neurons relates to the visually guided choices made by a participant. 

For example, in a typical task, two gray circles appear on a computer monitor; one turns green and the other red, and the participant is instructed to look at the red one. Thus, the red circle is the target and the green one the distracter. Across trials, the location of the target varies unpredictably, and on each trial we measure the outcome (correct/incorrect) and the reaction time (how quickly the response is made). Although similar tasks have long been used by many labs, what makes our approach unique is that our tasks are urgent: participants must respond very quickly, often before knowing which circle is the target and which the distracter. This mimics what often happens in real life: relevant visual information arrives spontaneously (say, a stoplight turns yellow) while the oculomotor system is busy planning the next saccade (say, to a car in an adjacent lane). This dynamic allows us to determine exactly when the relevant visual event (the circles turning red and green) informs the participant’s choice, and whether the eye movement was driven by the initial, ongoing plan or by the new visual information. 
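To make the logic of the urgent regime concrete, the following Python sketch (with entirely hypothetical timing parameters, not our actual task settings) simulates trials in which responses made before the colors have been processed are correct only by chance, whereas sufficiently slow responses are guided by the color cue:

```python
import random

# Hypothetical parameters; actual task timings vary across experiments.
CUE_DELAY_MS = 100        # time after the "go" signal at which the colors appear
MIN_INFORMED_PT_MS = 120  # processing time the color needs to guide the saccade

def simulate_trial(rng):
    """Simulate one urgent trial; return (reaction time in ms, correct?)."""
    rt = rng.gauss(250, 50)              # reaction time measured from "go"
    processing_time = rt - CUE_DELAY_MS  # how long the colors were visible
    if processing_time >= MIN_INFORMED_PT_MS:
        return rt, True                  # informed choice: color guided the eye
    return rt, rng.random() < 0.5        # guess: 50/50 between the two circles

rng = random.Random(1)
trials = [simulate_trial(rng) for _ in range(10_000)]
guesses  = [ok for rt, ok in trials if rt - CUE_DELAY_MS < MIN_INFORMED_PT_MS]
informed = [ok for rt, ok in trials if rt - CUE_DELAY_MS >= MIN_INFORMED_PT_MS]
print(f"accuracy of fast (guess) trials:    {sum(guesses)/len(guesses):.2f}")
print(f"accuracy of slow (informed) trials: {sum(informed)/len(informed):.2f}")
```

Sorting trials by processing time in this way is what reveals exactly when the visual information begins to inform the choice.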

In this way, depending on when and how a particular neuron responds during the task, we can circumscribe its potential contribution to the observed behavior. For instance, some neurons may respond to the colored circles early enough to inform the target selection process, whereas others may respond too late to causally influence the choice. Finally, by simulating the activity of populations of neurons, we can test whether their hypothesized contributions explain the participant’s behavior in detail, and we can make testable predictions about how those neurons interact.
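As an illustration of the kind of simulation described above, here is a minimal race-to-threshold sketch (all parameters hypothetical, not a reconstruction of our published models): two motor plans build up toward a saccade threshold, and when the color information arrives it accelerates the plan toward the target while decelerating the plan toward the distracter. Whichever plan reaches threshold first determines the choice and the reaction time.

```python
import random

THRESHOLD = 1000.0   # activity level that triggers a saccade (arbitrary units)
CUE_ARRIVAL_MS = 150 # hypothetical time at which color information takes effect
DT = 1.0             # simulation time step, in ms

def race_trial(rng):
    """Two motor plans race to threshold; color information redirects the race."""
    # Initial build-up rates are random: the ongoing plan may favor either circle.
    rates = {"target": rng.uniform(2.0, 10.0),
             "distracter": rng.uniform(2.0, 10.0)}
    levels = {"target": 0.0, "distracter": 0.0}
    t = 0.0
    while True:
        if t == CUE_ARRIVAL_MS:
            rates["target"] += 8.0      # cue accelerates the plan to the target
            rates["distracter"] -= 4.0  # ...and decelerates the distracter plan
        for plan in levels:
            levels[plan] += max(rates[plan], 0.0) * DT
        for plan in levels:
            if levels[plan] >= THRESHOLD:
                return t, plan == "target"  # (reaction time, correct choice?)
        t += DT

rng = random.Random(7)
results = [race_trial(rng) for _ in range(5000)]
fast = [ok for rt, ok in results if rt < CUE_ARRIVAL_MS]   # decided before cue
slow = [ok for rt, ok in results if rt >= CUE_ARRIVAL_MS]  # cue had time to act
print(f"accuracy of fast (uninformed) choices: {sum(fast)/len(fast):.2f}")
print(f"accuracy of slow (informed) choices:   {sum(slow)/len(slow):.2f}")
```

Even this toy model reproduces the qualitative signature of urgent tasks: choices completed before the cue can act are at chance, while slower choices become increasingly accurate.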

Current Projects