Computational Mechanisms and Neural Systems Underlying Reinforcement Learning

Bioengineering

Bruno Averbeck, PhD
Chief, Section on Learning and Decision Making
National Institute of Mental Health
March 3, 2022 - 4:00pm
Benedum Hall, Room 157

Abstract: Biological agents adapt their behavior using reinforcement learning (RL) to support the survival needs of the individual and the species. In my talk I will discuss the neural and computational mechanisms that support reinforcement learning in biological agents. Many theories of RL focus on a simple model. Anatomically, this model is encompassed by midbrain dopamine neurons and their projections to the striatum. According to this model, midbrain dopamine neurons code reward prediction errors (RPEs), and medium spiny neurons in the striatum integrate the RPEs signaled by the dopamine neurons. Action values are the integral of RPEs, and therefore the striatum is thought to represent the values of actions. Although these structures are important, and the basic model has substantial predictive validity, our work has shown that a broader set of neural systems is important for RL. In my talk I will discuss the role of cortical-striatal circuits, including the amygdala and prefrontal cortex, in RL. Specifically, I will show that the amygdala plays an important role in learning the values of objects in standard bandit paradigms. I will also show that dorsolateral prefrontal cortex carries important signals related to state inference, in the context of a reversal learning experiment in which monkeys learn to rapidly reverse their choice preferences, in a Bayesian manner, when choice-outcome mappings are reversed. Overall, we believe that a broad set of cortical-basal ganglia circuits underlies multiple aspects of RL.
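The computational core of the simple model described above, in which dopamine-like RPEs are integrated into action values during a bandit task, can be sketched in a few lines. This is a minimal illustration, not the speaker's actual model: the learning rate, softmax inverse temperature, and reward probabilities below are arbitrary choices made for the example.

```python
import math
import random

def run_bandit(p_reward=(0.8, 0.2), alpha=0.1, beta=3.0, n_trials=200, seed=0):
    """Simulate a two-armed bandit learner whose action values are
    running integrals of reward prediction errors (RPEs).

    All parameter values are illustrative assumptions, not taken
    from the work described in the talk.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]       # action values (striatal representation in the model)
    choices, rpes = [], []
    for _ in range(n_trials):
        # Softmax choice between the two arms.
        w = [math.exp(beta * v) for v in q]
        a = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
        # Stochastic reward from the chosen arm.
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        rpe = r - q[a]       # dopamine-like reward prediction error
        q[a] += alpha * rpe  # action value is the (leaky) integral of RPEs
        choices.append(a)
        rpes.append(rpe)
    return q, choices, rpes

q, choices, rpes = run_bandit()
```

With these settings the value of the richer arm climbs toward its reward probability while the poorer arm's value stays low, which is the sense in which action values are integrated RPEs. A reversal learning variant would swap `p_reward` mid-session; a purely incremental learner like this one re-learns slowly after the swap, which is the contrast with the rapid, inference-based reversals described in the abstract.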