1. Imtiaz Z, Kato A, Kopell BH, Qasim SE, Davis AN, Martinez LN, Heflin M, Kulkarni K, Morsi A, Gu X, Saez I. Human Substantia Nigra Neurons Encode Reward Expectations. bioRxiv 2024:2024.05.10.593406. [PMID: 38766086] [PMCID: PMC11100806] [DOI: 10.1101/2024.05.10.593406]
Abstract
Dopamine (DA) signals originating from substantia nigra (SN) neurons are centrally involved in the regulation of motor and reward processing. DA signals behaviorally relevant events where reward outcomes differ from expectations (reward prediction errors, RPEs). RPEs play a crucial role in learning optimal courses of action and in determining response vigor when an agent expects rewards. Nevertheless, how reward expectations, crucial for RPE calculations, are conveyed to and represented in the dopaminergic system is not fully understood, especially in the human brain where the activity of DA neurons is difficult to study. One possibility, suggested by evidence from animal models, is that DA neurons explicitly encode reward expectations. Alternatively, they may receive RPE information directly from upstream brain regions. To address whether SN neuron activity directly reflects reward expectation information, we directly examined the encoding of reward expectation signals in human putative DA neurons by performing single-unit recordings from the SN of patients undergoing neurosurgery. Patients played a two-armed bandit decision-making task in which they attempted to maximize reward. We show that neuronal firing rates (FR) of putative DA neurons during the reward expectation period explicitly encode reward expectations. First, activity in these neurons was modulated by previous trial outcomes, such that FR were greater after positive outcomes than after neutral or negative outcome trials. Second, this increase in FR was associated with shorter reaction times, consistent with an invigorating effect of DA neuron activity during expectation. These results suggest that human DA neurons explicitly encode reward expectations, providing a neurophysiological substrate for a signal critical for reward learning.
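The expectation signal described above can be illustrated with a minimal reward-learning sketch (our illustration, not the authors' analysis code): a running value estimate plays the role of the reward expectation carried in pre-outcome firing rates, and the reward prediction error is the outcome minus that expectation.

```python
# Illustrative Rescorla-Wagner-style learner. The value estimate v stands
# in for the reward expectation that the study reports in putative DA
# firing rates during the expectation period; alpha is a made-up rate.

def run_trials(outcomes, alpha=0.2):
    """Track expectation v and reward prediction errors across trials."""
    v = 0.0
    expectations, rpes = [], []
    for r in outcomes:
        expectations.append(v)   # expectation held *before* the outcome
        rpe = r - v              # reward prediction error
        rpes.append(rpe)
        v += alpha * rpe         # update expectation from the RPE
    return expectations, rpes

exp_, rpes = run_trials([1, 1, 1, 0, 1])
# Expectation climbs after rewarded trials and dips after the unrewarded
# one, mirroring higher firing rates after positive-outcome trials.
```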
Affiliation(s)
- Zarghona Imtiaz: Nash Family Department of Neuroscience and the Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ayaka Kato: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Brian H. Kopell: Department of Neurosurgery and Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Salman E. Qasim: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Arianna Neal Davis: Nash Family Department of Neuroscience and the Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Lizbeth Nunez Martinez: Nash Family Department of Neuroscience and the Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Matt Heflin: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kaustubh Kulkarni: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Amr Morsi: Department of Neurosurgery and Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Xiaosi Gu: Nash Family Department of Neuroscience and the Friedman Brain Institute; Department of Psychiatry; and Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ignacio Saez: Nash Family Department of Neuroscience and the Friedman Brain Institute; Department of Neurosurgery; and Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2. Parker NF, Baidya A, Cox J, Haetzel LM, Zhukovskaya A, Murugan M, Engelhard B, Goldman MS, Witten IB. Choice-selective sequences dominate in cortical relative to thalamic inputs to NAc to support reinforcement learning. Cell Rep 2022; 39:110756. [PMID: 35584665] [PMCID: PMC9218875] [DOI: 10.1016/j.celrep.2022.110756]
Abstract
How are actions linked with subsequent outcomes to guide choices? The nucleus accumbens, which is implicated in this process, receives glutamatergic inputs from the prelimbic cortex and midline regions of the thalamus. However, little is known about whether and how representations differ across these input pathways. By comparing these inputs during a reinforcement learning task in mice, we discovered that prelimbic cortical inputs preferentially represent actions and choices, whereas midline thalamic inputs preferentially represent cues. Choice-selective activity in the prelimbic cortical inputs is organized in sequences that persist beyond the outcome. Through computational modeling, we demonstrate that these sequences can support the neural implementation of reinforcement-learning algorithms, in both a circuit model based on synaptic plasticity and one based on neural dynamics. Finally, we test and confirm a prediction of our circuit models by direct manipulation of nucleus accumbens input neurons.
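One way to picture how a persistent choice-selective sequence could support reinforcement learning (a deliberately simplified abstraction, not the circuit models in the paper): the sequence acts like an eligibility trace, keeping the chosen action tagged until the delayed outcome arrives, so plasticity can credit the right action.

```python
# Sketch of delayed credit assignment. `trace` stands in for the
# choice-selective sequence activity that persists beyond the outcome;
# action names and the learning rate are hypothetical.

def assign_credit(weights, chosen, reward, trace=1.0, alpha=0.5):
    """Strengthen only the chosen action's weight when reward arrives,
    because the sequence kept that choice eligible for plasticity."""
    weights = dict(weights)          # leave the input mapping untouched
    weights[chosen] += alpha * trace * reward
    return weights

w = assign_credit({"left": 0.0, "right": 0.0}, chosen="left", reward=1.0)
# Only the chosen action's weight grows; the unchosen one is unchanged.
```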
Affiliation(s)
- Nathan F Parker: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Avinash Baidya: Center for Neuroscience and Department of Physics and Astronomy, University of California, Davis, Davis, CA 95616, USA
- Julia Cox: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Laura M Haetzel: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Anna Zhukovskaya: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Malavika Murugan: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Ben Engelhard: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Mark S Goldman: Center for Neuroscience; Department of Neurobiology, Physiology and Behavior; and Department of Ophthalmology and Vision Science, University of California, Davis, Davis, CA 95616, USA
- Ilana B Witten: Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
3. Shi Z, Jagannathan K, Padley JH, Wang A, Fairchild VP, O'Brien CP, Childress AR, Langleben DD. The role of withdrawal in mesocorticolimbic drug cue reactivity in opioid use disorder. Addict Biol 2021; 26:e12977. [PMID: 33098179] [DOI: 10.1111/adb.12977]
Abstract
Opioid use disorder (OUD) is characterized by heightened cognitive, physiological, and neural responses to opioid-related cues that are mediated by mesocorticolimbic brain pathways. Craving and withdrawal are key symptoms of addiction that persist during physiological abstinence. The present study evaluated the relationship between the brain response to drug cues in OUD and baseline levels of craving and withdrawal. We used functional magnetic resonance imaging (fMRI) to examine brain responses to opioid-related pictures and control pictures in 29 OUD patients. Baseline measures of drug use severity, opioid craving, and withdrawal symptoms were assessed prior to cue exposure and correlated with subsequent brain responses to drug cues. Mediation analysis was conducted to test the indirect effect of drug use severity on brain cue reactivity through craving and withdrawal symptoms. We found that baseline drug use severity and opioid withdrawal symptoms, but not craving, were positively associated with the neural response to drug cues in the nucleus accumbens, orbitofrontal cortex, and amygdala. Withdrawal, but not craving, mediated the effect of drug use severity on the nucleus accumbens' response to drug cues. We did not find similar effects for the neural responses to stimuli unrelated to drugs. Our findings emphasize the central role of withdrawal symptoms as the mediator between the clinical severity of OUD and the brain correlates of sensitization to opioid-related cues. They suggest that in OUD, baseline withdrawal symptoms signal a high vulnerability to drug cues.
Affiliation(s)
- Zhenhao Shi: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Kanchana Jagannathan: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- James H. Padley: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- An-Li Wang: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Victoria P. Fairchild: Department of Psychology, Queens College, The City University of New York, New York, NY, USA
- Charles P. O'Brien: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Anna Rose Childress: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Daniel D. Langleben: Center for Studies of Addiction, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine; Annenberg Public Policy Center, University of Pennsylvania; and Behavioral Health Service, Corporal Michael J. Crescenz Veterans Administration Medical Center, Philadelphia, PA, USA
4. Hamid AA, Frank MJ, Moore CI. Wave-like dopamine dynamics as a mechanism for spatiotemporal credit assignment. Cell 2021; 184:2733-2749.e16. [PMID: 33861952] [PMCID: PMC8122079] [DOI: 10.1016/j.cell.2021.03.046]
Abstract
Significant evidence supports the view that dopamine shapes learning by encoding reward prediction errors. However, it is unknown whether striatal targets receive tailored dopamine dynamics based on regional functional specialization. Here, we report wave-like spatiotemporal activity patterns in dopamine axons and release across the dorsal striatum. These waves switch between activational motifs and organize dopamine transients into localized clusters within functionally related striatal subregions. Notably, wave trajectories were tailored to task demands, propagating from dorsomedial to dorsolateral striatum when rewards are contingent on animal behavior and in the opponent direction when rewards are independent of behavioral responses. We propose a computational architecture in which striatal dopamine waves are sculpted by inference about agency and provide a mechanism to direct credit assignment to specialized striatal subregions. Supporting model predictions, dorsomedial dopamine activity during reward-pursuit signaled the extent of instrumental control and interacted with reward waves to predict future behavioral adjustments.
Affiliation(s)
- Arif A Hamid: Department of Neuroscience and Carney Institute for Brain Science, Brown University, Providence, RI 02912, USA
- Michael J Frank: Department of Cognitive, Linguistic & Psychological Sciences and Carney Institute for Brain Science, Brown University, Providence, RI 02912, USA
- Christopher I Moore: Department of Neuroscience and Carney Institute for Brain Science, Brown University, Providence, RI 02912, USA
5. Inglis JB, Valentin VV, Ashby FG. Modulation of Dopamine for Adaptive Learning: A Neurocomputational Model. Comput Brain Behav 2021; 4:34-52. [PMID: 34151186] [PMCID: PMC8210637] [DOI: 10.1007/s42113-020-00083-x]
Abstract
There have been many proposals that learning rates in the brain are adaptive, in the sense that they increase or decrease depending on environmental conditions. The majority of these models are abstract and make no attempt to describe the neural circuitry that implements the proposed computations. This article describes a biologically detailed computational model that overcomes this shortcoming. Specifically, we propose a neural circuit that implements adaptive learning rates by modulating the gain on the dopamine response to reward prediction errors, and we model activity within this circuit at the level of spiking neurons. The model generates a dopamine signal that depends on the size of the tonically active dopamine neuron population and the phasic spike rate. The model was tested successfully against results from two single-neuron recording studies and a fast-scan cyclic voltammetry study. We conclude by discussing the general applicability of the model to dopamine mediated tasks that transcend the experimental phenomena it was initially designed to address.
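The core idea, abstracted away from the spiking implementation (gain values and rates here are illustrative, not the paper's parameters): multiplying the RPE-driven dopamine response by a gain term changes the effective learning rate without altering the prediction error itself.

```python
# Minimal abstraction of gain-modulated dopamine: the value update is
# driven by gain * RPE, so adapting the gain adapts the learning rate.

def adaptive_update(v, r, gain, alpha=0.1):
    """One value update with a gain-modulated dopamine signal."""
    rpe = r - v              # reward prediction error (unchanged by gain)
    da = gain * rpe          # gain-modulated dopamine response
    return v + alpha * da, rpe

v_low, _ = adaptive_update(0.0, 1.0, gain=1.0)
v_high, _ = adaptive_update(0.0, 1.0, gain=3.0)
# Higher gain -> larger value update for the very same RPE.
```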
Affiliation(s)
- Jeffrey B Inglis: Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Vivian V Valentin: Department of Psychological & Brain Sciences, University of California, Santa Barbara
- F Gregory Ashby: Department of Psychological & Brain Sciences, University of California, Santa Barbara
6. Neural Mechanisms of Human Decision-Making. Cogn Affect Behav Neurosci 2021; 21:35-57. [PMID: 33409958] [DOI: 10.3758/s13415-020-00842-0]
Abstract
We present a theory and neural network model of the neural mechanisms underlying human decision-making. We propose a detailed model of the interaction between brain regions, under a proposer-predictor-actor-critic framework. This theory is based on detailed animal data and theories of action-selection. Those theories are adapted to serial operation to bridge levels of analysis and explain human decision-making. Task-relevant areas of cortex propose a candidate plan using fast, model-free, parallel neural computations. Other areas of cortex and medial temporal lobe can then predict likely outcomes of that plan in this situation. This optional prediction- (or model-) based computation can produce better accuracy and generalization, at the expense of speed. Next, linked regions of basal ganglia act to accept or reject the proposed plan based on its reward history in similar contexts. If that plan is rejected, the process repeats to consider a new option. The reward-prediction system acts as a critic to determine the value of the outcome relative to expectations and produce dopamine as a training signal for cortex and basal ganglia. By operating sequentially and hierarchically, the same mechanisms previously proposed for animal action-selection could explain the most complex human plans and decisions. We discuss explanations of model-based decisions, habitization, and risky behavior based on the computational model.
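A toy serial gating loop in the spirit of this proposer-predictor-actor-critic account (plan names, values, and the threshold are hypothetical, and the actual model is a neural network, not a lookup table): candidate plans are considered one at a time, the basal ganglia gate accepts or rejects each based on reward history, and a critic's prediction error trains the stored values.

```python
# Serial accept/reject sketch: propose, gate on learned value, then let
# the critic's RPE (a dopamine-like training signal) update that value.

values = {"plan_a": 0.2, "plan_b": 0.7}   # reward history per plan (made up)

def choose(plans, threshold=0.5):
    """Consider proposals serially; accept the first whose learned value
    clears the threshold, as the basal ganglia gate would."""
    for plan in plans:
        if values[plan] >= threshold:
            return plan
    return plans[-1]          # fall back to the last option considered

def critic_update(plan, reward, alpha=0.3):
    """Critic: compare outcome to expectation and train the value."""
    rpe = reward - values[plan]
    values[plan] += alpha * rpe
    return rpe

chosen = choose(["plan_a", "plan_b"])     # plan_a rejected, plan_b accepted
rpe = critic_update(chosen, reward=1.0)   # positive RPE strengthens plan_b
```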
7. Mollick JA, Hazy TE, Krueger KA, Nair A, Mackie P, Herd SA, O'Reilly RC. A systems-neuroscience model of phasic dopamine. Psychol Rev 2020; 127:972-1021. [PMID: 32525345] [PMCID: PMC8453660] [DOI: 10.1037/rev0000199]
Abstract
We describe a neurobiologically informed computational model of phasic dopamine signaling to account for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism. The central feature of this PVLV framework is a distinction between a primary value (PV) system for anticipating primary rewards (Unconditioned Stimuli [USs]), and a learned value (LV) system for learning about stimuli associated with such rewards (CSs). The LV system represents the amygdala, which drives phasic bursting in midbrain dopamine areas, while the PV system represents the ventral striatum, which drives shunting inhibition of dopamine for expected USs (via direct inhibitory projections) and phasic pausing for expected USs (via the lateral habenula). Our model accounts for data supporting the separability of these systems, including individual differences in CS-based (sign-tracking) versus US-based learning (goal-tracking). Both systems use competing opponent-processing pathways representing evidence for and against specific USs, which can explain data dissociating the processes involved in acquisition versus extinction conditioning. Further, opponent processing proved critical in accounting for the full range of conditioned inhibition phenomena, and the closely related paradigm of second-order conditioning. Finally, we show how additional separable pathways representing aversive USs, largely mirroring those for appetitive USs, also have important differences from the positive valence case, allowing the model to account for several important phenomena in aversive conditioning. Overall, accounting for all of these phenomena strongly constrains the model, thus providing a well-validated framework for understanding phasic dopamine signaling. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
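The PV/LV division of labor can be caricatured in a few lines (a loose sketch under our own simplifying assumptions; the actual model's opponent pathways, amygdala/striatum/habenula circuitry, and spiking dynamics are all omitted): the LV system drives a burst at CS onset, while the PV system subtracts the expectation at US time, shunting predicted rewards toward zero and pushing omitted-but-expected rewards below baseline.

```python
# Caricature of the PVLV split: LV (amygdala-like) drives CS bursts;
# PV (ventral-striatum-like) cancels expected USs, with omissions
# producing a habenula-driven pause (negative response).

def dopamine(cs_value, us_reward, pv_expectation):
    """Return (CS-time response, US-time response) for one trial."""
    cs_burst = cs_value                       # LV system: learned CS value
    us_response = us_reward - pv_expectation  # PV system: shunted US signal
    return cs_burst, us_response

# Fully predicted reward: burst at the CS, roughly nothing at the US.
burst, at_us = dopamine(cs_value=1.0, us_reward=1.0, pv_expectation=1.0)
# Expected reward omitted: US-time response goes negative (pause).
_, omission = dopamine(cs_value=1.0, us_reward=0.0, pv_expectation=1.0)
```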
Affiliation(s)
- Jessica A Mollick, Thomas E Hazy, Kai A Krueger, Ananta Nair, Prescott Mackie, Seth A Herd, and Randall C O'Reilly: Department of Psychology and Neuroscience, University of Colorado Boulder
8. O'Reilly RC. Unraveling the Mysteries of Motivation. Trends Cogn Sci 2020; 24:425-434. [PMID: 32392468] [PMCID: PMC7219631] [DOI: 10.1016/j.tics.2020.03.001]
Abstract
Motivation plays a central role in human behavior and cognition but is not well captured by widely used artificial intelligence (AI) and computational modeling frameworks. This Opinion article addresses two central questions regarding the nature of motivation: what are the nature and dynamics of the internal goals that drive our motivational system and how can this system be sufficiently flexible to support our ability to rapidly adapt to novel situations, tasks, etc.? In reviewing existing systems and neuroscience research and theorizing on these questions, a wealth of insights to constrain the development of computational models of motivation can be found.
Affiliation(s)
- Randall C O'Reilly: Departments of Psychology and Computer Science, Center for Neuroscience, University of California, Davis, 1544 Newton Ct, Davis, CA 95816, USA
9. Hasselmo ME, Alexander AS, Hoyland A, Robinson JC, Bezaire MJ, Chapman GW, Saudargiene A, Carstensen LC, Dannenberg H. The Unexplored Territory of Neural Models: Potential Guides for Exploring the Function of Metabotropic Neuromodulation. Neuroscience 2020; 456:143-158. [PMID: 32278058] [DOI: 10.1016/j.neuroscience.2020.03.048]
Abstract
The space of possible neural models is enormous and under-explored. Single cell computational neuroscience models account for a range of dynamical properties of membrane potential, but typically do not address network function. In contrast, most models focused on network function address the dimensions of excitatory weight matrices and firing thresholds without addressing the complexities of metabotropic receptor effects on intrinsic properties. There are many under-explored dimensions of neural parameter space, and the field needs a framework for representing what has been explored and what has not. Possible frameworks include maps of parameter spaces, or efforts to categorize the fundamental elements and molecules of neural circuit function. Here we review dimensions that are under-explored in network models that include the metabotropic modulation of synaptic plasticity and presynaptic inhibition, spike frequency adaptation due to calcium-dependent potassium currents, and afterdepolarization due to calcium-sensitive non-specific cation currents and hyperpolarization activated cation currents. Neuroscience research should more effectively explore possible functional models incorporating under-explored dimensions of neural function.
Affiliation(s)
- Michael E Hasselmo, Andrew S Alexander, Alec Hoyland, Jennifer C Robinson, Marianne J Bezaire, G William Chapman, Ausra Saudargiene, Lucas C Carstensen, and Holger Dannenberg: Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
10. Synchronicity: The Role of Midbrain Dopamine in Whole-Brain Coordination. eNeuro 2019; 6:ENEURO.0345-18.2019. [PMID: 31053604] [PMCID: PMC6500793] [DOI: 10.1523/eneuro.0345-18.2019]
Abstract
Midbrain dopamine seems to play an outsized role in motivated behavior and learning. Widely associated with mediating reward-related behavior, decision making, and learning, dopamine continues to generate controversies in the field. While many studies and theories focus on what dopamine cells encode, the question of how the midbrain derives the information it encodes is poorly understood and comparatively less addressed. Recent anatomical studies suggest greater diversity and complexity of afferent inputs than previously appreciated, requiring rethinking of prior models. Here, we elaborate a hypothesis that construes midbrain dopamine as implementing a Bayesian selector in which individual dopamine cells sample afferent activity across distributed brain substrates, comprising evidence to be evaluated on the extent to which stimuli in the on-going sensorimotor stream organizes distributed, parallel processing, reflecting implicit value. To effectively generate a temporally resolved phasic signal, a population of dopamine cells must exhibit synchronous activity. We argue that synchronous activity across a population of dopamine cells signals consensus across distributed afferent substrates, invigorating responding to recognized opportunities and facilitating further learning. In framing our hypothesis, we shift from the question of how value is computed to the broader question of how the brain achieves coordination across distributed, parallel processing. We posit the midbrain is part of an “axis of agency” in which the prefrontal cortex (PFC), basal ganglia (BGS), and midbrain form an axis mediating control, coordination, and consensus, respectively.
11. O'Reilly RC, Russin J, Herd SA. Computational models of motivated frontal function. Handb Clin Neurol 2019; 163:317-332. [PMID: 31590738] [DOI: 10.1016/b978-0-12-804281-6.00017-3]
Abstract
Computational models of frontal function have made important contributions to understanding how the frontal lobes support a wide range of important functions, in their interactions with other brain areas including, critically, the basal ganglia (BG). We focus here on the specific case of how different frontal areas support goal-directed, motivated decision-making, by representing three essential types of information: possible plans of action (in more dorsal and lateral frontal areas), affectively significant outcomes of those action plans (in ventral, medial frontal areas including the orbital frontal cortex), and the overall utility of a given plan compared to other possible courses of action (in anterior cingulate cortex). Computational models of goal-directed action selection at multiple different levels of analysis provide insight into the nature of learning and processing in these areas and the relative contributions of the frontal cortex versus the BG. The most common neurologic disorders implicate these areas, and understanding their precise function and modes of dysfunction can contribute to the new field of computational psychiatry, within the broader field of computational neuroscience.
Affiliation(s)
- Randall C O'Reilly, Jacob Russin, and Seth A Herd: Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO, United States
12.
Abstract
The Virtual Personalities Model is a motive-based neural network model that provides both a psychological model and a computational implementation that explicates the dynamics, and the often large within-person variability, in behavior that arises over time. At the same time, across many virtual personalities, the same model can produce between-subject variability in behavior that, when factor analyzed, yields familiar personality structure (e.g., the Big-5). First, we describe our personality model and its implementation as a neural network model. Second, we focus on detailing the neurobiological underpinnings of this model. Third, we examine the learning mechanisms, and their biological substrates, as ways that the model gets "wired up", discussing Pavlovian and instrumental conditioning, Pavlovian-to-instrumental transfer (PIT), and habits. Finally, we describe the dynamics of how initial differences in propensities (e.g., dopamine functioning), wiring differences due to experience, and other factors could operate together to develop and change personality over time, and how this might be empirically examined. Thus, our goal is to contribute to the rising chorus of voices seeking a more precise, neurobiologically based science of the complex dynamics underlying personality.
13. Pauli WM, Nili AN, Tyszka JM. A high-resolution probabilistic in vivo atlas of human subcortical brain nuclei. Sci Data 2018; 5:180063. [PMID: 29664465] [PMCID: PMC5903366] [DOI: 10.1038/sdata.2018.63]
Abstract
Recent advances in magnetic resonance imaging methods, including data acquisition, pre-processing and analysis, have benefited research on the contributions of subcortical brain nuclei to human cognition and behavior. At the same time, these developments have led to an increasing need for a high-resolution probabilistic in vivo anatomical atlas of subcortical nuclei. In order to address this need, we constructed high spatial resolution, three-dimensional templates, using high-accuracy diffeomorphic registration of T1- and T2- weighted structural images from 168 typical adults between 22 and 35 years old. In these templates, many tissue boundaries are clearly visible, which would otherwise be impossible to delineate in data from individual studies. The resulting delineations of subcortical nuclei complement current histology-based atlases. We further created a companion library of software tools for atlas development, to offer an open and evolving resource for the creation of a crowd-sourced in vivo probabilistic anatomical atlas of the human brain.
Affiliation(s)
- Wolfgang M Pauli
- Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA; Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125, USA
- J Michael Tyszka
- Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
14. Pauli WM, Cockburn J, Pool ER, Pérez OD, O’Doherty JP. Computational approaches to habits in a model-free world. Curr Opin Behav Sci 2018. [DOI: 10.1016/j.cobeha.2017.12.001]
15. Maes PJ, Nijs L, Leman M. A Conceptual Framework for Music-Based Interaction Systems. Springer Handbook of Systematic Musicology 2018. [DOI: 10.1007/978-3-662-55004-5_37]
16. Angwin AJ, Wilson WJ, Arnott WL, Signorini A, Barry RJ, Copland DA. White noise enhances new-word learning in healthy adults. Sci Rep 2017; 7:13045. [PMID: 29026121 PMCID: PMC5638812 DOI: 10.1038/s41598-017-13383-3]
Abstract
Research suggests that listening to white noise may improve some aspects of cognitive performance in individuals with lower attention. This study investigated the impact of white noise on new word learning in healthy young adults, and whether this effect was mediated by executive attention skills. Eighty participants completed a single training session to learn the names of twenty novel objects. The session comprised 5 learning phases, each followed by a recall test. A final recognition test was also administered. Half the participants listened to white noise during the learning phases, and half completed the learning in silence. The noise group demonstrated superior recall accuracy over time, which was not impacted by participant attentional capacity. Recognition accuracy was near ceiling for both groups. These findings suggest that white noise has the capacity to enhance lexical acquisition.
Affiliation(s)
- Anthony J Angwin
- University of Queensland, School of Health and Rehabilitation Sciences, Brisbane, Australia
- Wayne J Wilson
- University of Queensland, School of Health and Rehabilitation Sciences, Brisbane, Australia
- Wendy L Arnott
- University of Queensland, School of Health and Rehabilitation Sciences, Brisbane, Australia; Hear and Say, Brisbane, Australia
- Annabelle Signorini
- University of Queensland, School of Health and Rehabilitation Sciences, Brisbane, Australia
- Robert J Barry
- University of Wollongong, School of Psychology and Brain & Behaviour Research Institute, Wollongong, Australia
- David A Copland
- University of Queensland, School of Health and Rehabilitation Sciences, Brisbane, Australia; University of Queensland, UQ Centre for Clinical Research, Brisbane, Australia
17. Jilk DJ, Herd SJ, Read SJ, O’Reilly RC. Anthropomorphic reasoning about neuromorphic AGI safety. J Exp Theor Artif Intell 2017. [DOI: 10.1080/0952813x.2017.1354081]
Affiliation(s)
- Stephen J. Read
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
- Randall C. O’Reilly
- eCortex, Inc., Westminster, CO, USA
- Department of Psychology & Neuroscience, University of Colorado at Boulder, Boulder, CO, USA
18. Tyree SM, de Lecea L. Lateral Hypothalamic Control of the Ventral Tegmental Area: Reward Evaluation and the Driving of Motivated Behavior. Front Syst Neurosci 2017; 11:50. [PMID: 28729827 PMCID: PMC5498520 DOI: 10.3389/fnsys.2017.00050]
Abstract
The lateral hypothalamus (LH) plays an important role in many motivated behaviors and states, including sleep-wake states, food intake, drug-seeking, and energy balance. It is also home to a heterogeneous population of neurons that express and co-express multiple neuropeptides including hypocretin (Hcrt), melanin-concentrating hormone (MCH), cocaine- and amphetamine-regulated transcript (CART) and neurotensin (NT). These neurons project widely throughout the brain to areas such as the locus coeruleus, the bed nucleus of the stria terminalis, the amygdala and the ventral tegmental area (VTA). Lateral hypothalamic projections to the VTA are believed to be important for driving behavior due to their engagement of dopaminergic reward circuitry. The purpose of this article is to review current knowledge regarding the lateral hypothalamic connections to the VTA and the role they play in driving these behaviors.
Affiliation(s)
- Susan M Tyree
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Luis de Lecea
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
19. The dopamine hypothesis of bipolar affective disorder: the state of the art and implications for treatment. Mol Psychiatry 2017; 22:666-679. [PMID: 28289283 PMCID: PMC5401767 DOI: 10.1038/mp.2017.16]
Abstract
Bipolar affective disorder is a common neuropsychiatric disorder. Although its neurobiological underpinnings are incompletely understood, the dopamine hypothesis has been a key theory of the pathophysiology of both manic and depressive phases of the illness for over four decades. The increased use of antidopaminergics in the treatment of this disorder and new in vivo neuroimaging and post-mortem studies makes it timely to review this theory. To do this, we conducted a systematic search for post-mortem, pharmacological, functional magnetic resonance and molecular imaging studies of dopamine function in bipolar disorder. Converging findings from pharmacological and imaging studies support the hypothesis that a state of hyperdopaminergia, specifically elevations in D2/3 receptor availability and a hyperactive reward processing network, underlies mania. In bipolar depression imaging studies show increased dopamine transporter levels, but changes in other aspects of dopaminergic function are inconsistent. Puzzlingly, pharmacological evidence shows that both dopamine agonists and antidopaminergics can improve bipolar depressive symptoms, and perhaps actions at other receptors may reconcile these findings. Tentatively, this evidence suggests a model where an elevation in striatal D2/3 receptor availability would lead to increased dopaminergic neurotransmission and mania, whilst increased striatal dopamine transporter (DAT) levels would lead to reduced dopaminergic function and depression. Thus, it can be speculated that a failure of dopamine receptor and transporter homoeostasis might underlie the pathophysiology of this disorder. The limitations of this model include its reliance on pharmacological evidence, as these studies could potentially affect other monoamines, and the scarcity of imaging evidence on dopaminergic function. This model, if confirmed, has implications for developing new treatment strategies such as reducing the dopamine synthesis and/or release in mania and DAT blockade in bipolar depression.
20. Watabe-Uchida M, Eshel N, Uchida N. Neural Circuitry of Reward Prediction Error. Annu Rev Neurosci 2017; 40:373-394.
Abstract
Dopamine neurons facilitate learning by calculating reward prediction error, or the difference between expected and actual reward. Despite two decades of research, it remains unclear how dopamine neurons make this calculation. Here we review studies that tackle this problem from a diverse set of approaches, from anatomy to electrophysiology to computational modeling and behavior. Several patterns emerge from this synthesis: that dopamine neurons themselves calculate reward prediction error, rather than inherit it passively from upstream regions; that they combine multiple separate and redundant inputs, which are themselves interconnected in a dense recurrent network; and that despite the complexity of inputs, the output from dopamine neurons is remarkably homogeneous and robust. The more we study this simple arithmetic computation, the knottier it appears to be, suggesting a daunting (but stimulating) path ahead for neuroscience more generally.
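The prediction-error arithmetic discussed in this review is commonly formalized as a Rescorla-Wagner / temporal-difference style update. As a minimal illustrative sketch in Python (not code from the reviewed work; names and values are made up for illustration):

```python
# Illustrative sketch: the reward prediction error (RPE) is the difference
# between the reward actually received and the reward expected, and it
# drives learning by nudging the expectation toward the outcome.

def update_expectation(value, reward, alpha=0.1):
    """One Rescorla-Wagner-style update; alpha is the learning rate."""
    rpe = reward - value           # actual minus expected reward
    value = value + alpha * rpe    # expectation moves toward the outcome
    return value, rpe

value = 0.0
for _ in range(100):
    value, rpe = update_expectation(value, reward=1.0)
# The expectation converges toward the delivered reward and the RPE
# shrinks toward zero: a fully predicted reward elicits almost no error.
```

This captures only the scalar arithmetic; the review's point is that the neural implementation of even this simple subtraction involves a dense, redundant input network.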
Affiliation(s)
- Mitsuko Watabe-Uchida
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Neir Eshel
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California 94305
- Naoshige Uchida
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
21. Menegas W, Babayan BM, Uchida N, Watabe-Uchida M. Opposite initialization to novel cues in dopamine signaling in ventral and posterior striatum in mice. eLife 2017; 6:e21886. [PMID: 28054919 PMCID: PMC5271609 DOI: 10.7554/elife.21886]
Abstract
Dopamine neurons are thought to encode novelty in addition to reward prediction error (the discrepancy between actual and predicted values). In this study, we compared dopamine activity across the striatum using fiber fluorometry in mice. During classical conditioning, we observed opposite dynamics in dopamine axon signals in the ventral striatum (‘VS dopamine’) and the posterior tail of the striatum (‘TS dopamine’). TS dopamine showed strong excitation to novel cues, whereas VS dopamine showed no responses to novel cues until they had been paired with a reward. TS dopamine cue responses decreased over time, depending on what the cue predicted. Additionally, TS dopamine showed excitation to several types of stimuli including rewarding, aversive, and neutral stimuli, whereas VS dopamine showed excitation only to reward or reward-predicting cues. Together, these results demonstrate that dopamine novelty signals are localized in TS along with general salience signals, while VS dopamine reliably encodes reward prediction error.
New experiences trigger a variety of responses in animals. We are surprised by, move towards, and often explore new objects. But how does the brain control these responses? Dopamine is a molecule that controls many processes in the brain and plays critical roles in various mental disorders, diseases that affect movement, and addiction. Rewarding experiences (like a glass of cold water on a hot day) can trigger dopamine neurons, and studies have also shown that dopamine neurons respond to new experiences. This suggested that novelty may be rewarding in itself, or that novelty may signal the potential for future reward. On the other hand, it may be that different groups of dopamine neurons play different roles in responding to new or rewarding experiences. In 2015, it was reported that dopamine neurons connected to the rear part of an area in the brain called the striatum receive signals from different parts of the brain than most other dopamine neurons. The dopamine neurons connected to this “tail” of the striatum preferentially received inputs from regions involved in arousal rather than reward, suggesting that they may have a unique role and transmit a different type of information. Now, Menegas et al. have shown that dopamine signals in different areas of the striatum separate reward from novelty and other signals in mice. The results demonstrate that new odors activate dopamine neurons projecting to the tail of the striatum, but that this activity fades as the novelty wears off (as the mice learn to associate the odor with a particular outcome). By contrast, dopamine neurons projecting to the front of the striatum do not respond to novelty, but rather become more active as mice learn which odors accompany rewards (only responding to odors that predict reward). The experiments also show that dopamine neurons in the tail of the striatum encode information about the importance of a stimulus. Together, these findings indicate that some of the roles dopamine plays in the brain may not be related to reward, but are instead linked to the novelty and importance of the stimulus. The next challenge will be to find out how the separate reward and novelty signals in dopamine neurons relate to the animals’ behavior. This may help us to better understand dopamine-related psychiatric conditions, such as depression and addiction.
Affiliation(s)
- William Menegas
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, United States
- Benedicte M Babayan
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, United States
- Naoshige Uchida
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, United States
- Mitsuko Watabe-Uchida
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, United States
22. Maes PJ, Buhmann J, Leman M. 3Mo: A Model for Music-Based Biofeedback. Front Neurosci 2016; 10:548. [PMID: 27994535 PMCID: PMC5133250 DOI: 10.3389/fnins.2016.00548]
Abstract
In the domain of sports and motor rehabilitation, it is of major importance to regulate and control physiological processes and physical motion optimally. For that purpose, real-time auditory feedback of physiological and physical information based on sound signals, often termed “sonification,” has proven particularly useful. However, the use of music in biofeedback systems has been much less explored. In the current article, we assert that the use of music, and musical principles, can have a major added value, on top of mere sound signals, to the benefit of psychological and physical optimization of sports and motor rehabilitation tasks. In this article, we present the 3Mo model to describe three main functions of music that contribute to these benefits. These functions relate the power of music to Motivate, and to Monitor and Modify physiological and physical processes. The model brings together concepts and theories related to human sensorimotor interaction with music, and specifies the underlying psychological and physiological principles. This 3Mo model is intended to provide a conceptual framework that guides future research on musical biofeedback systems in the domain of sports and motor rehabilitation.
Affiliation(s)
- Pieter-Jan Maes
- Department of Art, Music and Theatre Sciences, Institute for Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
- Jeska Buhmann
- Department of Art, Music and Theatre Sciences, Institute for Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
- Marc Leman
- Department of Art, Music and Theatre Sciences, Institute for Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
23. Canavier CC, Evans RC, Oster AM, Pissadaki EK, Drion G, Kuznetsov AS, Gutkin BS. Implications of cellular models of dopamine neurons for disease. J Neurophysiol 2016; 116:2815-2830. [PMID: 27582295 DOI: 10.1152/jn.00530.2016]
Abstract
This review addresses the present state of single-cell models of the firing pattern of midbrain dopamine neurons and the insights that can be gained from these models into the underlying mechanisms for diseases such as Parkinson's, addiction, and schizophrenia. We will explain the analytical technique of separation of time scales and show how it can produce insights into mechanisms using simplified single-compartment models. We also use morphologically realistic multicompartmental models to address spatially heterogeneous aspects of neural signaling and neural metabolism. Separation of time scale analyses are applied to pacemaking, bursting, and depolarization block in dopamine neurons. Differences in subpopulations with respect to metabolic load are addressed using multicompartmental models.
Affiliation(s)
- Carmen C Canavier
- Department of Cell Biology and Anatomy, Louisiana State University Health Sciences Center, New Orleans, Louisiana
- Rebekah C Evans
- Cellular Neurophysiology Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland
- Andrew M Oster
- Department of Mathematics, Eastern Washington University, Cheney, Washington
- Eleftheria K Pissadaki
- IBM T.J. Watson Research Center, Yorktown Heights, New York; Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Guillaume Drion
- Department of Electrical Engineering and Computer Science, University of Liege, Liege, Belgium
- Alexey S Kuznetsov
- Department of Mathematical Sciences and Center for Mathematical Biosciences, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana
- Boris S Gutkin
- Group for Neural Theory, LNC INSERM U960, Département d'Études Cognitives, École Normale Supérieure PSL Research University, Paris, France; Center for Cognition and Decision Making, NRU Higher School of Economics, Moscow, Russia
24. Maes PJ. Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review. Front Psychol 2016; 7:308. [PMID: 26973587 PMCID: PMC4778011 DOI: 10.3389/fpsyg.2016.00308]
Abstract
In a previous article, we reviewed empirical evidence demonstrating action-based effects on music perception to substantiate the musical embodiment thesis (Maes et al., 2014). Evidence was largely based on studies demonstrating that music perception automatically engages motor processes, or that body states/movements influence music perception. Here, we argue that more rigorous evidence is needed before any decisive conclusion in favor of a “radical” musical embodiment thesis can be posited. In the current article, we provide a focused review of recent research to collect further evidence for the “radical” embodiment thesis that music perception is a dynamic process firmly rooted in the natural disposition of sounds and the human auditory and motor system. We emphasize, though, that on top of these natural dispositions, long-term processes operate, rooted in repeated sensorimotor experiences and leading to learning, prediction, and error minimization. This approach sheds new light on the development of musical repertoires, and may refine our understanding of action-based effects on music perception as discussed in our previous article (Maes et al., 2014). Additionally, we discuss two of our recent empirical studies demonstrating that music performance relies on similar principles of sensorimotor dynamics and predictive processing.
Affiliation(s)
- Pieter-Jan Maes
- Department of Art, Music, and Theatre Sciences, IPEM, Ghent University, Ghent, Belgium
25. Shellshear L, MacDonald AD, Mahoney J, Finch E, McMahon K, Silburn P, Nathan PJ, Copland DA. Levodopa enhances explicit new-word learning in healthy adults: a preliminary study. Hum Psychopharmacol 2015; 30:341-9. [PMID: 25900350 DOI: 10.1002/hup.2480]
Abstract
OBJECTIVE While the role of dopamine in modulating executive function, working memory and associative learning has been established, its role in word learning and language processing more generally is not clear. This preliminary study investigated the impact of increased synaptic dopamine levels on new-word learning ability in healthy young adults using an explicit learning paradigm. METHOD A double-blind, placebo-controlled, between-groups design was used. Participants completed five learning sessions over 1 week with levodopa or placebo administered at each session (five doses, 100 mg). Each session involved a study phase followed by a test phase. Test phases involved recall and recognition tests of the new (non-word) names previously paired with unfamiliar objects (half with semantic descriptions) during the study phase. RESULTS The levodopa group showed superior recall accuracy for new words over five learning sessions compared with the placebo group and better recognition accuracy at a 1-month follow-up for words learnt with a semantic description. CONCLUSIONS These findings suggest that dopamine boosts initial lexical acquisition and enhances longer-term consolidation of words learnt with semantic information, consistent with dopaminergic enhancement of semantic salience.
Affiliation(s)
- Leanne Shellshear
- UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia; School of Health and Rehabilitation Sciences, Division of Speech Pathology, The University of Queensland, Brisbane, Australia
- Anna D MacDonald
- UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia
- Jeffrey Mahoney
- UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia
- Emma Finch
- School of Health and Rehabilitation Sciences, Division of Speech Pathology, The University of Queensland, Brisbane, Australia
- Katie McMahon
- UQ Centre for Advanced Imaging, The University of Queensland, Brisbane, Australia
- Peter Silburn
- UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia
- Pradeep J Nathan
- Brain Mapping Unit, Department of Psychiatry, University of Cambridge, Cambridge, UK; School of Psychology and Psychiatry, Monash University, Melbourne, Australia
- David A Copland
- UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia; School of Health and Rehabilitation Sciences, Division of Speech Pathology, The University of Queensland, Brisbane, Australia
26. Eshel N, Bukwich M, Rao V, Hemmelder V, Tian J, Uchida N. Arithmetic and local circuitry underlying dopamine prediction errors. Nature 2015; 525:243-6. [PMID: 26322583 PMCID: PMC4567485 DOI: 10.1038/nature14855]
Abstract
Dopamine neurons are thought to facilitate learning by comparing actual and expected reward. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area (VTA) while mice engaged in classical conditioning. By manipulating the temporal expectation of reward, we demonstrate that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain. Furthermore, selectively exciting and inhibiting neighbouring GABA neurons in the VTA reveals that these neurons are a source of subtraction: they inhibit dopamine neurons when reward is expected, causally contributing to prediction error calculations. Finally, bilaterally stimulating VTA GABA neurons dramatically reduces anticipatory licking to conditioned odours, consistent with an important role for these neurons in reinforcement learning. Together, our results uncover the arithmetic and local circuitry underlying dopamine prediction errors.
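The paper's central claim is that expectation is combined subtractively rather than, say, divisively. A toy numerical contrast (illustrative only, with made-up values; not the authors' analysis code):

```python
# Illustrative contrast between a subtractive and a divisive way of
# combining reward with expectation. Subtraction shifts responses to all
# reward sizes down by the same amount; division rescales them by a
# common factor.

def subtractive_error(reward, expectation):
    return reward - expectation

def divisive_error(reward, expectation):
    return reward / (1.0 + expectation)

rewards = [1.0, 2.0, 4.0]
expectation = 1.0
sub = [subtractive_error(r, expectation) for r in rewards]  # [0.0, 1.0, 3.0]
div = [divisive_error(r, expectation) for r in rewards]     # [0.5, 1.0, 2.0]
```

The subtractive form is the one that matches the reinforcement-learning notion of a prediction error, which is why the paper's finding is notable.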
Affiliation(s)
- Neir Eshel
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
- Michael Bukwich
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
- Vinod Rao
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
- Vivian Hemmelder
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
- Ju Tian
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
- Naoshige Uchida
- Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA
27. Morita K, Kawaguchi Y. Computing reward-prediction error: an integrated account of cortical timing and basal-ganglia pathways for appetitive and aversive learning. Eur J Neurosci 2015; 42:2003-21. [PMID: 26095906 PMCID: PMC5034842 DOI: 10.1111/ejn.12994]
Abstract
There are two prevailing notions regarding the involvement of the corticobasal ganglia system in value‐based learning: (i) the direct and indirect pathways of the basal ganglia are crucial for appetitive and aversive learning, respectively, and (ii) the activity of midbrain dopamine neurons represents reward‐prediction error. Although (ii) constitutes a critical assumption of (i), it remains elusive how (ii) holds given (i), with the basal‐ganglia influence on the dopamine neurons. Here we present a computational neural‐circuit model that potentially resolves this issue. Based on the latest analyses of the heterogeneous corticostriatal neurons and connections, our model posits that the direct and indirect pathways, respectively, represent the values of upcoming and previous actions, and up‐regulate and down‐regulate the dopamine neurons via the basal‐ganglia output nuclei. This explains how the difference between the upcoming and previous values, which constitutes the core of reward‐prediction error, is calculated. Simultaneously, it predicts that blockade of the direct/indirect pathway causes a negative/positive shift of reward‐prediction error and thereby impairs learning from positive/negative error, i.e. appetitive/aversive learning. Through simulation of reward‐reversal learning and punishment‐avoidance learning, we show that our model could indeed account for the experimentally observed features that are suggested to support notion (i) and could also provide predictions on neural activity. We also present a behavioral prediction of our model, through simulation of inter‐temporal choice, on how the balance between the two pathways relates to the subject's time preference. These results indicate that our model, incorporating the heterogeneity of the cortical influence on the basal ganglia, is expected to provide a closed‐circuit mechanistic understanding of appetitive/aversive learning.
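The difference between upcoming and previous values that the model treats as the core of the reward-prediction error resembles a temporal-difference (TD) error; a minimal sketch (illustrative, with made-up values; not the authors' simulation code):

```python
# Illustrative TD-style error: reward received plus the value of the
# upcoming state/action minus the value of the previous one. In the model
# reviewed above, the direct pathway is proposed to carry the upcoming
# value (up-regulating dopamine neurons) and the indirect pathway the
# previous value (down-regulating them).

def td_error(reward, value_upcoming, value_previous, gamma=1.0):
    return reward + gamma * value_upcoming - value_previous

better = td_error(reward=0.0, value_upcoming=0.8, value_previous=0.5)
worse = td_error(reward=0.0, value_upcoming=0.2, value_previous=0.5)
# Positive error when prospects improve, negative when they worsen,
# matching the sign-dependent roles the model assigns to the two pathways.
```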
Affiliation(s)
- Kenji Morita
- Physical and Health Education, Graduate School of Education, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Yasuo Kawaguchi
- Division of Cerebral Circuitry, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, SOKENDAI (The Graduate University for Advanced Studies), Okazaki, Japan; Japan Science and Technology Agency, Core Research for Evolutional Science and Technology, Tokyo, Japan
28. Hoffmann S, Beste C. A perspective on neural and cognitive mechanisms of error commission. Front Behav Neurosci 2015; 9:50. [PMID: 25784865 PMCID: PMC4347623 DOI: 10.3389/fnbeh.2015.00050]
Abstract
Behavioral adaptation and cognitive control are crucial for goal-reaching behaviors. Every creature is ubiquitously faced with choices between behavioral alternatives. Common sense suggests that errors are an important source of information in the regulation of such processes. Several theories exist regarding cognitive control and the processing of undesired outcomes. However, most of these models focus on the consequences of an error, and less attention has been paid to the mechanisms that underlie the commission of an error. In this article, we present an integrative review of neuro-cognitive models that detail the determinants of the occurrence of response errors. The factors that may determine the likelihood of committing errors are likely related to the stability of task-representations in prefrontal networks, attentional selection mechanisms and mechanisms of action selection in basal ganglia circuits. An important conclusion is that the likelihood of committing an error is not stable over time but rather changes depending on the interplay of different functional neuro-anatomical and neuro-biological systems. We describe factors that might determine the time-course of cognitive control and the need to adapt behavior following response errors. Finally, we outline the mechanisms that may prove useful for predicting the outcomes of cognitive control and the emergence of response errors in future research.
Affiliation(s)
- Sven Hoffmann
- Performance Psychology, Institute of Psychology, German Sport University Cologne, Cologne, Germany
- Christian Beste
- Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine of the TU Dresden, University Hospital Carl Gustav Carus, Dresden, Germany
|
29
|
Herd SA, O'Reilly RC, Hazy TE, Chatham CH, Brant AM, Friedman NP. A neural network model of individual differences in task switching abilities. Neuropsychologia 2014; 62:375-89. [PMID: 24791709 PMCID: PMC4167201 DOI: 10.1016/j.neuropsychologia.2014.04.014] [Citation(s) in RCA: 71] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2013] [Revised: 04/01/2014] [Accepted: 04/13/2014] [Indexed: 11/23/2022]
Abstract
We use a biologically grounded neural network model to investigate the brain mechanisms underlying individual differences specific to the selection and instantiation of representations that exert cognitive control in task switching. Existing computational models of task switching do not focus on individual differences and so cannot explain why task switching abilities are separable from other executive function (EF) abilities (such as response inhibition). We explore hypotheses regarding neural mechanisms underlying the "Shifting-Specific" and "Common EF" components of EF proposed in the Unity/Diversity model (Miyake & Friedman, 2012) and similar components in related theoretical frameworks. We do so by adapting a well-developed neural network model of working memory (Prefrontal cortex, Basal ganglia Working Memory or PBWM; Hazy, Frank, & O'Reilly, 2007) to task switching and the Stroop task, and comparing its behavior on those tasks under a variety of individual difference manipulations. Results are consistent with the hypotheses that variation specific to task switching (i.e., Shifting-Specific) may be related to uncontrolled, automatic persistence of goal representations, whereas variation general to multiple EFs (i.e., Common EF) may be related to the strength of PFC representations and their effect on processing in the remainder of the cognitive system. Moreover, increasing signal to noise ratio in PFC, theoretically tied to levels of tonic dopamine and a genetic polymorphism in the COMT gene, reduced Stroop interference but increased switch costs. This stability-flexibility tradeoff provides an explanation for why these two EF components sometimes show opposing correlations with other variables such as attention problems and self-restraint.
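The stability-flexibility tradeoff the model identifies can be caricatured in a few lines of code. The sketch below is illustrative only (it is not PBWM, and its functions and parameters are invented): a single gain parameter `g`, standing in for PFC signal-to-noise ratio, both suppresses the irrelevant pathway (less Stroop interference) and makes an entrenched task representation harder to displace (larger switch cost).

```python
import math

def stroop_interference(g):
    """Softmax weight on the irrelevant pathway under a top-down task
    bias of strength g: higher gain means less distractor influence."""
    return 1.0 / (1.0 + math.exp(g))

def switch_cost(g, decay=0.1):
    """Steps for an old task representation (activation proportional to
    g) to decay below the new task's input drive (fixed at 1.0) after a
    switch cue: stronger representations persist longer."""
    a_old, steps = g, 0
    while a_old > 1.0:
        a_old -= decay
        steps += 1
    return steps
```

Raising `g` lowers `stroop_interference` but raises `switch_cost`, which is the opposing-correlations pattern the abstract describes.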
Affiliation(s)
- Seth A Herd
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA
- Randall C O'Reilly
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA
- Tom E Hazy
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA
- Christopher H Chatham
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA
- Angela M Brant
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA
- Naomi P Friedman
- Department of Psychology and Neuroscience, University of Colorado Boulder, 345 UCB, Boulder, CO 80309, USA; Institute for Behavioral Genetics, University of Colorado Boulder, 447 UCB, Boulder, CO 80309, USA.

30
Costa VD, Tran VL, Turchi J, Averbeck BB. Dopamine modulates novelty seeking behavior during decision making. Behav Neurosci 2014; 128:556-66. [PMID: 24911320 DOI: 10.1037/a0037128] [Citation(s) in RCA: 112] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Novelty seeking refers to the tendency of humans and animals to explore novel and unfamiliar stimuli and environments. The idea that dopamine modulates novelty seeking is supported by evidence that novel stimuli excite dopamine neurons and activate brain regions receiving dopaminergic input; dopamine has also been shown to drive exploratory behavior in novel environments. It is not clear, however, whether dopamine promotes novelty seeking when it is framed as a decision to explore novel options rather than exploit familiar ones. To test this possibility, we administered systemic injections of saline or GBR-12909, a selective dopamine transporter (DAT) inhibitor, to monkeys and assessed their novelty seeking behavior during a probabilistic decision making task. The task involved pseudorandom introductions of novel choice options, allowing monkeys the opportunity to explore novel options or to exploit familiar options that they had already sampled. We found that DAT blockade increased the monkeys' preference for novel options. A reinforcement learning (RL) model fit to the monkeys' choice data showed that increased novelty seeking after DAT blockade was driven by an increase in the initial value the monkeys assigned to novel options. However, blocking DAT did not modulate the rate at which the monkeys learned which cues were most predictive of reward, or their tendency to exploit that knowledge. These data demonstrate that dopamine enhances novelty-driven value and imply that excessive novelty seeking, characteristic of impulsivity and behavioral addictions, might be caused by increases in dopamine stemming from reduced reuptake.
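The paper's RL account can be sketched as a two-option bandit in which the only effect of DAT blockade is a higher initial value assigned to the novel option. All names and parameter values below are illustrative, not the authors' fitted model:

```python
import math
import random

def novel_choice_fraction(novel_init, n_trials=100, alpha=0.1,
                          beta=5.0, n_runs=200):
    """Two options with identical true reward probabilities (0.5): a
    familiar option whose value is already learned (Q = 0.5) and a
    novel option initialized at `novel_init`.  DAT blockade is modeled
    solely as a larger novel_init; the learning rate is unchanged."""
    total_novel = 0
    for seed in range(n_runs):
        rng = random.Random(seed)
        q = [0.5, novel_init]              # 0 = familiar, 1 = novel
        for _ in range(n_trials):
            # softmax choice between the two options
            p_novel = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
            a = 1 if rng.random() < p_novel else 0
            r = 1.0 if rng.random() < 0.5 else 0.0
            q[a] += alpha * (r - q[a])     # delta-rule value update
            total_novel += a
    return total_novel / (n_trials * n_runs)
```

With the learning rate held constant across conditions, raising `novel_init` biases early choices toward the novel option, mirroring the reported dissociation between initial value and learning rate.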
Affiliation(s)
- Vincent D Costa
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health
- Valery L Tran
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health
- Janita Turchi
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health
- Bruno B Averbeck
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health

31
Guyer AE, Benson B, Choate VR, Bar-Haim Y, Perez-Edgar K, Jarcho JM, Pine DS, Ernst M, Fox NA, Nelson EE. Lasting associations between early-childhood temperament and late-adolescent reward-circuitry response to peer feedback. Dev Psychopathol 2014; 26:229-43. [PMID: 24444176 PMCID: PMC4096565 DOI: 10.1017/s0954579413000941] [Citation(s) in RCA: 65] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Behavioral inhibition, a temperament identifiable in infancy, is associated with heightened withdrawal from social encounters. Prior studies raise particular interest in the striatum, which responds uniquely to monetary gains in behaviorally inhibited children followed into adolescence. Although behavioral manifestations of inhibition are expressed primarily in the social domain, it remains unclear whether observed striatal alterations to monetary incentives also extend to social contexts. In the current study, imaging data were acquired from 39 participants (17 males, 22 females; ages 16-18 years) characterized since infancy on measures of behavioral inhibition. A social evaluation task was used to assess neural response to anticipation and receipt of positive and negative feedback from novel peers, classified by participants as being of high or low interest. As with monetary rewards, striatal response patterns differed during both anticipation and receipt of social reward between behaviorally inhibited and noninhibited adolescents. The current results, when combined with prior findings, suggest that early-life temperament predicts altered striatal response in both social and nonsocial contexts and provide support for continuity between temperament measured in early childhood and neural response to social signals measured in late adolescence and early adulthood.
32
Vitay J, Hamker FH. Timing and expectation of reward: a neuro-computational model of the afferents to the ventral tegmental area. Front Neurorobot 2014; 8:4. [PMID: 24550821 PMCID: PMC3907710 DOI: 10.3389/fnbot.2014.00004] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2013] [Accepted: 01/15/2014] [Indexed: 12/24/2022] Open
Abstract
Neural activity in dopaminergic areas such as the ventral tegmental area is influenced by timing processes, in particular by the temporal expectation of rewards during Pavlovian conditioning. Receipt of a reward at the expected time allows reward-prediction errors to be computed, which can drive learning in motor or cognitive structures. Reciprocally, dopamine plays an important role in the timing of external events. Several models of the dopaminergic system exist, but the substrate of temporal learning remains unclear. In this article, we propose a neuro-computational model of the afferent network to the ventral tegmental area, including the lateral hypothalamus, the pedunculopontine nucleus, the amygdala, the ventromedial prefrontal cortex, the ventral basal ganglia (including the nucleus accumbens and the ventral pallidum), as well as the lateral habenula and the rostromedial tegmental nucleus. Based on plausible connectivity and realistic learning rules, this neuro-computational model reproduces several experimental observations, such as the progressive cancellation of dopaminergic bursts at reward delivery, the appearance of bursts at the onset of reward-predicting cues, and the influence of reward magnitude on activity in the amygdala and ventral tegmental area. While associative learning occurs primarily in the amygdala, learning of the temporal relationship between the cue and the associated reward is implemented as a dopamine-modulated coincidence detection mechanism in the nucleus accumbens.
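The progressive cancellation of dopamine bursts at reward delivery, and their transfer to the reward-predicting cue, is the classic signature of temporal-difference learning over a tapped-delay-line ("complete serial compound") interval representation. The sketch below reproduces just that textbook mechanism; it is not the authors' multi-region network, and its parameters are illustrative:

```python
def td_trace(reward_t=10, n_steps=12, gamma=0.98, alpha=0.1,
             n_trials=3000):
    """TD(0) over a tapped-delay-line stimulus representation: a cue at
    t = 0 starts the clock, and a unit reward arrives at t = reward_t.
    Returns the post-learning prediction error at cue onset and at
    reward delivery."""
    w = [0.0] * n_steps                    # value weight per time step
    for _ in range(n_trials):
        for t in range(n_steps):
            r = 1.0 if t == reward_t else 0.0
            v_next = w[t + 1] if t + 1 < n_steps else 0.0
            delta = r + gamma * v_next - w[t]
            w[t] += alpha * delta
    # probe: cue onset is a transition from an unpredictable pre-cue
    # state (value fixed at 0), so its prediction error never cancels
    cue_response = gamma * w[0]
    v_next = w[reward_t + 1] if reward_t + 1 < n_steps else 0.0
    reward_response = 1.0 + gamma * v_next - w[reward_t]
    return cue_response, reward_response
```

After training, the error at reward delivery is cancelled while a large error remains at the unpredicted cue, which is the burst-transfer pattern the model reproduces in a biologically grounded circuit.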
Affiliation(s)
- Julien Vitay
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Fred H Hamker
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany; Bernstein Center for Computational Neuroscience, Charité University Medicine, Berlin, Germany

33
Multiplexing signals in reinforcement learning with internal models and dopamine. Curr Opin Neurobiol 2014; 25:123-9. [PMID: 24463329 DOI: 10.1016/j.conb.2014.01.001] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2013] [Revised: 12/10/2013] [Accepted: 01/02/2014] [Indexed: 11/23/2022]
Abstract
A fundamental challenge for computational and cognitive neuroscience is to understand how reward-based learning and decision-making proceed and how accrued knowledge and internal models of the environment are incorporated. Remarkable progress has been made in the field, guided by the midbrain dopamine reward prediction error hypothesis and the underlying reinforcement learning framework, which does not involve internal models ('model-free'). Recent studies, however, have begun not only to address more complex decision-making processes that are integrated with model-free decision-making, but also to incorporate internal models of environmental reward structure and of the minds of other agents, drawing on model-based reinforcement learning and generalized prediction errors. Even dopamine, a classic model-free signal, may act as a multiplexed signal that carries model-based information and contributes to representational learning of reward structure.
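The model-free/model-based distinction at issue can be made concrete in a few lines. This toy world and every name in it are invented for illustration:

```python
# Toy deterministic world: from "start", action "L" leads to a cherry
# tree (reward 1) and "R" to a fig tree (reward 0).
T = {("start", "L"): "cherry", ("start", "R"): "fig"}   # transition model
R = {"cherry": 1.0, "fig": 0.0}                          # reward model

def mb_value(state, action):
    """Model-based evaluation: consult the internal model (T, R) at
    decision time, so a change to R is reflected immediately."""
    return R[T[(state, action)]]

def mf_update(q, state, action, r, alpha=0.5):
    """Model-free caching: delta-rule update from experienced reward;
    the cached value changes only through new experience."""
    pe = r - q.get((state, action), 0.0)   # model-free prediction error
    q[(state, action)] = q.get((state, action), 0.0) + alpha * pe
    return pe
```

Training the cache and then devaluing the cherry (`R["cherry"] = 0.0`) leaves the cached value stale while `mb_value` updates instantly; a "multiplexed" dopamine signal, on this view, would carry prediction errors referenced to both kinds of value.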
34
Kirkpatrick K. Interactions of timing and prediction error learning. Behav Processes 2014; 101:135-45. [PMID: 23962670 PMCID: PMC3926915 DOI: 10.1016/j.beproc.2013.08.005] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2013] [Revised: 06/24/2013] [Accepted: 08/06/2013] [Indexed: 11/28/2022]
Abstract
Timing and prediction error learning have historically been treated as independent processes, but growing evidence indicates that they are not orthogonal. Timing emerges at the earliest time point at which conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, induced through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.
35
Herd SA, Krueger KA, Kriete TE, Huang TR, Hazy TE, O'Reilly RC. Strategic cognitive sequencing: a computational cognitive neuroscience approach. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2013; 2013:149329. [PMID: 23935605 PMCID: PMC3722785 DOI: 10.1155/2013/149329] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2012] [Revised: 05/07/2013] [Accepted: 05/28/2013] [Indexed: 11/20/2022]
Abstract
We address strategic cognitive sequencing, the "outer loop" of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected, relative to its importance, for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the second, how several areas of PFC learn to make predictions of likely reward and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or "self-instruction"). The last shows how a constraint satisfaction process can find useful plans: the PFC maintains current and goal states and associates from both of these to find a "bridging" state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.
Affiliation(s)
- Seth A Herd
- Department of Psychology, University of Colorado Boulder, Boulder, CO 80309, USA.

36
Rangel-Gomez M, Hickey C, van Amelsvoort T, Bet P, Meeter M. The detection of novelty relies on dopaminergic signaling: evidence from apomorphine's impact on the novelty N2. PLoS One 2013; 8:e66469. [PMID: 23840482 PMCID: PMC3688774 DOI: 10.1371/journal.pone.0066469] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2013] [Accepted: 05/08/2013] [Indexed: 11/18/2022] Open
Abstract
Despite much research, it remains unclear if dopamine is directly involved in novelty detection or plays a role in orchestrating the subsequent cognitive response. This ambiguity stems in part from a reliance on experimental designs where novelty is manipulated and dopaminergic activity is subsequently observed. Here we adopt the alternative approach: we manipulate dopamine activity using apomorphine (D1/D2 agonist) and measure the change in neurological indices of novelty processing. In separate drug and placebo sessions, participants completed a von Restorff task. Apomorphine speeded and potentiated the novelty-elicited N2, an Event-Related Potential (ERP) component thought to index early aspects of novelty detection, and caused novel-font words to be better recalled. Apomorphine also decreased the amplitude of the novelty-P3a. An increase in D1/D2 receptor activation thus appears to potentiate neural sensitivity to novel stimuli, causing this content to be better encoded.
37
The parietal cortex in sensemaking: the dissociation of multiple types of spatial information. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2013; 2013:152073. [PMID: 23710165 PMCID: PMC3654633 DOI: 10.1155/2013/152073] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/20/2012] [Revised: 03/13/2013] [Accepted: 03/28/2013] [Indexed: 01/29/2023]
Abstract
According to the data-frame theory, sensemaking is a macrocognitive process in which people try to make sense of or explain their observations by processing a number of explanatory structures, called frames, until the observations and frames become congruent. During the sensemaking process, the parietal cortex has been implicated in a variety of cognitive tasks through functions related to spatial and temporal information processing, mathematical thinking, and spatial attention. In particular, the parietal cortex plays important roles by extracting multiple representations of magnitudes at the early stages of perceptual analysis. Through a series of neural network simulations, we demonstrate that the dissociation of different types of spatial information can start early with a rather similar structure (i.e., sensitivity on a common metric), but that accurate representations require specific goal-directed top-down control due to interference in selective attention. Our results suggest that the roles of the parietal cortex rely on the hierarchical organization of multiple spatial representations and their interactions. The dissociation of and interference between different types of spatial information are essentially the result of competition at different levels of abstraction.
38
Bolado-Gomez R, Gurney K. A biologically plausible embodied model of action discovery. Front Neurorobot 2013; 7:4. [PMID: 23487577 PMCID: PMC3594743 DOI: 10.3389/fnbot.2013.00004] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2012] [Accepted: 02/20/2013] [Indexed: 11/13/2022] Open
Abstract
During development, animals can spontaneously discover action-outcome pairings enabling subsequent achievement of their goals. We present a biologically plausible embodied model addressing key aspects of this process. The biomimetic model core comprises the basal ganglia and its loops through cortex and thalamus. We incorporate reinforcement learning (RL) with phasic dopamine supplying a sensory prediction error, signalling "surprising" outcomes. Phasic dopamine is used in a cortico-striatal learning rule which is consistent with recent data. We also hypothesized that objects associated with surprising outcomes acquire "novelty salience" contingent on the predictability of the outcome. To test this idea we used a simple model of prediction governing the dynamics of novelty salience and phasic dopamine. The task of the virtual robotic agent mimicked an in vivo counterpart (Gancarz et al., 2011) and involved interaction with a target object which caused a light flash, or a control object which did not. Learning took place according to two schedules. In one, the phasic outcome was delivered after interaction with the target in an unpredictable way which emulated the in vivo protocol. Without novelty salience, the model was unable to account for the experimental data. In the other schedule, the phasic outcome was reliably delivered and the agent showed a rapid increase in the number of interactions with the target which then decreased over subsequent sessions. We argue this is precisely the kind of change in behavior required to repeatedly present representations of context, action, and outcome to neural networks responsible for learning action-outcome contingency. The model also showed cortico-striatal plasticity consistent with learning a new action in basal ganglia. We conclude that action learning is underpinned by a complex interplay of plasticity and stimulus salience, and that our model contains many of the elements for biological action discovery to take place.
Affiliation(s)
- Rufino Bolado-Gomez
- Department of Psychology, Adaptive Behaviour Research Group, University of Sheffield, Sheffield, UK

39
Huang TR, Hazy TE, Herd SA, O'Reilly RC. Assembling old tricks for new tasks: a neural model of instructional learning and control. J Cogn Neurosci 2013; 25:843-51. [PMID: 23384191 DOI: 10.1162/jocn_a_00365] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We can learn from the wisdom of others to maximize success. However, it is unclear how humans take advice to flexibly adapt behavior. On the basis of data from neuroanatomy, neurophysiology, and neuroimaging, a biologically plausible model is developed to illustrate the neural mechanisms of learning from instructions. The model consists of two complementary learning pathways. The slow-learning parietal pathway carries out simple or habitual stimulus-response (S-R) mappings, whereas the fast-learning hippocampal pathway implements novel S-R rules. Specifically, the hippocampus can rapidly encode arbitrary S-R associations, and stimulus-cued responses are later recalled into the basal ganglia-gated pFC to bias response selection in the premotor and motor cortices. The interactions between the two model learning pathways explain how instructions can override habits and how automaticity can be achieved through motor consolidation.
40
Ameliorating effects of aripiprazole on cognitive functions and depressive-like behavior in a genetic rat model of absence epilepsy and mild-depression comorbidity. Neuropharmacology 2013; 64:371-9. [DOI: 10.1016/j.neuropharm.2012.06.039] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2012] [Revised: 06/14/2012] [Accepted: 06/18/2012] [Indexed: 01/01/2023]
41
O'Brien MJ, Srinivasa N. A Spiking Neural Model for Stable Reinforcement of Synapses Based on Multiple Distal Rewards. Neural Comput 2013; 25:123-56. [DOI: 10.1162/neco_a_00387] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In this letter, a novel critic-like algorithm was developed to extend the synaptic plasticity rule described in Florian (2007) and Izhikevich (2007) in order to solve the problem of learning multiple distal rewards simultaneously. The system is augmented with short-term plasticity (STP) to stabilize the learning dynamics, thereby increasing the system's learning capacity. A theoretical threshold is estimated for the number of distal rewards that this system can learn. The validity of the novel algorithm was verified by computer simulations.
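The core mechanism being extended here (the eligibility-trace solution to the distal reward problem in Izhikevich, 2007) can be sketched without spiking neurons. Event times, time constants, and the learning rate below are illustrative choices, not values from the letter:

```python
import math

def distal_reward_weights(n_steps=2000, tau_c=50.0, tau_d=20.0, lr=0.05):
    """Two synapses, A and B, are each tagged by STDP-like pairing
    events with a decaying eligibility trace c.  A global dopamine
    trace d is released 30 steps after each of A's pairings (the distal
    reward).  The three-factor rule w += lr * c * d credits A, whose
    tag still overlaps the dopamine pulse, but barely credits B."""
    pair = {"A": {100, 600, 1100, 1600},   # pairings that cause reward
            "B": {350, 850, 1350, 1850}}   # pairings uncorrelated with it
    rewards = {t + 30 for t in pair["A"]}
    w = {"A": 0.0, "B": 0.0}
    c = {"A": 0.0, "B": 0.0}
    d = 0.0
    for t in range(n_steps):
        for s in w:
            if t in pair[s]:
                c[s] = 1.0                 # synaptic tag set by pairing
        if t in rewards:
            d = 1.0                        # delayed dopamine pulse
        for s in w:
            w[s] += lr * c[s] * d          # three-factor weight update
            c[s] *= math.exp(-1.0 / tau_c)
        d *= math.exp(-1.0 / tau_d)
    return w
```

The letter's contribution sits on top of this scheme: a critic-like signal plus short-term plasticity to keep such updates stable when several distal rewards must be learned at once.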
Affiliation(s)
- Michael J. O'Brien
- Department of Mathematics, University of California at Los Angeles, Los Angeles, CA 90095, U.S.A., and Center for Neural and Emergent Systems, Information and System Sciences Lab, HRL Laboratories LLC, Malibu, CA 90265, U.S.A.
- Narayan Srinivasa
- Center for Neural and Emergent Systems, Information and System Sciences Lab, HRL Laboratories LLC, Malibu, CA 90265, U.S.A.

42
Abstract
Dopamine modulates executive function, including sustaining cognitive control during mental fatigue. Using event-related functional magnetic resonance imaging (fMRI) during the color-word Stroop task, we aimed to model mental fatigue with repeated task exposures in 33 cocaine abusers and 20 healthy controls. During such mental fatigue (indicated by increased errors and decreased post-error slowing and dorsal anterior cingulate response to error as a function of time-on-task), healthy individuals showed increased activity in the dopaminergic midbrain to error. Cocaine abusers, characterized by disrupted dopamine neurotransmission, showed the opposite pattern of response. This midbrain fMRI activity with repetition was further correlated with objective indices of endogenous motivation in all subjects: a state measure (task reaction time) and a trait measure (dopamine D2 receptor availability in the caudate, as revealed by positron emission tomography data collected in a subset of this sample, which directly points to a contribution of dopamine to these results). In a second sample of 14 cocaine abusers and 15 controls, administration of an indirect dopamine agonist, methylphenidate, reversed these midbrain responses in both groups, possibly indicating normalization of response in cocaine abusers because of restoration of dopamine signaling, but degradation of response in healthy controls owing to excessive dopamine signaling. Together, these multimodal imaging findings suggest a novel involvement of the dopaminergic midbrain in sustaining motivation during fatigue. This region might provide a useful target for strengthening self-control and/or endogenous motivation in addiction.
43
Frank GKW, Reynolds JR, Shott ME, Jappe L, Yang TT, Tregellas JR, O'Reilly RC. Anorexia nervosa and obesity are associated with opposite brain reward response. Neuropsychopharmacology 2012; 37:2031-46. [PMID: 22549118 PMCID: PMC3398719 DOI: 10.1038/npp.2012.51] [Citation(s) in RCA: 197] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/15/2011] [Revised: 03/15/2012] [Accepted: 03/16/2012] [Indexed: 12/17/2022]
Abstract
Anorexia nervosa (AN) is a severe psychiatric disorder associated with food avoidance and malnutrition. In this study, we wanted to test whether we would find brain reward alterations in AN, compared with individuals with normal or increased body weight. We studied 21 underweight, restricting-type AN (age M 22.5, SD 5.8 years), 19 obese (age M 27.1, SD 6.7 years), and 23 healthy control women (age M 24.8, SD 5.6 years), using blood oxygen level-dependent functional magnetic resonance brain imaging together with a reward-conditioning task. This paradigm involves learning the association between conditioned visual stimuli and unconditioned taste stimuli, as well as the unexpected violation of those learned associations. The task has been associated with activation of brain dopamine reward circuits, and it allows the comparison of actual brain response with expected brain activation based on established neuronal models. A group-by-task condition analysis (family-wise-error-corrected P<0.05) indicated that the orbitofrontal cortex differentiated all three groups. The dopamine model reward-learning signal distinguished groups in the anteroventral striatum, insula, and prefrontal cortex (P<0.001, 25 voxel cluster threshold), with brain responses that were greater in the AN group, but lesser in the obese group, compared with controls. These results suggest that brain reward circuits are more responsive to food stimuli in AN, but less responsive in obese women. The mechanism for this association is uncertain, but these brain reward response patterns could be biomarkers for the respective weight state.
Affiliation(s)
- Guido K W Frank
- Department of Psychiatry, University of Colorado, Anschutz Medical Campus, Aurora, CO, USA.

44
Bouret S, Ravel S, Richmond BJ. Complementary neural correlates of motivation in dopaminergic and noradrenergic neurons of monkeys. Front Behav Neurosci 2012; 6:40. [PMID: 22822392 PMCID: PMC3398259 DOI: 10.3389/fnbeh.2012.00040] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2012] [Accepted: 06/24/2012] [Indexed: 12/01/2022] Open
Abstract
Rewards have many influences on learning, decision-making, and performance, all of which seem to rely on complementary actions of two closely related catecholaminergic neuromodulators, dopamine (DA) and noradrenaline (NA). We compared single-unit activity of dopaminergic neurons of the substantia nigra pars compacta (SNc) and noradrenergic neurons of the locus coeruleus (LC) in monkeys performing a reward schedule task. Their motivation, indexed using operant performance, increased as they progressed through schedules ending in reward delivery. The responses of dopaminergic and noradrenergic neurons around the major task events (visual cues predicting trial outcome and the operant action completing a trial) were similar in that they occurred at the same time, and both neuron types responded most strongly to the first cues in schedules, which are the most informative cues. The neuronal responses around the time of the monkeys' actions differed, however, in that the response intensity profiles changed in opposite directions: dopaminergic responses were stronger around predictably rewarded correct actions, whereas noradrenergic responses were greater around predictably unrewarded correct actions. These complementary response profiles suggest that DA neurons may relate to the value of the current action, whereas noradrenergic neurons relate to the psychological cost of that action.
45
Understanding interpersonal function in psychiatric illness through multiplayer economic games. Biol Psychiatry 2012; 72:119-125. [PMID: 22579510 PMCID: PMC4174538 DOI: 10.1016/j.biopsych.2012.03.033] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/08/2011] [Revised: 02/23/2012] [Accepted: 03/19/2012] [Indexed: 11/22/2022]
Abstract
Interpersonal factors play significant roles in the onset, maintenance, and remission of psychiatric conditions. In the current major diagnostic classification systems for psychiatric disorders, some conditions are defined by the presence of impairments in social interaction or maintaining interpersonal relationships; these include autism, social phobia, and the personality disorders. Other psychopathologies confer significant difficulties in the social domain, including major depression, posttraumatic stress disorder, and psychotic disorders. Still other mental health conditions, including substance abuse and eating disorders, seem to be exacerbated or triggered in part by the influence of social peers. For each of these and other psychiatric conditions, the extent and quality of social support is a strong determinant of outcome such that high social support predicts symptom improvement and remission. Despite the central role of interpersonal factors in psychiatric illness, the neurobiology of social impairments remains largely unexplored, in part due to difficulties eliciting and quantifying interpersonal processes in a parametric manner. Recent advances in functional neuroimaging, combined with multiplayer exchange games drawn from behavioral economics, and computational/quantitative approaches more generally, provide a fitting paradigm within which to study interpersonal function and dysfunction in psychiatric conditions. In this review, we outline the importance of interpersonal factors in psychiatric illness and discuss ways in which neuroeconomics provides a tractable framework within which to examine the neurobiology of social dysfunction.
46. Aggarwal M, Hyland BI, Wickens JR. Neural control of dopamine neurotransmission: implications for reinforcement learning. Eur J Neurosci 2012;35:1115-23. DOI: 10.1111/j.1460-9568.2012.08055.x
47. Frank MJ, Badre D. Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: computational analysis. Cereb Cortex 2012;22:509-26. PMID: 21693490; PMCID: PMC3278315; DOI: 10.1093/cercor/bhr114
Abstract
Growing evidence suggests that the prefrontal cortex (PFC) is organized hierarchically, with more anterior regions having increasingly abstract representations. How does this organization support hierarchical cognitive control and the rapid discovery of abstract action rules? We present computational models at different levels of description. A neural circuit model simulates interacting corticostriatal circuits organized hierarchically. In each circuit, the basal ganglia gate frontal actions, with some striatal units gating the inputs to PFC and others gating the outputs to influence response selection. Learning at all of these levels is accomplished via dopaminergic reward prediction error signals in each corticostriatal circuit. This functionality allows the system to exhibit conditional if-then hypothesis testing and to learn rapidly in environments with hierarchical structure. We also develop a hybrid Bayesian-reinforcement learning mixture of experts (MoE) model, which can estimate the most likely hypothesis state of individual participants based on their observed sequence of choices and rewards. This model yields accurate probabilistic estimates about which hypotheses are attended by manipulating attentional states in the generative neural model and recovering them with the MoE model. This 2-pronged modeling approach leads to multiple quantitative predictions that are tested with functional magnetic resonance imaging in the companion paper.
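The learning rule the abstract attributes to dopaminergic signals in each corticostriatal loop is a reward prediction error (RPE) update. A minimal, illustrative sketch of that rule follows; it is a generic RPE value update under assumed parameter names (`alpha`, the learning rate), not the authors' neural-circuit or mixture-of-experts model:

```python
def rpe_update(q, action, reward, alpha=0.1):
    """One RPE-driven learning step: move the value of the chosen
    action toward the obtained reward, scaled by the learning rate."""
    delta = reward - q[action]      # reward prediction error (RPE)
    q[action] += alpha * delta
    return q, delta

# Two-option value table, as in a simple bandit setting.
q = {"left": 0.0, "right": 0.0}
q, delta = rpe_update(q, "left", reward=1.0)  # delta = 1.0, q["left"] = 0.1
```

Repeating this step over trials drives each action value toward its expected reward, which is the sense in which RPE signals support learning at every level of the hierarchy.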
Affiliation(s)
- Michael J Frank
- Department of Cognitive, Linguistic and Psychological Sciences, Brown Institute for Brain Science, Brown University, Providence, RI 02912-1978, USA.
48
|
Schulte T, Müller-Oehring E, Sullivan E, Pfefferbaum A. Synchrony of corticostriatal-midbrain activation enables normal inhibitory control and conflict processing in recovering alcoholic men. Biol Psychiatry 2012; 71:269-78. [PMID: 22137506 PMCID: PMC3253929 DOI: 10.1016/j.biopsych.2011.10.022] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/01/2011] [Revised: 09/28/2011] [Accepted: 10/18/2011] [Indexed: 10/14/2022]
Abstract
BACKGROUND: Alcohol dependence is associated with inhibitory control deficits, possibly related to abnormalities in frontoparietal cortical and midbrain function and connectivity.
METHODS: We examined functional connectivity and microstructural fiber integrity between frontoparietal and midbrain structures using a Stroop Match-to-Sample task with functional magnetic resonance imaging and diffusion tensor imaging in 18 alcoholic and 17 control subjects. Manipulation of color cues and response repetition sequences modulated cognitive demands during Stroop conflict.
RESULTS: Despite similar lateral frontoparietal activity and functional connectivity in alcoholic and control subjects when processing conflict, control subjects deactivated the posterior cingulate cortex (PCC), whereas alcoholic subjects did not. Posterior cingulum fiber integrity predicted the degree of PCC deactivation in control but not alcoholic subjects. Also, PCC activity was modulated by executive control demands: activated during response switching and deactivated during response repetition. Alcoholics showed the opposite pattern: activation during repetition and deactivation during switching. Here, in alcoholic subjects, greater deviations from the normal PCC activity correlated with higher amounts of lifetime alcohol consumption. A functional dissociation of brain network connectivity between the groups further showed that control subjects exhibited greater corticocortical connectivity among middle cingulate, posterior cingulate, and medial prefrontal cortices than alcoholic subjects. In contrast, alcoholic subjects exhibited greater midbrain-orbitofrontal cortical network connectivity than control subjects. Degree of microstructural fiber integrity predicted robustness of functional connectivity.
CONCLUSIONS: Thus, even subtle compromise of microstructural connectivity in alcoholism can influence modulation of functional connectivity and underlie alcohol-related cognitive impairment.
Affiliation(s)
- T. Schulte
- Neuroscience Program, SRI International, Menlo Park, CA 94025, USA
- E.M. Müller-Oehring
- Neuroscience Program, SRI International, Menlo Park, CA 94025, USA; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Road, Stanford, CA 94305, USA
- E.V. Sullivan
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Road, Stanford, CA 94305, USA
- A. Pfefferbaum
- Neuroscience Program, SRI International, Menlo Park, CA 94025, USA; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Road, Stanford, CA 94305, USA
49. Friston KJ, Shiner T, FitzGerald T, Galea JM, Adams R, Brown H, Dolan RJ, Moran R, Stephan KE, Bestmann S. Dopamine, affordance and active inference. PLoS Comput Biol 2012;8:e1002327. PMID: 22241972; PMCID: PMC3252266; DOI: 10.1371/journal.pcbi.1002327
Abstract
The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level.
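The core claim of the abstract, that dopamine sets the precision of cues and thereby balances bottom-up evidence against top-down priors, can be illustrated with standard Gaussian belief updating. This is a generic precision-weighted sketch, not the authors' hierarchical generative model; the function name and parameters are invented for illustration:

```python
def precision_weighted_belief(prior_mu, prior_pi, obs, obs_pi):
    """Combine a Gaussian prior (mean prior_mu, precision prior_pi)
    with a sensory observation obs whose precision obs_pi plays the
    role the abstract assigns to dopamine."""
    post_pi = prior_pi + obs_pi
    post_mu = (prior_pi * prior_mu + obs_pi * obs) / post_pi
    return post_mu, post_pi

# High "dopamine" (high sensory precision): belief tracks the cue.
mu_hi, _ = precision_weighted_belief(prior_mu=0.0, prior_pi=1.0,
                                     obs=1.0, obs_pi=9.0)    # mu_hi = 0.9
# Low sensory precision: belief stays dominated by the prior.
mu_lo, _ = precision_weighted_belief(prior_mu=0.0, prior_pi=1.0,
                                     obs=1.0, obs_pi=1.0/9)  # mu_lo = 0.1
```

On this reading, lowering `obs_pi` mimics a dopaminergic lesion: the same cue moves beliefs, and hence actions, far less, which is how a single synaptic-level role can produce varied behavioural effects.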
Affiliation(s)
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, United Kingdom.
50
|
Der-Avakian A, Markou A. The neurobiology of anhedonia and other reward-related deficits. Trends Neurosci 2011; 35:68-77. [PMID: 22177980 DOI: 10.1016/j.tins.2011.11.005] [Citation(s) in RCA: 663] [Impact Index Per Article: 51.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2011] [Revised: 11/06/2011] [Accepted: 11/17/2011] [Indexed: 01/07/2023]
Abstract
Anhedonia, or markedly diminished interest or pleasure, is a hallmark symptom of major depression, schizophrenia and other neuropsychiatric disorders. Over the past three decades, the clinical definition of anhedonia has remained relatively unchanged, although cognitive psychology and behavioral neuroscience have expanded our understanding of other reward-related processes. Here, we review the neural bases of the construct of anhedonia, which reflects deficits in hedonic capacity and is also closely linked to the constructs of reward valuation, decision-making, anticipation and motivation. The neural circuits subserving these reward-related processes include the ventral striatum, prefrontal cortical regions, and afferent and efferent projections. An understanding of anhedonia and other reward-related constructs will facilitate the diagnosis and treatment of disorders that include reward deficits as key symptoms.
Affiliation(s)
- Andre Der-Avakian
- Department of Psychiatry, School of Medicine, University of California San Diego, La Jolla, CA 92093-0603, USA