1. The Recruitment of a Neuronal Ensemble in the Central Nucleus of the Amygdala During the First Extinction Episode Has Persistent Effects on Extinction Expression. Biol Psychiatry 2023; 93:300-308. PMID: 36336498. DOI: 10.1016/j.biopsych.2022.07.021.
Abstract
BACKGROUND: Adaptive behavior depends on the delicate and dynamic balance between acquisition and extinction memories. Disruption of this balance, particularly when the extinction memory loses control over behavior, is the root of treatment failure for maladaptive behaviors such as substance abuse or anxiety disorders. Understanding this balance requires a better understanding of the underlying neurobiology and its contribution to behavioral regulation.
METHODS: We microinjected Daun02 in Fos-lacZ transgenic rats following a single extinction training episode to delete extinction-recruited neuronal ensembles in the basolateral amygdala (BLA) and central nucleus of the amygdala (CN) and examined their contribution to behavior in an appetitive Pavlovian task. In addition, we used immunohistochemistry and neuronal staining methods to identify the molecular markers of activated neurons in the BLA and CN during extinction learning or retrieval.
RESULTS: CN neurons were preferentially engaged following extinction, and deletion of these extinction-activated ensembles in the CN, but not the BLA, impaired the retrieval of extinction despite additional extinction training and promoted greater levels of behavioral restoration in spontaneous recovery and reinstatement. Disrupting extinction processing in the CN in turn increased activity in the BLA. Our results also show a specific role for CN PKCδ+ neurons in behavioral inhibition but not during initial extinction learning.
CONCLUSIONS: We showed that the initial extinction-recruited CN ensemble is critical to the acquisition-extinction balance and that greater behavioral restoration does not mean a weaker extinction contribution. These findings provide a novel avenue for thinking about the neural mechanisms of extinction and for developing treatments for cue-triggered appetitive behaviors.
2. Keefer SE, Petrovich GD. Necessity and recruitment of cue-specific neuronal ensembles within the basolateral amygdala during appetitive reversal learning. Neurobiol Learn Mem 2022; 194:107663. PMID: 35870716. PMCID: PMC10326893. DOI: 10.1016/j.nlm.2022.107663.
Abstract
Through Pavlovian appetitive conditioning, environmental cues can become predictors of food availability. Over time, however, the food, and thus the value of the associated cues, can change based on environmental variations. This change in outcome necessitates updating of the value of the cue to appropriately alter behavioral responses to these cues. The basolateral amygdala (BLA) is critical in updating the outcomes of learned cues. However, it is unknown if the same BLA neuronal ensembles that are recruited in the initial associative memory are required when the new cue-outcome association is formed during reversal learning. The current study used the Daun02 inactivation method, which enables selective targeting and disruption of activated neuronal ensembles in Fos-lacZ transgenic rats. Rats were implanted with bilateral cannulas targeting the BLA and underwent appetitive discriminative conditioning in which they had to discriminate between two auditory stimuli. One stimulus (CS+) co-terminated with food delivery, and the other stimulus was unrewarded (CS-; counterbalanced). Rats were then tested for CS+ or CS- memory retrieval and infused with either Daun02 or a vehicle solution into the BLA to inactivate either CS+ or CS- neuronal ensembles that were activated during that test. To assess if the same neuronal ensembles are necessary to update the value of the new association when the outcomes are changed, rats underwent reversal learning: the CS+ was no longer followed by food (reversal CS-, rCS-), and the CS- was now followed by food (reversal CS+, rCS+). The group that received Daun02 following the CS+ session showed a decrease in conditioned responding and increased latency to the rCS- (previously CS+) during the first session of reversal learning, specifically during the first trial. This indicates that the neuronal ensemble that was activated during the recall of the CS+ memory was the same neuronal ensemble needed for learning the new outcome of the same CS, now rCS-. Additionally, the group that received Daun02 following the CS- session was slower to respond to the rCS+ (previously CS-) during reversal learning. This indicates that the neuronal ensemble that was activated during the recall of the CS- memory was the same neuronal ensemble needed for learning the new outcome of the same CS. These results demonstrate that different neuronal ensembles within the BLA mediate memory recall of CS+ and CS- cues, and reactivation of each cue-specific neuronal ensemble is necessary to update the value of that specific cue to respond appropriately during reversal learning. These results also indicate substantial plasticity within the BLA for behavioral flexibility, as both groups eventually showed similar terminal levels of reversal learning.
Affiliation(s)
- Sara E Keefer
- Department of Psychology and Neuroscience, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467, USA
- Gorica D Petrovich
- Department of Psychology and Neuroscience, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467, USA
3. Wassum KM. Amygdala-cortical collaboration in reward learning and decision making. eLife 2022; 11:80926. PMID: 36062909. PMCID: PMC9444241. DOI: 10.7554/elife.80926.
Abstract
Adaptive reward-related decision making requires accurate prospective consideration of the specific outcome of each option and its current desirability. These mental simulations are informed by stored memories of the associative relationships that exist within an environment. In this review, I discuss recent investigations of the function of circuitry between the basolateral amygdala (BLA) and lateral (lOFC) and medial (mOFC) orbitofrontal cortex in the learning and use of associative reward memories. I draw conclusions from data collected using sophisticated behavioral approaches to diagnose the content of appetitive memory in combination with modern circuit dissection tools. I propose that, via their direct bidirectional connections, the BLA and OFC collaborate to help us encode detailed, outcome-specific, state-dependent reward memories and to use those memories to enable the predictions and inferences that support adaptive decision making. Whereas lOFC→BLA projections mediate the encoding of outcome-specific reward memories, mOFC→BLA projections regulate the ability to use these memories to inform reward pursuit decisions. BLA projections to lOFC and mOFC both contribute to using reward memories to guide decision making. The BLA→lOFC pathway mediates the ability to represent the identity of a specific predicted reward and the BLA→mOFC pathway facilitates understanding of the value of predicted events. Thus, I outline a neuronal circuit architecture for reward learning and decision making and provide new testable hypotheses as well as implications for both adaptive and maladaptive decision making.
Affiliation(s)
- Kate M Wassum
- Department of Psychology, University of California, Los Angeles, Los Angeles, United States; Brain Research Institute, University of California, Los Angeles, Los Angeles, United States; Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, United States; Integrative Center for Addictive Disorders, University of California, Los Angeles, Los Angeles, United States
4. Smith DM, Torregrossa MM. Valence encoding in the amygdala influences motivated behavior. Behav Brain Res 2021; 411:113370. PMID: 34051230. DOI: 10.1016/j.bbr.2021.113370.
Abstract
The amygdala is critical for emotional processing and motivated behavior, a role that stems from its processing of the valence of environmental stimuli. The amygdala receives direct input from the sensory thalamus and cortical regions, allowing it to integrate sensory information from the environment with aversive and/or appetitive outcomes. Because many reviews have discussed the amygdala's role in threat processing and fear conditioning, this review focuses on how the amygdala encodes positive valence and the mechanisms that allow it to distinguish between stimuli of positive and negative valence. We then extend these findings to consider how valence-encoding populations in the amygdala contribute to local and long-range circuits, including those that integrate environmental cues with positive valence. Understanding the complexity of valence encoding in the amygdala is crucial, as these mechanisms are implicated in a variety of disease states, including anxiety disorders and substance use disorders.
Affiliation(s)
- Dana M Smith
- Department of Psychiatry, University of Pittsburgh, 450 Technology Drive, Pittsburgh, PA 15219, USA; Center for Neuroscience, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA 15213, USA
- Mary M Torregrossa
- Department of Psychiatry, University of Pittsburgh, 450 Technology Drive, Pittsburgh, PA 15219, USA; Center for Neuroscience, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA 15213, USA
5. Mollick JA, Hazy TE, Krueger KA, Nair A, Mackie P, Herd SA, O'Reilly RC. A systems-neuroscience model of phasic dopamine. Psychol Rev 2020; 127:972-1021. PMID: 32525345. PMCID: PMC8453660. DOI: 10.1037/rev0000199.
Abstract
We describe a neurobiologically informed computational model of phasic dopamine signaling to account for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism. The central feature of this PVLV framework is a distinction between a primary value (PV) system for anticipating primary rewards (unconditioned stimuli, USs) and a learned value (LV) system for learning about stimuli associated with such rewards (CSs). The LV system represents the amygdala, which drives phasic bursting in midbrain dopamine areas, while the PV system represents the ventral striatum, which drives shunting inhibition of dopamine for expected USs (via direct inhibitory projections) and phasic pausing when expected USs are omitted (via the lateral habenula). Our model accounts for data supporting the separability of these systems, including individual differences in CS-based (sign-tracking) versus US-based learning (goal-tracking). Both systems use competing opponent-processing pathways representing evidence for and against specific USs, which can explain data dissociating the processes involved in acquisition versus extinction conditioning. Further, opponent processing proved critical in accounting for the full range of conditioned inhibition phenomena and the closely related paradigm of second-order conditioning. Finally, we show how additional separable pathways representing aversive USs, largely mirroring those for appetitive USs, also have important differences from the positive valence case, allowing the model to account for several important phenomena in aversive conditioning. Overall, accounting for all of these phenomena strongly constrains the model, thus providing a well-validated framework for understanding phasic dopamine signaling.
Affiliation(s)
- Jessica A Mollick, Thomas E Hazy, Kai A Krueger, Ananta Nair, Prescott Mackie, Seth A Herd, Randall C O'Reilly
- Department of Psychology and Neuroscience, University of Colorado Boulder
6. Lafferty CK, Britt JP. Off-Target Influences of Arch-Mediated Axon Terminal Inhibition on Network Activity and Behavior. Front Neural Circuits 2020; 14:10. PMID: 32269514. PMCID: PMC7109268. DOI: 10.3389/fncir.2020.00010.
Abstract
Archaerhodopsin (ArchT)-mediated photoinhibition of axon terminals is commonly used to test the involvement of specific long-range neural projections in behavior. Although sustained activation of this opsin in axon terminals has the unintended consequence of enhancing spontaneous vesicle release, it is unclear whether this desynchronized signaling is consequential for ArchT’s behavioral effects. Here, we compare axon terminal and cell body photoinhibition of nucleus accumbens (NAc) afferents to test the utility of these approaches for uncovering pathway-specific contributions of neural circuits to behavior. First, in brain slice recordings we confirmed that ArchT photoinhibition of glutamatergic axons reduces evoked synaptic currents and increases spontaneous transmitter release. A further consequence was increased interneuron activity, which served to broadly suppress glutamate input via presynaptic GABAB receptors. In vivo, axon terminal photoinhibition increased feeding and reward-seeking behavior irrespective of the afferent pathway targeted. These behavioral effects are comparable to those obtained with broad inhibition of NAc neurons. In contrast, cell body inhibition of excitatory NAc afferents revealed a pathway-specific contribution of thalamic input to feeding behavior and amygdala input to reward-seeking under extinction conditions. These findings underscore the off-target behavioral consequences of ArchT-mediated axon terminal inhibition while highlighting cell body inhibition as a valuable alternative for pathway-specific optogenetic silencing.
Affiliation(s)
- Christopher K Lafferty
- Department of Psychology, McGill University, Montreal, QC, Canada; Center for Studies in Behavioral Neurobiology, Concordia University, Montreal, QC, Canada
- Jonathan P Britt
- Department of Psychology, McGill University, Montreal, QC, Canada; Center for Studies in Behavioral Neurobiology, Concordia University, Montreal, QC, Canada
7. Stolyarova A, Izquierdo A. Complementary contributions of basolateral amygdala and orbitofrontal cortex to value learning under uncertainty. eLife 2017; 6:e27483. PMID: 28682238. PMCID: PMC5533586. DOI: 10.7554/elife.27483.
Abstract
We make choices based on the values of expected outcomes, informed by previous experience in similar settings. When the outcomes of our decisions consistently violate expectations, new learning is needed to maximize rewards. Yet not every surprising event indicates a meaningful change in the environment. Even when conditions are stable overall, outcomes of a single experience can still be unpredictable due to small fluctuations (i.e., expected uncertainty) in reward or costs. In the present work, we investigate causal contributions of the basolateral amygdala (BLA) and orbitofrontal cortex (OFC) in rats to learning under expected outcome uncertainty in a novel delay-based task that incorporates both predictable fluctuations and directional shifts in outcome values. We demonstrate that OFC is required to accurately represent the distribution of wait times to stabilize choice preferences despite trial-by-trial fluctuations in outcomes, whereas BLA is necessary for the facilitation of learning in response to surprising events.

eLife digest: Nobody likes waiting – we opt for online shopping to avoid standing in lines, grow impatient in traffic, and often prefer restaurants that serve food quickly. When making decisions, humans and other animals try to maximize the benefits by weighing up the costs and rewards associated with a situation. Many regions in the brain help us choose the best options based on quality and size of rewards, and required waiting times. Even before we make decisions, the activity in these brain regions predicts what we will choose. Sometimes, however, unexpected changes can lead to longer waiting times and our preferences suddenly become less desirable. The brain can detect such changes by comparing the outcomes we anticipate to those we experience. When the outcomes are surprising, specific areas in the brain such as the amygdala and the orbitofrontal cortex help us learn to make better choices. However, as surprising events can occur purely by chance, we need to be able to ignore irrelevant surprises and only learn from meaningful ones. Until now, it was not clear whether the amygdala and orbitofrontal cortex play specific roles in successfully learning under such conditions.

Stolyarova and Izquierdo trained rats to select between two images and rewarded them with sugar pellets after different delays. If rats chose one of these images they received the rewards after a predictable delay that was about 10 seconds, while choosing the other one produced variable delays – sometimes the time intervals were either very short or very long. Then, the waiting times for one of the alternatives changed unexpectedly. Rats with healthy brains quickly learned to choose the option with the shorter waiting time. Stolyarova and Izquierdo repeated the experiments with rats that had damage in a part of the amygdala. These rats learned more slowly, particularly when the variable option changed for the better. Rats with damage to the orbitofrontal cortex failed to learn at all. Stolyarova and Izquierdo then examined the rats' behavior during delays. Rats with damage to the orbitofrontal cortex could not distinguish between meaningful and irrelevant surprises and always looked for the food pellet (i.e. anticipated a reward) at the average delay interval.

These findings highlight two brain regions that help us distinguish meaningful surprises from irrelevant ones. A next step will be to examine how the amygdala and orbitofrontal cortex interact during learning and see if changes to the activity of these brain regions may affect responses. Advanced methods to non-invasively manipulate brain activity in humans may help people who find it hard to cope with changes, or individuals suffering from substance use disorders, who often struggle to give up drugs that provide them immediate and predictable rewards.
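The distinction the digest draws between expected uncertainty (ordinary trial-to-trial variability) and a meaningful change can be sketched in a few lines. This is a toy illustration under assumed parameters, not the authors' analysis: an agent that tracks the mean and spread of experienced delays can treat outcomes within the usual range as unremarkable and flag only those that fall well outside it. The threshold `k` and the sample delays are arbitrary assumptions.

```python
import statistics

def is_meaningful_change(history, new_delay, k=2.0):
    """Flag a delay as surprising only relative to learned variability:
    |new - mean| must exceed k standard deviations of past experience."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(new_delay - mu) > k * sd

# Two options with the same mean delay (~10 s) but different spread,
# loosely mirroring the predictable vs. variable options in the task.
stable = [8, 10, 12, 9, 11, 10]
variable = [2, 18, 5, 15, 1, 19]

surprising_if_stable = is_meaningful_change(stable, 20)    # True
surprising_if_variable = is_meaningful_change(variable, 20)  # False
```

The same 20-second delay is a meaningful surprise under the low-variability option but sits within expected uncertainty under the high-variability one, which is the computation the OFC-lesioned rats appeared unable to perform.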
Affiliation(s)
- Alexandra Stolyarova
- Department of Psychology, University of California, Los Angeles, Los Angeles, United States
- Alicia Izquierdo
- Department of Psychology, University of California, Los Angeles, Los Angeles, United States; Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, United States; Integrative Center for Addictions, University of California, Los Angeles, Los Angeles, United States; The Brain Research Institute, University of California, Los Angeles, Los Angeles, United States
8. Nasser HM, Calu DJ, Schoenbaum G, Sharpe MJ. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning. Front Psychol 2017; 8:244. PMID: 28275359. PMCID: PMC5319959. DOI: 10.3389/fpsyg.2017.00244.
Abstract
Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value.
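The "transfer" of the phasic signal from reward to predictive cue that this abstract describes is the signature behavior of temporal-difference learning. The sketch below is a minimal textbook TD(0) illustration (not code from the review, and the learning-rate value is an arbitrary assumption): on a two-step trial with a cue followed by reward, the prediction error is large at reward delivery early in training and migrates to cue onset once the cue's value is learned.

```python
def train(n_trials, alpha=0.3, gamma=1.0, reward=1.0):
    """Run n_trials of a cue -> reward sequence with tabular TD(0).
    Returns (delta at cue onset, delta at reward) on the final trial."""
    V_cs = 0.0          # learned value of the cue state
    d_cs = d_us = 0.0
    for _ in range(n_trials):
        # Cue onset: error = gamma * V(cue) - V(pre-cue), with V(pre-cue) = 0
        d_cs = gamma * V_cs
        # Reward delivery: error = r - V(cue)
        d_us = reward - V_cs
        V_cs += alpha * d_us
    return d_cs, d_us

early = train(1)     # error sits at the reward: (0.0, 1.0)
late = train(200)    # error has transferred to the cue: (~1.0, ~0.0)
```

This captures only the Sutton-and-Barto-style value transfer that the review takes as its starting point; the attentional and model-based contributions the authors go on to discuss are precisely what this minimal scheme leaves out.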
Affiliation(s)
- Helen M Nasser
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD, USA
- Donna J Calu
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD, USA
- Geoffrey Schoenbaum
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD, USA; Cellular Neurobiology Research Branch, National Institute on Drug Abuse Intramural Research Program, Baltimore, MD, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Melissa J Sharpe
- Cellular Neurobiology Research Branch, National Institute on Drug Abuse Intramural Research Program, Baltimore, MD, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
9. Schiffino FL, Holland PC. Consolidation of altered associability information by amygdala central nucleus. Neurobiol Learn Mem 2016; 133:204-213. PMID: 27427328. PMCID: PMC4987260. DOI: 10.1016/j.nlm.2016.07.016.
Abstract
The surprising omission of a reinforcer can enhance the associability of the stimuli that were present when the reward prediction error was induced, so that they more readily enter into new associations in the future. Previous research from this laboratory identified brain circuit elements critical to the enhancement of stimulus associability by the omission of an expected event and to the subsequent expression of that altered associability in more rapid learning. These elements include the amygdala, the midbrain substantia nigra, the basal forebrain substantia innominata, the dorsolateral striatum, the secondary visual cortex, and the posterior parietal cortex. Here, we found that consolidation of a surprise-enhanced associability memory in a serial prediction task depends on processing in the amygdala central nucleus (CeA) after completion of sessions that included the surprising omission of an expected event. Post-surprise infusions of anisomycin, lidocaine, or muscimol prevented subsequent display of surprise-enhanced associability. Because previous studies indicated that CeA function is unnecessary for the expression of associability enhancements that were induced previously when CeA function was intact (Holland & Gallagher, 2006), we interpreted these results as indicating that post-surprise activity of CeA ("surprise replay") is necessary for the consolidation of altered associability memories elsewhere in the brain, such as the posterior parietal cortex (Schiffino et al., 2014a).
Affiliation(s)
- Felipe L Schiffino
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter C Holland
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
10. Holland PC. Effects of amygdala lesions on overexpectation phenomena in food cup approach and autoshaping procedures. Behav Neurosci 2016; 130:357-75. PMID: 27176564. DOI: 10.1037/bne0000149.
Abstract
Prediction error (PE) plays a critical role in most modern theories of associative learning, by determining the effectiveness of conditioned stimuli (CS) or unconditioned stimuli (US). Here, we examined the effects of lesions of central (CeA) or basolateral (BLA) amygdala on performance in overexpectation tasks. In 2 experiments, after 2 CSs were separately paired with the US, they were combined and followed by the same US. In a subsequent test, we observed losses in strength of both CSs, as expected if the negative PE generated on reinforced compound trials encouraged inhibitory learning. CeA lesions, known to interfere with PE-induced enhancements in CS effectiveness, reduced those losses, suggesting that normally the negative PE also enhances cue associability in this task. BLA lesions had no effect. When a novel cue accompanied the reinforced compound, it acquired net conditioned inhibition, despite its consistent pairings with the US, consonant with US effectiveness models. That acquisition was unaffected by either CeA or BLA lesions, suggesting different rules for assignment of credit of changes in cue strength and cue associability. Finally, we examined a puzzling autoshaping phenomenon previously attributed to overexpectation effects. When a previously food-paired auditory cue was combined with the insertion of a lever and paired with the same food US, the auditory cue not only failed to block conditioning to the lever, but also lost strength, as in an overexpectation experiment. This effect was abolished by BLA lesions but unaffected by CeA lesions, suggesting it was unrelated to other overexpectation effects.
11. Holland PC, Schiffino FL. Mini-review: Prediction errors, attention and associative learning. Neurobiol Learn Mem 2016; 131:207-15. PMID: 26948122. DOI: 10.1016/j.nlm.2016.02.014.
Abstract
Most modern theories of associative learning emphasize a critical role for prediction error (PE, the difference between received and expected events). One class of theories, exemplified by the Rescorla-Wagner (1972) model, asserts that PE determines the effectiveness of the reinforcer or unconditioned stimulus (US): surprising reinforcers are more effective than expected ones. A second class, represented by the Pearce-Hall (1980) model, argues that PE determines the associability of conditioned stimuli (CSs), the rate at which they may enter into new learning: the surprising delivery or omission of a reinforcer enhances subsequent processing of the CSs that were present when PE was induced. In this mini-review we describe evidence, mostly from our laboratory, for PE-induced changes in the associability of both CSs and USs, and the brain systems involved in the coding, storage and retrieval of these altered associability values. This evidence favors a number of modifications to behavioral models of how PE influences event processing, and suggests the involvement of widespread brain systems in animals' responses to PE.
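The two classes of model this mini-review contrasts differ in where the prediction error acts, and the contrast is compact enough to state as update rules. The sketch below is a standard textbook rendering under assumed parameter values, not code from the paper: Rescorla-Wagner lets PE scale the effectiveness of the US directly in the value update, while Pearce-Hall uses the absolute PE to set the associability (alpha) of the CSs present on the trial, so a surprising delivery or omission speeds future learning about those cues.

```python
def rescorla_wagner(V, lam, alpha=0.3, beta=1.0):
    """US-effectiveness view: one update dV = alpha * beta * (lambda - V),
    where lambda is the US magnitude and (lambda - V) is the PE."""
    return V + alpha * beta * (lam - V)

def ph_associability(alpha, V, lam, gamma=0.5):
    """CS-associability view (Pearce-Hall): alpha moves toward the
    absolute PE |lambda - V|; gamma sets how fast it tracks surprise."""
    return (1 - gamma) * alpha + gamma * abs(lam - V)

# A well-predicted US (V = lambda = 1) drives associability down;
# the surprising omission of that US (lambda = 0) drives it back up.
alpha_after_expected = ph_associability(0.5, V=1.0, lam=1.0)   # 0.25
alpha_after_omission = ph_associability(0.25, V=1.0, lam=0.0)  # 0.625
```

The omission case is the one most relevant to the CeA findings reviewed here: in Pearce-Hall terms, "surprise-enhanced associability" is exactly this jump in alpha following an unexpected omission.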
12. Lopatina N, McDannald MA, Styer CV, Sadacca BF, Cheer JF, Schoenbaum G. Lateral orbitofrontal neurons acquire responses to upshifted, downshifted, or blocked cues during unblocking. eLife 2015; 4:e11299. PMID: 26670544. PMCID: PMC4733037. DOI: 10.7554/elife.11299.
Abstract
The lateral orbitofrontal cortex (lOFC) has been described as signaling either outcome expectancies or value. Previously, we used unblocking to show that lOFC neurons respond to a predictive cue signaling a 'valueless' change in outcome features (McDannald, 2014). However, many lOFC neurons also fired to a cue that simply signaled more reward. Here, we recorded lOFC neurons in a variant of this task in which rats learned about cues that signaled either more (upshift), less (downshift) or the same (blocked) amount of reward. We found that neurons acquired responses specifically to one of the three cues and did not fire to the other two. These results show that, at least early in learning, lOFC neurons fire to valued cues in a way that is more consistent with signaling of the predicted outcome's features than with signaling of a general, abstract or cached value that is independent of the outcome.
Affiliation(s)
- Nina Lopatina
- Intramural Research Program, National Institute on Drug Abuse, Baltimore, United States; Program in Neuroscience, University of Maryland School of Medicine, Baltimore, United States
- Clay V Styer
- Intramural Research Program, National Institute on Drug Abuse, Baltimore, United States
- Brian F Sadacca
- Intramural Research Program, National Institute on Drug Abuse, Baltimore, United States
- Joseph F Cheer
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, United States
- Geoffrey Schoenbaum
- Intramural Research Program, National Institute on Drug Abuse, Baltimore, United States; Department of Neuroscience, Johns Hopkins University, Baltimore, United States; Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, United States
13. Wassum KM, Izquierdo A. The basolateral amygdala in reward learning and addiction. Neurosci Biobehav Rev 2015; 57:271-83. PMID: 26341938. DOI: 10.1016/j.neubiorev.2015.08.017.
Abstract
Sophisticated behavioral paradigms, partnered with the emergence of increasingly selective techniques to target the basolateral amygdala (BLA), have resulted in an enhanced understanding of the role of this nucleus in learning and using reward information. Because of the wide variety of behavioral approaches, many questions remain about the circumscribed role of the BLA in appetitive behavior. In this review, we first integrate conclusions about BLA function in reward-related behavior drawn from traditional interference techniques (lesion, pharmacological inactivation) with those from newer methodological approaches in experimental animals that allow in vivo manipulation of cell type-specific populations and neural recordings. Second, from a review of appetitive behavioral tasks in rodents and monkeys and recent computational models of reward procurement, we derive evidence for the BLA as a neural integrator of reward value, history, and cost parameters. Taken together, the BLA codes specific and temporally dynamic outcome representations in a distributed network to orchestrate adaptive responses. We provide evidence that experiences with opiates and psychostimulants alter these outcome representations in the BLA, resulting in long-term modified action.
Affiliation(s)
- Kate M Wassum
- Department of Psychology, University of California at Los Angeles, Los Angeles, CA, USA; Brain Research Institute, University of California at Los Angeles, Los Angeles, CA, USA
- Alicia Izquierdo
- Department of Psychology, University of California at Los Angeles, Los Angeles, CA, USA; Brain Research Institute, University of California at Los Angeles, Los Angeles, CA, USA