1. Martin E, Chowdury A, Kopchick J, Thomas P, Khatib D, Rajan U, Zajac-Benitez C, Haddad L, Amirsadri A, Robison AJ, Thakkar KN, Stanley JA, Diwadkar VA. The mesolimbic system and the loss of higher order network features in schizophrenia when learning without reward. Front Psychiatry 2024; 15:1337882. PMID: 39355381; PMCID: PMC11443173; DOI: 10.3389/fpsyt.2024.1337882.
Abstract
Introduction: Schizophrenia is characterized by a loss of network features between cognition and reward sub-circuits (notably involving the mesolimbic system), and this loss may explain deficits in learning and cognition. Learning in schizophrenia has typically been studied with tasks that include reward-related contingencies, but recent theoretical models have argued that a loss of network features should be seen even when learning without reward. We tested this model using a learning paradigm that required participants to learn without reward or feedback, and used a novel method for capturing higher-order network features to demonstrate that the mesolimbic system is heavily implicated in the loss of network features in schizophrenia, even when learning without reward.
Methods: fMRI data (Siemens Verio 3T) were acquired from schizophrenia patients and controls (n=78; 46 SCZ; ages 18-50) while participants engaged in associative learning without reward-related contingencies. The task was divided into task-active conditions for encoding (of associations) and cued retrieval (where the cue was to be used to retrieve the associated memoranda). No feedback was provided during retrieval. From the fMRI time-series data, network features were defined as follows. First, for each task condition, we estimated 2nd-order undirected functional connectivity (uFC, based on zero-lag correlations between all pairs of regions) for each participant. These conventional 2nd-order features represent the task/condition-evoked synchronization of activity between pairs of brain regions. Next, within each of the patient and control groups, the statistical relationships between all possible pairs of 2nd-order features were computed. These higher-order features represent the consistency between all possible pairs of 2nd-order features in that group and embed within them the contributions of individual regions to such group structure.
Results: From the identified inter-group differences (SCZ ≠ HC) in higher-order features, we quantified the respective contributions of individual brain regions. Two principal effects emerged: 1) SCZ were characterized by a massive loss of higher-order features during multiple task conditions (encoding and retrieval of associations); 2) nodes in the mesolimbic system were over-represented in the loss of higher-order features in SCZ, and notably so during retrieval.
Discussion: Our analytical goals were linked to a recent circuit-based integrative model which argued that synergy between learning and reward circuits is lost in schizophrenia. The model's notable prediction was that such a loss would be observed even when patients learned without reward. Our results provide substantial support for these predictions: we observed a loss of network features between the brain's sub-circuits for a) learning (including the hippocampus and prefrontal cortex) and b) reward processing (specifically constituents of the mesolimbic system, including the ventral tegmental area and the nucleus accumbens). Our findings motivate a renewed appraisal of the relationship between reward and cognition in schizophrenia, and we discuss their relevance for putative behavioral interventions.
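The two-stage feature construction described in the Methods can be sketched compactly. Below is a minimal Python/NumPy illustration of one reading of that procedure; the function names, toy data, and parcellation size are assumptions for illustration, not the authors' actual pipeline or statistics.

```python
import numpy as np

def second_order_ufc(ts):
    """2nd-order undirected functional connectivity (uFC): zero-lag Pearson
    correlations between all pairs of regions.
    ts: (timepoints, regions) array for one subject and one task condition."""
    return np.corrcoef(ts.T)                      # (regions, regions)

def higher_order_features(group_ts):
    """Higher-order features: correlations between every pair of 2nd-order
    features (region-pair edges), computed across the subjects of one group.
    group_ts: list of (timepoints, regions) arrays, one per subject."""
    n_regions = group_ts[0].shape[1]
    iu = np.triu_indices(n_regions, k=1)          # unique edges (i < j)
    # (subjects, edges): each row is one subject's vectorized uFC matrix
    edges = np.array([second_order_ufc(ts)[iu] for ts in group_ts])
    return np.corrcoef(edges.T)                   # (edges, edges)

# Hypothetical usage: 20 subjects, 200 timepoints, 30 regions
rng = np.random.default_rng(0)
group = [rng.standard_normal((200, 30)) for _ in range(20)]
hof = higher_order_features(group)                # (435, 435) matrix
```

Group differences (SCZ ≠ HC) would then be assessed on these edge-by-edge matrices, with each region's contribution tallied from the edges it participates in.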
Affiliation(s)
- Elizabeth Martin
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Department of Psychiatry, University of Texas at Austin, Austin, TX, United States
- Asadur Chowdury
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, United States
- John Kopchick
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Patricia Thomas
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Dalal Khatib
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Usha Rajan
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Caroline Zajac-Benitez
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Luay Haddad
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Alireza Amirsadri
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Alfred J. Robison
- Department of Physiology, Michigan State University, East Lansing, MI, United States
- Katherine N. Thakkar
- Department of Psychology, Michigan State University, East Lansing, MI, United States
- Jeffrey A. Stanley
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
- Vaibhav A. Diwadkar
- Department of Psychiatry & Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
2. Pepper JL, Nuttall HE. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci 2023; 13:1126. PMID: 37626483; PMCID: PMC10452685; DOI: 10.3390/brainsci13081126.
Abstract
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental to the integration of visual and auditory information, an integration that may become less efficient as we age. These factors have been linked to brain areas like the superior temporal sulcus, with neural oscillations in the alpha-band frequency also being implicated in multisensory processing. Age-related changes in multisensory integration may have significant consequences for the well-being of our increasingly ageing population, affecting their ability to communicate with others and to move safely through their environment; it is crucial that the evidence surrounding this subject continues to be carefully investigated. This review discusses research into age-related changes in the perceptual and cognitive mechanisms of multisensory integration and the impact that these changes have on speech perception and fall risk. The role of oscillatory alpha activity is of particular interest, as it may be key in the modulation of multisensory integration.
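As context for the temporal binding window (TBW) mentioned above: the TBW is commonly estimated by fitting a Gaussian to the proportion of "simultaneous" responses across audiovisual stimulus-onset asynchronies. The sketch below illustrates that common estimation approach; the review does not prescribe this procedure, and all data and parameter values here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses as a function of
    audiovisual stimulus-onset asynchrony (SOA, in ms)."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def fit_tbw(soas, p_simultaneous):
    """Fit the Gaussian and report its full width at half maximum (FWHM),
    a common index of the temporal binding window."""
    (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simultaneous,
                                    p0=[1.0, 0.0, 150.0])
    return mu, 2.355 * abs(sigma)   # point of subjective simultaneity, FWHM

# Hypothetical simultaneity-judgment data (SOA in ms)
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sim = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.10])
pss, fwhm = fit_tbw(soas, p_sim)    # wider FWHM => wider binding window
```

On this operationalization, age-related widening of the TBW would appear as a larger fitted FWHM in older adults.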
Affiliation(s)
- Helen E. Nuttall
- Department of Psychology, Lancaster University, Bailrigg LA1 4YF, UK
3. Zhang L, Wang X, Alain C, Du Y. Successful aging of musicians: Preservation of sensorimotor regions aids audiovisual speech-in-noise perception. Sci Adv 2023; 9:eadg7056. PMID: 37126550; PMCID: PMC10132752; DOI: 10.1126/sciadv.adg7056.
Abstract
Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed performance comparable to that of young nonmusicians. Notably, older musicians retained neural specificity of speech representations in sensorimotor areas similar to that of young nonmusicians, while older nonmusicians showed degraded neural representations. In the same regions, older musicians showed higher neural alignment to young nonmusicians than older nonmusicians did, and this alignment was associated with their training intensity. In older nonmusicians, the degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions, and greater deactivation in the angular gyrus, than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that the musicianship-related benefit in audiovisual speech-in-noise processing is rooted in preserving youth-like representations in sensorimotor regions.
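The "neural alignment" measure is central to this abstract's argument. Below is a minimal sketch of one common operationalization (correlating an individual's activation pattern with a young-group template); the paper's exact metric and preprocessing may differ, and all names and data here are illustrative.

```python
import numpy as np

def neural_alignment(subject_pattern, young_group_patterns):
    """Correlate one participant's regional activation pattern with the
    average pattern of the young-nonmusician group, Fisher-z transformed
    for group statistics. (An assumed definition, not the paper's exact one.)"""
    template = young_group_patterns.mean(axis=0)   # (voxels,)
    r = np.corrcoef(subject_pattern, template)[0, 1]
    return np.arctanh(r)

# Hypothetical usage: 500-voxel sensorimotor patterns
rng = np.random.default_rng(1)
young = rng.standard_normal((18, 500))             # 18 young nonmusicians
older = rng.standard_normal(500)                   # one older participant
z = neural_alignment(older, young)                 # higher z = more youth-like
```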
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiuyi Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M8V 2S4, Canada
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China
- Chinese Institute for Brain Research, Beijing 102206, China
4. Fullerton AM, Vickers DA, Luke R, Billing AN, McAlpine D, Hernandez-Perez H, Peelle JE, Monaghan JJM, McMahon CM. Cross-modal functional connectivity supports speech understanding in cochlear implant users. Cereb Cortex 2023; 33:3350-3371. PMID: 35989307; PMCID: PMC10068270; DOI: 10.1093/cercor/bhac277.
Abstract
Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical functions. Enhanced cross-modal responses to visual stimuli observed in the auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear whether this cross-modal activity is "adaptive" or "maladaptive" for speech understanding. To determine whether increased activation of language regions is correlated with better speech understanding in CI users, we used functional near-infrared spectroscopy to measure hemodynamic responses and assessed task-related activation and functional connectivity of auditory and visual cortices in response to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17). We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, an increase in connectivity between the left auditory and visual cortices (presumed primary sites of cortical language processing) was positively correlated with CI users' abilities to understand speech in background noise. Cross-modal activity in the auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network recruited to enhance speech understanding.
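The key brain-behavior analysis (cross-modal fNIRS connectivity correlated with speech-in-noise performance across CI users) can be sketched as follows. This is a generic illustration of that style of analysis, assuming Pearson correlation of HbO time courses; it is not the authors' exact pipeline, and all data here are simulated.

```python
import numpy as np
from scipy.stats import pearsonr

def cross_modal_connectivity(hbo_auditory, hbo_visual):
    """Functional connectivity between an auditory and a visual fNIRS
    channel as the Pearson correlation of their HbO time courses,
    Fisher-z transformed (one standard approach)."""
    r, _ = pearsonr(hbo_auditory, hbo_visual)
    return np.arctanh(r)

def brain_behavior_link(connectivity_z, speech_in_noise_scores):
    """Across CI users, test whether stronger left auditory-visual
    connectivity tracks better speech understanding in noise."""
    return pearsonr(connectivity_z, speech_in_noise_scores)

# Hypothetical usage for n = 14 CI users
rng = np.random.default_rng(2)
conn = rng.standard_normal(14)                 # per-subject connectivity (z)
scores = conn * 0.5 + rng.standard_normal(14)  # simulated behavioral scores
r, p = brain_behavior_link(conn, scores)
```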
Affiliation(s)
- Amanda M Fullerton
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Deborah A Vickers
- Cambridge Hearing Group, Sound Lab, Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PF, United Kingdom
- Robert Luke
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Addison N Billing
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- DOT-HUB, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- David McAlpine
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Heivet Hernandez-Perez
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO 63110, United States
- Jessica J M Monaghan
- National Acoustic Laboratories, Australian Hearing Hub, Sydney 2109, Australia
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Catherine M McMahon
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- HEAR Centre, Macquarie University, Sydney 2109, Australia
5. Van Engen KJ, Dey A, Sommers MS, Peelle JE. Audiovisual speech perception: Moving beyond McGurk. J Acoust Soc Am 2022; 152:3216. PMID: 36586857; PMCID: PMC9894660; DOI: 10.1121/10.0015262.
Abstract
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
Affiliation(s)
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Avanti Dey
- PLOS ONE, 1265 Battery Street, San Francisco, California 94111, USA
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University, St. Louis, Missouri 63130, USA