1
Mai G, Jiang Z, Wang X, Tachtsidis I, Howell P. Neuroplasticity of Speech-in-Noise Processing in Older Adults Assessed by Functional Near-Infrared Spectroscopy (fNIRS). Brain Topogr 2024:10.1007/s10548-024-01070-2. [PMID: 39042322] [DOI: 10.1007/s10548-024-01070-2]
Abstract
Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique that is portable and acoustically silent, has become a promising tool for evaluating auditory brain functions in hearing-vulnerable individuals. This study used fNIRS for the first time to evaluate neuroplasticity of speech-in-noise processing in older adults. Ten older adults, most of whom had moderate-to-mild hearing loss, participated in a 4-week speech-in-noise training. Their speech-in-noise performance and fNIRS brain responses to speech (auditory sentences in noise), non-speech (spectrally-rotated speech in noise) and visual (flashing chequerboards) stimuli were evaluated pre-training (T0) and post-training (immediately after training, T1; and after a 4-week retention period, T2). Behaviourally, speech-in-noise performance improved after retention (T2 vs. T0) but not immediately after training (T1 vs. T0). Neurally, brain responses to speech vs. non-speech decreased significantly in the left auditory cortex after retention (T2 vs. T0 and T2 vs. T1), which we interpret as suppressed processing of background noise during speech listening, alongside the significant behavioural improvements. Meanwhile, functional connectivity within and between multiple regions of the temporal, parietal and frontal lobes was significantly enhanced in the speech condition after retention (T2 vs. T0). We also found neural changes before the emergence of significant behavioural improvements. Compared to pre-training, responses to speech vs. non-speech in the left frontal/prefrontal cortex decreased significantly both immediately after training (T1 vs. T0) and after retention (T2 vs. T0), reflecting possible alleviation of listening effort. Finally, connectivity was significantly decreased between auditory and higher-level non-auditory (parietal and frontal) cortices in response to visual stimuli immediately after training (T1 vs. T0), indicating decreased cross-modal takeover of speech-related regions during visual processing. The results thus show that neuroplasticity can be observed not only at the same time as, but also before, behavioural changes in speech-in-noise perception. To our knowledge, this is the first fNIRS study to evaluate speech-based auditory neuroplasticity in older adults. It thus provides important implications for current research by illustrating the promise of detecting neuroplasticity using fNIRS in hearing-vulnerable individuals.
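The contrast and connectivity analyses described above can be pictured with a minimal sketch. The example below is not the authors' pipeline: it assumes a preprocessed HbO array, a hypothetical sampling rate, and made-up block onsets, and shows a simple block-averaged speech vs. non-speech contrast plus a channel-by-channel correlation matrix as a crude stand-in for functional connectivity.

```python
# Minimal sketch (not the study's pipeline) of an fNIRS block contrast and connectivity.
import numpy as np

fs = 10.0                                     # assumed sampling rate (Hz)
hbo = np.random.randn(16, 6000)               # placeholder HbO time series (16 channels)
speech_onsets = np.array([30, 120, 210])      # hypothetical block onsets (s)
nonspeech_onsets = np.array([75, 165, 255])

def block_mean(data, onsets, dur=15.0):
    """Average response amplitude within each stimulation block."""
    idx = [(int(o * fs), int((o + dur) * fs)) for o in onsets]
    return np.mean([data[:, a:b].mean(axis=1) for a, b in idx], axis=0)

# Channel-wise contrast: speech minus non-speech responses
contrast = block_mean(hbo, speech_onsets) - block_mean(hbo, nonspeech_onsets)

# "Functional connectivity": Pearson correlation between channels during speech blocks
speech_segments = np.hstack([hbo[:, int(o * fs):int((o + 15) * fs)] for o in speech_onsets])
connectivity = np.corrcoef(speech_segments)   # (channels x channels) matrix
print(contrast.shape, connectivity.shape)
```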
Affiliation(s)
- Guangting Mai
- National Institute for Health and Care Research Nottingham Biomedical Research Centre, Nottingham, UK.
- Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK.
- Division of Psychology and Language Sciences, University College London, London, UK.
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
- Zhizhao Jiang
- Division of Psychology and Language Sciences, University College London, London, UK
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Xinran Wang
- Division of Psychology and Language Sciences, University College London, London, UK
- Ilias Tachtsidis
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Peter Howell
- Division of Psychology and Language Sciences, University College London, London, UK
2
Xiang H, Fessler JA, Noll DC. Model-based reconstruction for looping-star MRI. Magn Reson Med 2024; 91:2104-2113. [PMID: 38282253] [PMCID: PMC10950512] [DOI: 10.1002/mrm.29927]
Abstract
PURPOSE The aim of this study was to develop a reconstruction method that more fully models the acquired signals and reconstructs gradient echo (GRE) images without sacrificing signal-to-noise ratio or spatial resolution, compared to conventional gridding and model-based image reconstruction methods. METHODS By modeling the trajectories of every spoke and simplifying the scenario to a mixture of only echo-in and echo-out signals, the approach explicitly models the overlapping echoes. After modeling the overlapping echoes with two system matrices, we use the conjugate gradient algorithm (CG-SENSE) with the nonuniform FFT (NUFFT) to optimize the image reconstruction cost function. RESULTS The proposed method is demonstrated in phantom and in-vivo volunteer experiments for three-dimensional, high-resolution T2*-weighted imaging and functional MRI tasks. Compared to the gridding method, the high-resolution protocol exhibits improved spatial resolution and reduced signal loss as a result of less intra-voxel dephasing. The fMRI task shows that the proposed model-based method produced images with reduced artifacts and blurring as well as more stable and prominent time courses. CONCLUSION The proposed model-based reconstruction shows improved spatial resolution and reduced artifacts. The fMRI task shows improved time series and activation maps owing to the reduced echo overlap and under-sampling artifacts.
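As a rough illustration of the reconstruction idea (two system matrices for the overlapping echo-in and echo-out components, combined and solved with conjugate gradients), the toy example below substitutes small random matrices for the NUFFT-based encoding operators. It is a sketch of a normal-equation CG solve under those assumptions, not the authors' implementation.

```python
# Toy sketch of reconstructing an image from overlapping-echo data with CG.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_vox, n_samp = 64, 128
# Hypothetical system matrices standing in for the echo-in and echo-out models
A_in = rng.standard_normal((n_samp, n_vox)) + 1j * rng.standard_normal((n_samp, n_vox))
A_out = rng.standard_normal((n_samp, n_vox)) + 1j * rng.standard_normal((n_samp, n_vox))
x_true = rng.standard_normal(n_vox) + 0j
y = A_in @ x_true + A_out @ x_true            # acquired data with overlapping echoes

A = A_in + A_out                               # combined forward model
op = LinearOperator((n_vox, n_vox), matvec=lambda x: A.conj().T @ (A @ x), dtype=complex)
x_hat, info = cg(op, A.conj().T @ y, maxiter=200)   # solve A^H A x = A^H y
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"CG exit flag: {info}, relative error: {err:.3f}")
```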
Affiliation(s)
- Jeffrey A. Fessler
- EECS, University of Michigan, Michigan, USA
- Biomedical Engineering, University of Michigan, Michigan, USA
- Douglas C. Noll
- Biomedical Engineering, University of Michigan, Michigan, USA
3
Olszewska AM, Gaca M, Droździel D, Widlarz A, Herman AM, Marchewka A. Understanding functional brain reorganization for naturalistic piano playing in novice pianists. J Neurosci Res 2024; 102:e25312. [PMID: 38400578] [DOI: 10.1002/jnr.25312]
Abstract
Learning to play the piano is a uniquely complex task, integrating multiple sensory modalities and higher-order cognitive functions. Longitudinal neuroimaging studies on adult novice musicians show training-related functional changes in music perception tasks. The reorganization of brain activity while actually playing an instrument, however, has been studied only over the very short time frame of a single fMRI session, and longer interventions have not yet been performed. Thus, our aim was to investigate the dynamic complexity of functional brain reorganization while playing the piano within the first half year of musical training. We scanned 24 novice keyboard learners (female, 18-23 years old) using fMRI while they played increasingly complex musical pieces after 1, 6, 13, and 26 weeks of training. Playing music evoked responses bilaterally in the auditory, inferior frontal, and supplementary motor areas, and in the left sensorimotor cortex. The effect of training over time, however, involved widespread changes encompassing the right sensorimotor cortex, cerebellum, superior parietal cortex, anterior insula and hippocampus, among others. As the training progressed, the activation of these regions during music playing decreased. Post hoc analysis revealed region-specific time courses for independent auditory and motor regions of interest. These results suggest that while the primary sensory, motor, and frontal regions are associated with playing music, training decreases the involvement of higher-order cognitive control and integrative regions, and the basal ganglia. Moreover, training might affect distinct brain regions in different ways, providing evidence in favor of the dynamic nature of brain plasticity.
Affiliation(s)
- Alicja M Olszewska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Maciej Gaca
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Dawid Droździel
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Agnieszka Widlarz
- Department of Choir Conducting and Singing, Music Education and Rhythmics, The Chopin University of Music, Warsaw, Poland
- Aleksandra M Herman
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, Warsaw, Poland
4
Brewer AA, Barton B. Cortical field maps across human sensory cortex. Front Comput Neurosci 2023; 17:1232005. [PMID: 38164408] [PMCID: PMC10758003] [DOI: 10.3389/fncom.2023.1232005]
Abstract
Cortical processing pathways for sensory information in the mammalian brain tend to be organized into topographical representations that encode various fundamental sensory dimensions. Numerous laboratories have now shown how these representations are organized into multiple cortical field maps (CFMs) across visual and auditory cortex, with each CFM supporting a specialized computation or set of computations that underlie the associated perceptual behaviors. An individual CFM is defined by two orthogonal topographical gradients that reflect two essential aspects of feature space for that sense. Multiple adjacent CFMs are then organized across visual and auditory cortex into macrostructural patterns termed cloverleaf clusters. CFMs within cloverleaf clusters are thought to share properties such as receptive field distribution, cortical magnification, and processing specialization. Recent measurements point to the likely existence of CFMs in the other senses as well, with topographical representations of at least one sensory dimension demonstrated in somatosensory, gustatory, and possibly olfactory cortical pathways. Here we discuss the evidence for CFM and cloverleaf cluster organization across human sensory cortex, as well as the approaches used to identify such organizational patterns. Knowledge of how these topographical representations are organized across cortex provides insight into how our conscious perceptions are created from our basic sensory inputs. In addition, studying how these representations change during development, trauma, and disease serves as an important tool for developing improvements in clinical therapies and rehabilitation for sensory deficits.
Affiliation(s)
- Alyssa A. Brewer
- mindSPACE Laboratory, Departments of Cognitive Sciences and Language Science (by Courtesy), Center for Hearing Research, University of California, Irvine, Irvine, CA, United States
- Brian Barton
- mindSPACE Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
5
Olszewska AM, Droździel D, Gaca M, Kulesza A, Obrębski W, Kowalewski J, Widlarz A, Marchewka A, Herman AM. Unlocking the musical brain: A proof-of-concept study on playing the piano in MRI scanner with naturalistic stimuli. Heliyon 2023; 9:e17877. [PMID: 37501960] [PMCID: PMC10368778] [DOI: 10.1016/j.heliyon.2023.e17877]
Abstract
Music is a universal human phenomenon, and can be studied for itself or as a window into understanding the brain. Few neuroimaging studies investigate actual playing in the MRI scanner, likely because of the lack of available experimental hardware and analysis tools. Here, we offer an innovative paradigm that addresses this issue in neuromusicology using naturalistic, polyphonic musical stimuli, present a commercially available MRI-compatible piano, and describe a flexible approach to quantify participants' performance. We show how making errors while playing can be investigated using an altered auditory feedback paradigm. In the spirit of open science, we make our experimental paradigms and analysis tools available to other researchers studying pianists in MRI. Altogether, we present a proof-of-concept study that shows the feasibility of playing the novel piano in the MRI scanner and takes a step towards using more naturalistic stimuli.
Affiliation(s)
- Alicja M. Olszewska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
- Dawid Droździel
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
- Maciej Gaca
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
- Agnieszka Kulesza
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
- Wojciech Obrębski
- Department of Nuclear and Medical Electronics, Faculty of Electronics and Information Technology, Warsaw University of Technology, 1 Politechniki Square, 00-661 Warsaw, Poland
- 10 Murarska Street, 08-110 Siedlce, Poland
- Agnieszka Widlarz
- Chair of Rhythmics and Piano Improvisation, Department of Choir Conducting and Singing, Music Education and Rhythmics, The Chopin University of Music, Okolnik 2 Street, 00-368 Warsaw, Poland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
- Aleksandra M. Herman
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
6
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884] [PMCID: PMC10147513] [DOI: 10.1055/s-0043-1766105]
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works, be able to summarize its uses for listening effort research, and be able to apply this knowledge toward the generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
7
Bálint A, Szabó Á, Andics A, Gácsi M. Dog and human neural sensitivity to voicelikeness: A comparative fMRI study. Neuroimage 2023; 265:119791. [PMID: 36476565] [DOI: 10.1016/j.neuroimage.2022.119791]
Abstract
Voice-sensitivity in the auditory cortex of a range of mammals has been proposed to be determined primarily by tuning to conspecific auditory stimuli, but recent human findings indicate a role for a more general tuning to voicelikeness. Vocal emotional valence, a central characteristic of vocalisations, has been linked to the same basic acoustic parameters across species. Comparative neuroimaging revealed that during voice perception, such acoustic parameters modulate emotional valence-sensitivity in auditory cortical regions in both family dogs and humans. To explore the role of voicelikeness in auditory emotional valence-sensitivity across species, here we constructed artificial emotional sounds in two sound categories, voice-like vs. sine-wave sounds, parametrically modulating two main acoustic parameters: f0 and call length. We hypothesised that if mammalian auditory systems are characterised by a general tuning to voicelikeness, voice-like sounds will be processed preferentially, and acoustic parameters for voice-like sounds will be processed differently than for sine-wave sounds, both in dogs and humans. We found cortical areas in both species that responded more strongly to voice-like than to sine-wave stimuli, while there were no regions responding more strongly to sine-wave sounds in either species. Additionally, we found that in bilateral primary and emotional valence-sensitive auditory regions of both species, the processing of voice-like and sine-wave sounds is modulated by f0 in opposite ways. These results reveal functional similarities between evolutionarily distant mammals in processing voicelikeness and its effect on processing basic acoustic cues of vocal emotions.
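A simple way to picture the stimulus manipulation is sketched below: a harmonic complex stands in for a "voice-like" sound and a pure tone for the sine-wave control, with f0 and call length varied parametrically. The synthesis procedure and parameter values are assumptions for illustration, not the authors' actual stimuli.

```python
# Illustrative construction of voice-like vs. sine-wave stimuli varying f0 and call length.
import numpy as np

fs = 44100

def make_stimulus(f0, length_s, voice_like=True, n_harmonics=10):
    t = np.arange(int(fs * length_s)) / fs
    if voice_like:
        # sum of harmonics of f0 approximates a harmonically rich, voice-like sound
        sound = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harmonics + 1))
    else:
        sound = np.sin(2 * np.pi * f0 * t)      # sine-wave control sound
    return sound / np.max(np.abs(sound))

stimuli = {(f0, L, v): make_stimulus(f0, L, v)
           for f0 in (150, 300, 600)            # parametric f0 levels (Hz, assumed)
           for L in (0.2, 0.4, 0.8)             # parametric call lengths (s, assumed)
           for v in (True, False)}
```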
Affiliation(s)
- Anna Bálint
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary.
- Ádám Szabó
- Department of Neuroradiology at the Medical Imaging Centre of the Semmelweis University, H-1082 Budapest, Üllői út 78a, Hungary
- Attila Andics
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; ELTE NAP Canine Brain Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Márta Gácsi
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
8
Noyce AL, Kwasa JAC, Shinn-Cunningham BG. Defining attention from an auditory perspective. Wiley Interdiscip Rev Cogn Sci 2023; 14:e1610. [PMID: 35642475] [PMCID: PMC9712589] [DOI: 10.1002/wcs.1610]
Abstract
Attention prioritizes certain information at the expense of other information in ways that are similar across vision, audition, and other sensory modalities. It influences how, and even what, information is represented and processed, affecting brain activity at every level. Much of the core research into cognitive and neural mechanisms of attention has used visual tasks. However, the same top-down, object-based, and bottom-up attentional processes shape auditory perception, largely through the same underlying cognitive networks. This article is categorized under: Psychology > Attention.
9
O'Brien AM, Perrachione TK, Wisman Weil L, Sanchez Araujo Y, Halverson K, Harris A, Ostrovskaya I, Kjelgaard M, Wexler K, Tager-Flusberg H, Gabrieli JDE, Qi Z. Altered engagement of the speech motor network is associated with reduced phonological working memory in autism. Neuroimage Clin 2022; 37:103299. [PMID: 36584426] [PMCID: PMC9830373] [DOI: 10.1016/j.nicl.2022.103299]
Abstract
Nonword repetition, a common clinical measure of phonological working memory, involves component processes of speech perception, working memory, and speech production. Autistic children often show behavioral challenges in nonword repetition, as do many individuals with communication disorders. It is unknown which subprocesses of phonological working memory are vulnerable in autistic individuals, and whether the same brain processes underlie the transdiagnostic difficulty with nonword repetition. We used functional magnetic resonance imaging (fMRI) to investigate the brain bases for nonword repetition challenges in autism. We compared activation during nonword repetition in functional brain networks subserving speech perception, working memory, and speech production between neurotypical and autistic children. Autistic children performed worse than neurotypical children on nonword repetition and had reduced activation in response to increasing phonological working memory load in the supplementary motor area. Multivoxel pattern analysis within the speech production network classified shorter vs longer nonword-repetition trials less accurately for autistic than neurotypical children. These speech production motor-specific differences were not observed in a group of children with reading disability who had similarly reduced nonword repetition behavior. These findings suggest that atypical function in speech production brain regions may contribute to nonword repetition difficulties in autism.
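The multivoxel pattern analysis referred to above can be sketched as a cross-validated classification of trial-wise activity patterns. The example below assumes a generic array of trial-wise beta patterns and binary labels for shorter vs. longer nonwords; it illustrates the general MVPA recipe rather than the study's actual pipeline.

```python
# Minimal MVPA sketch: decode short vs. long nonword trials from voxel patterns in an ROI.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 300
X = rng.standard_normal((n_trials, n_voxels))    # placeholder trial-wise beta patterns
y = np.repeat([0, 1], n_trials // 2)             # 0 = short nonwords, 1 = long nonwords

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"Mean decoding accuracy: {acc.mean():.2f}")   # group comparisons would follow from here
```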
Affiliation(s)
- Amanda M O'Brien
- Program in Speech and Hearing Bioscience and Technology, Harvard University, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, USA.
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, USA
- Lisa Wisman Weil
- Department of Communication Sciences and Disorders, Emerson College, USA
- Kelly Halverson
- Department of Clinical Psychology, University of Houston, USA
- Adrianne Harris
- The Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, USA
- Margaret Kjelgaard
- Department of Communication Sciences and Disorders, Bridgewater State University, USA
- Kenneth Wexler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA; Department of Linguistics and Philosophy, Massachusetts Institute of Technology, USA
- John D E Gabrieli
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
- Zhenghan Qi
- Department of Communication Sciences and Disorders & Department of Psychology, Northeastern University, USA
10
Adaptation of stimulation duration to enhance auditory response in fNIRS block design. Hear Res 2022; 424:108593. [DOI: 10.1016/j.heares.2022.108593]
11
Human Taste-Perception: Brain Computer Interface (BCI) and Its Application as an Engineering Tool for Taste-Driven Sensory Studies. Food Eng Rev 2022. [DOI: 10.1007/s12393-022-09308-0]
12
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538] [PMCID: PMC9044122] [DOI: 10.1007/s00429-021-02398-2]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial, compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Carolyn M. McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
13
Resting state network connectivity is attenuated by fMRI acoustic noise. Neuroimage 2021; 247:118791. [PMID: 34920084] [DOI: 10.1016/j.neuroimage.2021.118791]
Abstract
INTRODUCTION During the past decades there has been increasing interest in tracking brain network fluctuations in health and disease by means of resting-state functional magnetic resonance imaging (rs-fMRI). However, rs-fMRI does not provide an ideal environmental setting, as participants are continuously exposed to noise generated by MRI coils during the acquisition of echo-planar imaging (EPI). We investigated the effect of EPI noise on resting-state activity and connectivity using magnetoencephalography (MEG), by reproducing the acoustic characteristics of the rs-fMRI environment during the recordings. Compared to fMRI, MEG has little sensitivity to brain activity generated in deep brain structures, but it has the advantage of capturing both the dynamics of cortical magnetic oscillations with high temporal resolution and the slow magnetic fluctuations that are highly correlated with the BOLD signal. METHODS Thirty healthy subjects were enrolled in a counterbalanced design study including three conditions: a) silent resting state (Silence), b) resting state upon EPI noise (fMRI), and c) resting state upon white noise (White). White noise was employed to test the specificity of the fMRI noise effect. The amplitude envelope correlation (AEC) in the alpha band measured the connectivity of seven resting-state networks (RSNs) of interest (default mode network, dorsal attention network, language, left and right auditory, and left and right sensory-motor). Vigilance dynamics were estimated from power spectral activity. RESULTS fMRI and White acoustic noise consistently reduced connectivity of cortical networks. The effects were widespread, but noise and network specificities were also present. For fMRI noise, decreased connectivity was found in the right auditory and sensory-motor networks. A progressive increase of slow theta-delta activity related to drowsiness was found in all conditions, but was significantly higher for fMRI. Theta-delta activity correlated significantly and positively with variations in cortical connectivity. DISCUSSION rs-fMRI connectivity is biased by unavoidable environmental factors during scanning, which warrant more careful control and improved experimental designs. MEG is free from acoustic noise and allows a sensitive estimation of resting-state connectivity in cortical areas. Although underutilized, MEG could overcome issues related to noise during fMRI, in particular when investigation of motor and auditory networks is needed.
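Amplitude envelope correlation (AEC) in the alpha band, the connectivity measure used here, can be computed along the following lines. The sketch assumes node time series already extracted from MEG; the filter settings and data layout are illustrative only, not the study's exact analysis.

```python
# Minimal AEC sketch: alpha-band filtering, Hilbert envelopes, envelope correlation.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
t = np.arange(0, 60, 1 / fs)
data = np.random.randn(7, t.size)                    # placeholder signals for 7 network nodes

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)   # alpha band (8-12 Hz)
alpha = filtfilt(b, a, data, axis=1)
envelopes = np.abs(hilbert(alpha, axis=1))           # amplitude envelopes
aec = np.corrcoef(envelopes)                         # amplitude envelope correlation matrix
print(aec.shape)                                     # (7, 7) connectivity estimate
```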
14
Moerel M, Yacoub E, Gulban OF, Lage-Castellanos A, De Martino F. Using high spatial resolution fMRI to understand representation in the auditory network. Prog Neurobiol 2021; 207:101887. [PMID: 32745500] [PMCID: PMC7854960] [DOI: 10.1016/j.pneurobio.2020.101887]
Abstract
Following rapid methodological advances, ultra-high field (UHF) functional and anatomical magnetic resonance imaging (MRI) has been repeatedly and successfully used for the investigation of the human auditory system in recent years. Here, we review this work and argue that UHF MRI is uniquely suited to shed light on how sounds are represented throughout the network of auditory brain regions. That is, the provided gain in spatial resolution at UHF can be used to study the functional role of the small subcortical auditory processing stages and details of cortical processing. Further, by combining high spatial resolution with the versatility of MRI contrasts, UHF MRI has the potential to localize the primary auditory cortex in individual hemispheres. This is a prerequisite to study how sound representation in higher-level auditory cortex evolves from that in early (primary) auditory cortex. Finally, the access to independent signals across auditory cortical depths, as afforded by UHF, may reveal the computations that underlie the emergence of an abstract, categorical sound representation based on low-level acoustic feature processing. Efforts on these research topics are underway. Here we discuss promises as well as challenges that come with studying these research questions using UHF MRI, and provide a future outlook.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands.
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
- Omer Faruk Gulban
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Brain Innovation B.V., Maastricht, the Netherlands.
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Department of NeuroInformatics, Cuban Center for Neuroscience, Cuba.
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
15
Fuglsang SA, Madsen KH, Puonti O, Hjortkjær J, Siebner HR. Mapping cortico-subcortical sensitivity to 4 Hz amplitude modulation depth in human auditory system with functional MRI. Neuroimage 2021; 246:118745. [PMID: 34808364] [DOI: 10.1016/j.neuroimage.2021.118745]
Abstract
Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure the sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated auditory brainstem and cortical structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in Heschl's gyrus but was also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing the low-rate modulation content of sounds in the human auditory system.
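The parametric stimulus manipulation (4 Hz AM noise with varying modulation depth) can be sketched in a few lines; the sampling rate, duration, and depth levels below are assumptions for illustration, not the study's exact stimulus parameters.

```python
# Sketch of 4 Hz amplitude-modulated noise with parametric modulation depth.
import numpy as np

fs = 44100
dur = 2.0
t = np.arange(int(fs * dur)) / fs
carrier = np.random.randn(t.size)                      # broadband noise carrier

def am_noise(depth, fm=4.0):
    """Apply sinusoidal AM at rate fm with modulation depth in [0, 1]."""
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return carrier * envelope

stimuli = {d: am_noise(d) for d in [0.0, 0.25, 0.5, 1.0]}   # parametric depth levels
```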
Affiliation(s)
- Søren A Fuglsang
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre Denmark.
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark
16
Lin Y, Zhou X, Naya Y, Gardner JL, Sun P. Voxel-Wise Linearity Analysis of Increments and Decrements in BOLD Responses in Human Visual Cortex Using a Contrast Adaptation Paradigm. Front Hum Neurosci 2021; 15:541314. [PMID: 34531731] [PMCID: PMC8439421] [DOI: 10.3389/fnhum.2021.541314]
Abstract
The linearity of BOLD responses is a fundamental assumption in most analysis procedures for BOLD fMRI studies. Previous studies have examined the linearity of BOLD signal increments, but less is known about the linearity of BOLD signal decrements. The present study assessed the linearity of both BOLD signal increments and decrements in the human primary visual cortex using a contrast adaptation paradigm. Results showed that both BOLD signal increments and decrements remained linear for long stimuli (e.g., 3 s, 6 s), yet deviated from linearity for transient stimuli (e.g., 1 s). Furthermore, a voxel-wise analysis showed that the deviation patterns differed for BOLD signal increments and decrements: while BOLD signal increments demonstrated a consistent overestimation pattern, the patterns for BOLD signal decrements varied from overestimation to underestimation. Our results suggest that corrections to deviations from linearity of transient responses should consider the different effects of BOLD signal increments and decrements.
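A superposition test of BOLD linearity like the one described above can be illustrated as follows: if responses are linear, the response to a 3 s stimulus should match the sum of time-shifted copies of the 1 s response. The response shapes and sampling grid below are made up for illustration only.

```python
# Toy superposition test of BOLD linearity (illustrative numbers, not study data).
import numpy as np

tr = 1.0                                                      # sampling interval (s)
r_1s = np.array([0., .4, 1., .8, .4, .1, 0., 0., 0., 0.])     # hypothetical 1 s response
shift = int(1.0 / tr)

# Linear prediction for a 3 s stimulus: three 1 s responses shifted by 1 s and summed
predicted_3s = sum(np.roll(np.pad(r_1s, (0, 2 * shift)), k * shift) for k in range(3))
measured_3s = predicted_3s * 0.9                              # placeholder "measured" response
deviation = np.linalg.norm(measured_3s - predicted_3s) / np.linalg.norm(predicted_3s)
print(f"Deviation from linear prediction: {deviation:.2f}")
```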
Affiliation(s)
- Yun Lin
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Xi Zhou
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, CA, United States
- Pei Sun
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China; Laboratory for Cognitive Brain Mapping, RIKEN Center for Brain Sciences, Wako, Japan
17
Khosla M, Ngo GH, Jamison K, Kuceyeski A, Sabuncu MR. Cortical response to naturalistic stimuli is largely predictable with deep neural networks. Sci Adv 2021; 7:eabe7547. [PMID: 34049888] [PMCID: PMC8163078] [DOI: 10.1126/sciadv.abe7547]
Abstract
Naturalistic stimuli, such as movies, activate a substantial portion of the human brain, invoking a response shared across individuals. Encoding models that predict neural responses to arbitrary stimuli can be very useful for studying brain function. However, existing models focus on limited aspects of naturalistic stimuli, ignoring the dynamic interactions of modalities in this inherently context-rich paradigm. Using movie-watching data from the Human Connectome Project, we build group-level models of neural activity that incorporate several inductive biases about neural information processing, including hierarchical processing, temporal assimilation, and auditory-visual interactions. We demonstrate how incorporating these biases leads to remarkable prediction performance across large areas of the cortex, beyond the sensory-specific cortices into multisensory sites and frontal cortex. Furthermore, we illustrate that encoding models learn high-level concepts that generalize to task-bound paradigms. Together, our findings underscore the potential of encoding models as powerful tools for studying brain function in ecologically valid conditions.
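The encoding-model logic (predicting voxel responses from stimulus features and scoring held-out prediction accuracy) can be sketched with plain ridge regression. The arrays below are synthetic placeholders; the models in the paper use DNN-derived, multimodal features and far larger movie-watching datasets.

```python
# Minimal voxel-wise encoding-model sketch with synthetic feature and response arrays.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_test, n_feat, n_vox = 400, 100, 50, 10
F_train = rng.standard_normal((n_train, n_feat))     # e.g., DNN-derived stimulus features
W = rng.standard_normal((n_feat, n_vox))
Y_train = F_train @ W + rng.standard_normal((n_train, n_vox))
F_test = rng.standard_normal((n_test, n_feat))
Y_test = F_test @ W + rng.standard_normal((n_test, n_vox))

model = Ridge(alpha=10.0).fit(F_train, Y_train)      # fit one weight vector per voxel
pred = model.predict(F_test)
r = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)]
print(f"Median held-out prediction accuracy (r): {np.median(r):.2f}")
```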
Affiliation(s)
- Meenakshi Khosla
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA
- Gia H Ngo
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA
- Keith Jamison
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Amy Kuceyeski
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA.
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Nancy E. & Peter C. Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, USA
18
Brewster KK, Golub JS, Rutherford BR. Neural circuits and behavioral pathways linking hearing loss to affective dysregulation in older adults. Nat Aging 2021; 1:422-429. [PMID: 37118018] [PMCID: PMC10154034] [DOI: 10.1038/s43587-021-00065-z]
Abstract
Substantial evidence now links age-related hearing loss to incident major depressive disorder in older adults. However, research examining the neural circuits and behavioral mechanisms by which age-related hearing loss leads to depression is at an early phase. It is known that hearing loss has adverse structural and functional brain consequences, is associated with reduced social engagement and loneliness, and often results in tinnitus, which can independently affect cognitive control and emotion processing circuits. While pathways leading from these sequelae of hearing loss to affective dysregulation and depression are intuitive to hypothesize, few studies have yet been designed to provide conclusive evidence for specific pathophysiological mechanisms. Here we review the neurobiological and behavioral consequences of age-related hearing loss, present a model linking them to increased risk for major depressive disorder and suggest how future studies may facilitate the development of rationally designed therapeutic interventions for older adults with impaired hearing to reduce risk for depression and/or ameliorate depressive symptoms.
Affiliation(s)
- Katharine K Brewster
- Department of Psychiatry, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA.
- New York State Psychiatric Institute, New York, NY, USA.
- Justin S Golub
- Department of Otolaryngology-Head and Neck Surgery, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Bret R Rutherford
- Department of Psychiatry, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
19
Heard M, Li X, Lee YS. Hybrid auditory fMRI: In pursuit of increasing data acquisition while decreasing the impact of scanner noise. J Neurosci Methods 2021; 358:109198. [PMID: 33901568] [DOI: 10.1016/j.jneumeth.2021.109198]
Abstract
BACKGROUND Two challenges in auditory fMRI are the loud scanner noise during sound presentation and slow data acquisition. Here, we introduce a new auditory imaging protocol, termed "hybrid", that alleviates these obstacles. NEW METHOD We designed a within-subject experiment (N = 14) wherein language-driven activity was measured by hybrid, interleaved silent steady state (ISSS), and continuous multiband acquisition. To determine the advantage of noise attenuation during sound presentation, hybrid was compared to multiband. To identify the benefits of increased temporal resolution, hybrid was compared to ISSS. Data were evaluated by whole-brain univariate general linear modeling (GLM) and multivariate pattern analysis (MVPA). RESULTS AND CONCLUSIONS Our data revealed that hybrid imaging restored neural activity in the canonical language network that was absent due to the loud noise or slow sampling of the conventional imaging protocols. With its noise-attenuated sound presentation windows and increased acquisition speed, the hybrid protocol is well-suited for auditory fMRI research tracking neural activity pertaining to fast, time-varying acoustic events.
Affiliation(s)
- Matthew Heard
- School of Behavioral and Brain Sciences, University of Texas at Dallas, United States
- Xiangrui Li
- Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, United States
- Yune S Lee
- School of Behavioral and Brain Sciences, University of Texas at Dallas, United States; Center for BrainHealth, University of Texas at Dallas, United States.
20
Boenniger MM, Diers K, Herholz SC, Shahid M, Stöcker T, Breteler MMB, Huijbers W. A Functional MRI Paradigm for Efficient Mapping of Memory Encoding Across Sensory Conditions. Front Hum Neurosci 2021; 14:591721. [PMID: 33551773] [PMCID: PMC7859438] [DOI: 10.3389/fnhum.2020.591721]
Abstract
We introduce a new and time-efficient memory-encoding paradigm for functional magnetic resonance imaging (fMRI). This paradigm is optimized for mapping multiple contrasts using a mixed design with auditory (environmental/vocal) and visual (scene/face) stimuli. We demonstrate that the paradigm evokes robust neuronal activity in typical sensory and memory networks. We were able to detect auditory and visual sensory-specific encoding activity in auditory and visual cortices. We also detected stimulus-selective activation in environmental-, voice-, scene-, and face-selective brain regions (parahippocampal place area and fusiform face area). A subsequent recognition task allowed the detection of sensory-specific encoding success activity (ESA) in both auditory and visual cortices, as well as sensory-unspecific positive ESA in the hippocampus. Further, sensory-unspecific negative ESA was observed in the precuneus. Among other advantages, the parallel mixed design enabled the comparison of sustained and transient activity against rest blocks. Sustained and transient activations showed great overlap in most sensory brain regions, whereas several regions typically associated with the default-mode network showed transient rather than sustained deactivation. We also show that the use of a parallel mixed model had relatively little influence on positive or negative ESA. Together, these results demonstrate a feasible, versatile, and brief memory-encoding task that includes multiple sensory stimuli to guarantee a comprehensive measurement. This task is especially suitable for large-scale clinical or population studies that aim to test task-evoked sensory-specific and sensory-unspecific memory-encoding performance, as well as broad sensory activity across the life span, within a very limited time frame.
Affiliation(s)
- Meta M. Boenniger
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Kersten Diers
- Image Analysis Group, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Sibylle C. Herholz
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Mohammad Shahid
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Tony Stöcker
- MR Physics, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Monique M. B. Breteler
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Institute for Medical Biometry, Informatics and Epidemiology (IMBIE), Faculty of Medicine, University of Bonn, Bonn, Germany
- Willem Huijbers
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
21
Nakai T, Koide-Majima N, Nishimoto S. Correspondence of categorical and feature-based representations of music in the human brain. Brain Behav 2021; 11:e01936. [PMID: 33164348] [PMCID: PMC7821620] [DOI: 10.1002/brb3.1936]
Abstract
INTRODUCTION Humans tend to categorize auditory stimuli into discrete classes, such as animal species, language, musical instrument, and music genre. Of these, music genre is a frequently used dimension of human music preference and is determined based on the categorization of complex auditory stimuli. Neuroimaging studies have reported that the superior temporal gyrus (STG) is involved in response to general music-related features. However, there is considerable uncertainty over how discrete music categories are represented in the brain and which acoustic features are more suited for explaining such representations. METHODS We used a total of 540 music clips to examine comprehensive cortical representations and the functional organization of music genre categories. For this purpose, we applied a voxel-wise modeling approach to music-evoked brain activity measured using functional magnetic resonance imaging. In addition, we introduced a novel technique for feature-brain similarity analysis and assessed how discrete music categories are represented based on the cortical response pattern to acoustic features. RESULTS Our findings indicated distinct cortical organizations for different music genres in the bilateral STG, and they revealed representational relationships between different music genres. On comparing different acoustic feature models, we found that these representations of music genres could be explained largely by a biologically plausible spectro-temporal modulation-transfer function model. CONCLUSION Our findings have elucidated the quantitative representation of music genres in the human cortex, indicating the possibility of modeling this categorization of complex auditory stimuli based on brain activity.
Affiliation(s)
- Tomoya Nakai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan
- Naoko Koide-Majima
- Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; AI Science Research and Development Promotion Center, National Institute of Information and Communications Technology, Suita, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; Graduate School of Medicine, Osaka University, Suita, Japan
22
Rienäcker F, Van Gerven PWM, Jacobs HIL, Eck J, Van Heugten CM, Guerreiro MJS. The Neural Correlates of Visual and Auditory Cross-Modal Selective Attention in Aging. Front Aging Neurosci 2020; 12:498978. [PMID: 33304265] [PMCID: PMC7693624] [DOI: 10.3389/fnagi.2020.498978]
Abstract
Age-related deficits in selective attention have been demonstrated to depend on the sensory modality through which targets and distractors are presented. Some of these investigations suggest a specific impairment of cross-modal auditory selective attention. For the first time, this study takes a whole-brain approach, including a passive perception baseline, to investigate the neural underpinnings of selective attention across age groups while taking the sensory modality of relevant and irrelevant (i.e., distracting) stimuli into account. Sixteen younger (mean age = 23.3 years) and 14 older (mean age = 65.3 years) healthy participants performed a series of delayed match-to-sample tasks, in which they had to selectively attend to visual stimuli, selectively attend to auditory stimuli, or passively view and hear both types of stimuli, while undergoing 3T fMRI. The imaging analyses showed that areas recruited by cross-modal visual and auditory selective attention in both age groups included parts of the dorsal attention and frontoparietal control networks (i.e., intraparietal sulcus, insula, fusiform gyrus, anterior cingulate, and inferior frontal cortex). Most importantly, activation throughout the brain did not differ across age groups, suggesting intact brain function during cross-modal selective attention in older adults. Moreover, stronger brain activation during cross-modal visual than cross-modal auditory selective attention was found in both age groups, which is consistent with earlier accounts of visual dominance. In conclusion, these results do not support the hypothesized age-related deficit of cross-modal auditory selective attention. Instead, they suggest that the underlying neural correlates of cross-modal selective attention are similar in younger and older adults.
Affiliation(s)
- Franziska Rienäcker
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Pascal W M Van Gerven
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education (SHE), Maastricht University, Maastricht, Netherlands
- Heidi I L Jacobs
- Department of Psychiatry and Neuropsychology, Faculty of Health, Medicine and Life Sciences, School of Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands; Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Boston, MA, United States
- Judith Eck
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Caroline M Van Heugten
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Psychiatry and Neuropsychology, Faculty of Health, Medicine and Life Sciences, School of Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maria J S Guerreiro
- Biological Psychology and Neuropsychology, Institute for Psychology, University of Hamburg, Hamburg, Germany
23
Floegel M, Fuchs S, Kell CA. Differential contributions of the two cerebral hemispheres to temporal and spectral speech feedback control. Nat Commun 2020; 11:2839. [PMID: 32503986] [PMCID: PMC7275068] [DOI: 10.1038/s41467-020-16743-2]
Abstract
Proper speech production requires auditory speech feedback control. Models of speech production associate this function with the right cerebral hemisphere while the left hemisphere is proposed to host speech motor programs. However, previous studies have investigated only spectral perturbations of the auditory speech feedback. Since auditory perception is known to be lateralized, with right-lateralized analysis of spectral features and left-lateralized processing of temporal features, it is unclear whether the observed right-lateralization of auditory speech feedback processing reflects a preference for speech feedback control or for spectral processing in general. Here we use a behavioral speech adaptation experiment with dichotically presented altered auditory feedback and an analogous fMRI experiment with binaurally presented altered feedback to confirm a right hemisphere preference for spectral feedback control and to reveal a left hemisphere preference for temporal feedback control during speaking. These results indicate that auditory feedback control involves both hemispheres with differential contributions along the spectro-temporal axis.
Affiliation(s)
- Mareike Floegel
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Schleusenweg 2-16, 60528, Frankfurt, Germany
- Susanne Fuchs
- Leibniz-Centre General Linguistics (ZAS), Schuetzenstr. 18, 10117, Berlin, Germany
- Christian A Kell
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Schleusenweg 2-16, 60528, Frankfurt, Germany
24
Alhazmi FH. White-matter integrity and hearing acuity decline in healthy subjects: Magnetic resonance tractography. Neuroradiol J 2020; 33:236-243. [PMID: 32216576 DOI: 10.1177/1971400920913868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
AIM The association between hearing acuity and white-matter (WM) microstructure integrity was evaluated in a normal healthy population with a range of hearing acuities, using an automated tractography technique known as TRACULA (TRActs Constrained by UnderLying Anatomy), in order to investigate whether hearing acuity decline is correlated with brain structural connectivity. METHODS Forty healthy controls were recruited to this study, which used a Siemens 3T Trio with a standard eight-channel head coil. Hearing acuity was assessed using pure-tone air conduction audiometry (Amplivox 2160, with Audiocups to eliminate noise and allow accurate pure-tone audiometry). Handedness, anxiety and depression were assessed for all participants using the Edinburgh Handedness Inventory and the Hospital Anxiety and Depression Scale, respectively. RESULTS This study showed a significant difference in WM volume of the left cingulum angular bundle (CAB; t = 2.32, p = 0.02) between the mild to moderate hearing-loss group (238 ± 223 mm2) and the group with normal hearing (105 ± 121 mm2). The WM integrity of the left CAB was found to be significantly different (t = 2.06, p = 0.04) in the mild to moderate hearing-loss group (0.18 ± 0.06 mm2/s) compared to the group with normal hearing (0.22 ± 0.05 mm2/s). The WM integrity of the left anterior thalamic radiation (ATR) was found to be significantly different (t = 2.58, p = 0.014) in the mild to moderate hearing-loss group (0.33 ± 0.05 mm2/s) compared to the group with normal hearing (0.37 ± 0.03 mm2/s). A significant negative correlation was found between age and the WM integrity of the right ATR (r = -0.33, p = 0.038), and between hearing acuity and the WM integrity of the right ATR (r = -0.38, p = 0.013) and left CAB (r = -0.36, p = 0.019). DISCUSSION AND CONCLUSION An important finding in this study is that brain structural connectivity changes in the left hemisphere, found mainly in the ATR and CAB tracts, seem to be associated with age-related hearing loss.
Affiliation(s)
- Fahad H Alhazmi
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah, Saudi Arabia; Institute of Translational Medicine, Faculty of Health and Life Sciences, University of Liverpool, UK
25
Fei N, Ge J, Wang Y, Gao JH. Aging-related differences in the cortical network subserving intelligible speech. BRAIN AND LANGUAGE 2020; 201:104713. [PMID: 31759299 DOI: 10.1016/j.bandl.2019.104713] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 10/29/2019] [Accepted: 10/30/2019] [Indexed: 06/10/2023]
Abstract
Language communication is crucial throughout the lifespan. The current study investigated how aging affects the brain network subserving intelligible speech. Using functional magnetic resonance imaging, we compared brain responses to intelligible and unintelligible speech between older and young adults. Univariate and multivariate analyses revealed reduced brain activation and lower regional pattern distinctions in response to intelligible versus unintelligible speech in the left anterior superior temporal gyrus (aSTG) and the left inferior frontal gyrus (IFG) in the older compared with young adults. Notably, the functional connectivity between the left IFG and the left angular gyrus (AG) was increased and a significantly enhanced bidirectional effective connectivity between the left aSTG and the left AG was observed in the older adults for processing speech intelligibility. Our study revealed aging-related differences in the cortical activity for intelligible speech and suggested that increased frontal-temporal-parietal functional integration may help facilitate spoken language processing in older adults.
Affiliation(s)
- Nanxi Fei
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Jianqiao Ge
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Yi Wang
- Public Health Science and Engineering College, Tianjin University of Traditional Chinese Medicine, Tianjin, China
- Jia-Hong Gao
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; McGovern Institute for Brain Research, Peking University, Beijing, China
26
Chen X, Tong C, Han Z, Zhang K, Bo B, Feng Y, Liang Z. Sensory evoked fMRI paradigms in awake mice. Neuroimage 2020; 204:116242. [DOI: 10.1016/j.neuroimage.2019.116242] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 09/08/2019] [Accepted: 10/02/2019] [Indexed: 01/25/2023] Open
27
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. [PMID: 28938250 PMCID: PMC5821557 DOI: 10.1097/aud.0000000000000494] [Citation(s) in RCA: 309] [Impact Index Per Article: 61.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2017] [Accepted: 07/28/2017] [Indexed: 02/04/2023]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
28
Ogg M, Moraczewski D, Kuchinsky SE, Slevc LR. Separable neural representations of sound sources: Speaker identity and musical timbre. Neuroimage 2019; 191:116-126. [PMID: 30731247 DOI: 10.1016/j.neuroimage.2019.01.075] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Revised: 12/14/2018] [Accepted: 01/30/2019] [Indexed: 11/28/2022] Open
Abstract
Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio, and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
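The decoding approach described in this abstract is a standard multivariate pattern analysis: a classifier is trained on voxel response patterns and scored with cross-validation. As a rough illustration only (not the authors' pipeline; the synthetic data, labels and choice of a linear SVM are assumptions made for the sketch), the idea can be written in Python as:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)        # e.g., 0 = speech, 1 = instrument
patterns = rng.normal(size=(n_trials, n_voxels)) # simulated voxel response patterns
patterns[labels == 1, :20] += 0.5                # inject a weak category signal

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, patterns, labels, cv=5)
print("Mean cross-validated decoding accuracy:", scores.mean())

Chance level here is 0.5, so accuracies reliably above that indicate that the voxel patterns carry category information.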
Affiliation(s)
- Mattson Ogg
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA
- Dustin Moraczewski
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA
- Stefanie E Kuchinsky
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Center for Advanced Study of Language, University of Maryland, College Park, MD, 20742, USA; Maryland Neuroimaging Center, University of Maryland, College Park, MD, 20742, USA
- L Robert Slevc
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA
29
Macedonia M, Repetto C, Ischebeck A, Mueller K. Depth of Encoding Through Observed Gestures in Foreign Language Word Learning. Front Psychol 2019; 10:33. [PMID: 30761033 PMCID: PMC6361807 DOI: 10.3389/fpsyg.2019.00033] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Accepted: 01/08/2019] [Indexed: 11/13/2022] Open
Abstract
Word learning is basic to foreign language acquisition; however, it is time-consuming and not always successful. Empirical studies have shown that traditional (visual) word learning can be enhanced by gestures. The gesture benefit has been attributed to depth of encoding. Gestures can lead to depth of encoding because they trigger semantic processing and sensorimotor enrichment of the novel word. However, the neural underpinning of depth of encoding is still unclear. Here, we combined an fMRI and a behavioral study to investigate word encoding online. In the scanner, participants encoded 30 novel words of an artificial language created for experimental purposes and their translation into the subjects' native language. Participants encoded the words three times: visually, audiovisually, and by additionally observing semantically related gestures performed by an actress. Hemodynamic activity during word encoding revealed the recruitment of cortical areas involved in stimulus processing. In this study, depth of encoding can be spelt out in terms of sensorimotor brain networks that grow larger the more sensory modalities are linked to the novel word. Word retention outside the scanner documented a positive effect of gestures in a free recall test in the short term.
Affiliation(s)
- Manuela Macedonia
- Department of Information Engineering, Johannes Kepler University Linz, Linz, Austria; Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Claudia Repetto
- Department of Psychology, Università Cattolica Sacro Cuore, Milan, Italy
- Anja Ischebeck
- Group Cognitive Psychology and Neuroscience, University of Graz, Graz, Austria
- Karsten Mueller
- Nuclear Magnetic Resonance Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
30
Bortfeld H. Functional near-infrared spectroscopy as a tool for assessing speech and spoken language processing in pediatric and adult cochlear implant users. Dev Psychobiol 2018; 61:430-443. [PMID: 30588618 DOI: 10.1002/dev.21818] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Revised: 11/04/2018] [Accepted: 11/16/2018] [Indexed: 11/11/2022]
Abstract
Much of what is known about the course of auditory learning following cochlear implantation is based on behavioral indicators that users are able to perceive sound. Both prelingually deafened children and postlingually deafened adults who receive cochlear implants display highly variable speech and language processing outcomes, although the basis for this is poorly understood. To date, measuring neural activity within the auditory cortex of implant recipients of all ages has been challenging, primarily because the use of traditional neuroimaging techniques is limited by the implant itself. Functional near-infrared spectroscopy (fNIRS) is an imaging technology that works with implant users of all ages because it is non-invasive, compatible with implant devices, and not subject to electrical artifacts. Thus, fNIRS can provide insight into processing factors that contribute to variations in spoken language outcomes in implant users, both children and adults. There are important considerations to be made when using fNIRS, particularly with children, to maximize the signal-to-noise ratio and to best identify and interpret cortical responses. This review considers these issues, recent data, and future directions for using fNIRS as a tool to understand spoken language processing in children and adults who hear through a cochlear implant.
Affiliation(s)
- Heather Bortfeld
- Psychological Sciences, University of California, Merced, Merced, California
31
Whitehead JC, Armony JL. Singing in the brain: Neural representation of music and voice as revealed by fMRI. Hum Brain Mapp 2018; 39:4913-4924. [PMID: 30120854 PMCID: PMC6866591 DOI: 10.1002/hbm.24333] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 05/25/2018] [Accepted: 07/15/2018] [Indexed: 12/13/2022] Open
Abstract
The ubiquity of music across cultures as a means of emotional expression, and its proposed evolutionary relation to speech, motivated researchers to attempt a characterization of its neural representation. Several neuroimaging studies have reported that specific regions in the anterior temporal lobe respond more strongly to music than to other auditory stimuli, including spoken voice. Nonetheless, because most studies have employed instrumental music, which has important acoustic distinctions from human voice, questions still exist as to the specificity of the observed "music-preferred" areas. Here, we sought to address this issue by testing 24 healthy young adults with fast, high-resolution fMRI, to record neural responses to a large and varied set of musical stimuli, which, critically, included a capella singing, as well as purely instrumental excerpts. Our results confirmed that music, vocal or instrumental, preferentially engaged regions in the superior temporal gyrus (STG), particularly in the anterior planum polare, bilaterally. In contrast, human voice, either spoken or sung, more strongly activated a large area along the superior temporal sulcus. Findings were consistent between univariate and multivariate analyses, as well as with the use of a "silent" sparse acquisition sequence that minimizes any potential influence of scanner noise on the resulting activations. Activity in music-preferred regions could not be accounted for by any basic acoustic parameter tested, suggesting these areas integrate, likely in a nonlinear fashion, a combination of acoustic attributes that, together, result in the perceived musicality of the stimuli, consistent with proposed hierarchical processing of complex auditory information within the temporal lobes.
Affiliation(s)
- Jocelyne C. Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L. Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
32
Gennari SP, Millman RE, Hymers M, Mattys SL. Anterior paracingulate and cingulate cortex mediates the effects of cognitive load on speech sound discrimination. Neuroimage 2018; 178:735-743. [DOI: 10.1016/j.neuroimage.2018.06.035] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2017] [Revised: 06/07/2018] [Accepted: 06/10/2018] [Indexed: 11/28/2022] Open
33
Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018; 5:eN-NWR-0263-17. [PMID: 29911176 PMCID: PMC6001266 DOI: 10.1523/eneuro.0263-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 04/17/2018] [Accepted: 04/18/2018] [Indexed: 12/11/2022] Open
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in the right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
34
Nettekoven C, Reck N, Goldbrunner R, Grefkes C, Weiß Lucas C. Short- and long-term reliability of language fMRI. Neuroimage 2018; 176:215-225. [PMID: 29704615 DOI: 10.1016/j.neuroimage.2018.04.050] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Revised: 03/23/2018] [Accepted: 04/22/2018] [Indexed: 12/22/2022] Open
Abstract
When using functional magnetic resonance imaging (fMRI) for mapping important language functions, a high test-retest reliability is mandatory, both in basic scientific research and for clinical applications. We, therefore, systematically tested the short- and long-term reliability of fMRI in a group of healthy subjects using a picture naming task and a sparse-sampling fMRI protocol. We hypothesized that test-retest reliability might be higher for (i) speech-related motor areas than for other language areas and for (ii) the short as compared to the long intersession interval. 16 right-handed subjects (mean age: 29 years) participated in three sessions separated by 2-6 (session 1 and 2, short-term) and 21-34 days (session 1 and 3, long-term). Subjects were asked to perform the same overt picture naming task in each fMRI session (50 black-and-white images per session). Reliability was tested using the following measures: (i) Euclidean distances (ED) between local activation maxima and Centers of Gravity (CoGs), (ii) overlap volumes and (iii) voxel-wise intraclass correlation coefficients (ICCs). Analyses were performed for three regions of interest which were chosen based on whole-brain group data: primary motor cortex (M1), superior temporal gyrus (STG) and inferior frontal gyrus (IFG). Our results revealed that the activation centers were highly reliable, independent of the time interval, ROI or hemisphere, with significantly smaller ED for the local activation maxima (6.45 ± 1.36 mm) as compared to the CoGs (8.03 ± 2.01 mm). In contrast, the extent of activation revealed rather low reliability values with overlaps ranging from 24% (IFG) to 56% (STG). Here, the left hemisphere showed significantly higher overlap volumes than the right hemisphere. Although mean ICCs ranged between poor (ICC<0.5) and moderate (ICC 0.5-0.74) reliability, highly reliable voxels (ICC>0.75) were found for all ROIs. Voxel-wise reliability of the different ROIs was influenced by the intersession interval. Taken together, we could show that, despite considerable ROI-dependent variations of the extent of activation over time, highly reliable centers of activation can be identified using an overt picture naming paradigm.
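The three reliability measures named in this abstract (Euclidean distances between activation peaks, overlap volumes, and voxel-wise intraclass correlation coefficients) are all simple to compute. The Python sketch below, run on synthetic data and not taken from the study, shows one common formulation of each; the use of the Dice coefficient for overlap and ICC(3,1) for voxel-wise reliability are assumptions about the exact variants, made only for illustration.

import numpy as np

def euclidean_distance(peak_a, peak_b):
    # Distance in mm between two activation peaks given as (x, y, z) coordinates.
    return float(np.linalg.norm(np.asarray(peak_a) - np.asarray(peak_b)))

def overlap_dice(mask_a, mask_b):
    # Dice coefficient of two binary (thresholded) activation masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def icc_3_1(data):
    # ICC(3,1) for a voxels-by-sessions matrix (two-way mixed, consistency).
    n, k = data.shape
    mean_rows = data.mean(axis=1, keepdims=True)
    mean_cols = data.mean(axis=0, keepdims=True)
    grand = data.mean()
    ms_rows = k * ((mean_rows - grand) ** 2).sum() / (n - 1)
    ms_err = ((data - mean_rows - mean_cols + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(0)
session1 = rng.normal(size=1000)                       # voxel-wise activation, session 1
session2 = 0.7 * session1 + 0.5 * rng.normal(size=1000)  # correlated session 2
print(euclidean_distance((42, -22, 8), (45, -20, 10)))
print(overlap_dice(session1 > 1.0, session2 > 1.0))
print(icc_3_1(np.column_stack([session1, session2])))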
Affiliation(s)
- Charlotte Nettekoven
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany; Department of Neurology, Cologne University Hospital, 50924, Cologne, Germany
- Nicola Reck
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
- Roland Goldbrunner
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
- Christian Grefkes
- Department of Neurology, Cologne University Hospital, 50924, Cologne, Germany; Institute of Neuroscience and Medicine (INM-3), Juelich Research Centre, 52428, Juelich, Germany
- Carolin Weiß Lucas
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
35
Hutter J, Price AN, Cordero‐Grande L, Malik S, Ferrazzi G, Gaspar A, Hughes EJ, Christiaens D, McCabe L, Schneider T, Rutherford MA, Hajnal JV. Quiet echo planar imaging for functional and diffusion MRI. Magn Reson Med 2018; 79:1447-1459. [PMID: 28653363 PMCID: PMC5836719 DOI: 10.1002/mrm.26810] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2017] [Revised: 05/30/2017] [Accepted: 05/31/2017] [Indexed: 11/19/2022]
Abstract
PURPOSE To develop a purpose-built quiet echo planar imaging capability for fetal functional and diffusion scans, for which acoustic considerations often compromise efficiency and resolution as well as angular/temporal coverage. METHODS The gradient waveforms in multiband-accelerated single-shot echo planar imaging sequences have been redesigned to minimize spectral content. This includes a sinusoidal read-out with a single fundamental frequency, a constant phase encoding gradient, overlapping smoothed CAIPIRINHA blips, and a novel strategy to merge the crushers in diffusion MRI. These changes are then tuned in conjunction with the gradient system frequency response function. RESULTS Maintained image quality, SNR, and quantitative diffusion values while reducing acoustic noise up to 12 dB (A) is illustrated in two adult experiments. Fetal experiments in 10 subjects covering a range of parameters depict the adaptability and increased efficiency of quiet echo planar imaging. CONCLUSION Purpose-built for highly efficient multiband fetal echo planar imaging studies, the presented framework reduces acoustic noise for all echo planar imaging-based sequences. Full optimization by tuning to the gradient frequency response functions allows for a maximally time-efficient scan within safe limits. This allows ambitious in-utero studies such as functional brain imaging with high spatial/temporal resolution and diffusion scans with high angular/spatial resolution to be run in a highly efficient manner at acceptable sound levels.
Affiliation(s)
- Jana Hutter
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Anthony N. Price
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Lucilio Cordero-Grande
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Shaihan Malik
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Giulio Ferrazzi
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Andreia Gaspar
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Emer J. Hughes
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Daan Christiaens
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
- Laura McCabe
- Centre for the Developing Brain, King's College London, London, UK
- Joseph V. Hajnal
- Centre for the Developing Brain, King's College London, London, UK; Biomedical Engineering Department, King's College London, London, UK
36
Wijayasiri P, Hartley DE, Wiggins IM. Brain activity underlying the recovery of meaning from degraded speech: A functional near-infrared spectroscopy (fNIRS) study. Hear Res 2017; 351:55-67. [DOI: 10.1016/j.heares.2017.05.010] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Revised: 05/11/2017] [Accepted: 05/23/2017] [Indexed: 11/30/2022]
37
Abstract
In this review I introduce the historical context and methods of optical neuroimaging, leading to the modern use of functional near-infrared spectroscopy (fNIRS) and high-density diffuse optical tomography (HD-DOT) to study human brain function. In its most frequent application, optical neuroimaging measures a hemodynamically-mediated signal indirectly related to neural processing, similar to that captured by fMRI. Compared to other approaches to measuring human brain function, optical imaging has many advantages: it is noninvasive, frequently portable, acoustically silent, robust to motion and muscle movement, and appropriate in many situations in which fMRI is not possible (for example, due to implanted medical devices). Challenges include producing a full-brain field of view, homogenous spatial resolution, and accurate source localization. Experimentally, optical neuroimaging has been used to study phoneme, word, and sentence processing in a variety of paradigms. With continuing technical and methodological improvements the future of optical neuroimaging is increasingly bright.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
38
Andoh J, Ferreira M, Leppert I, Matsushita R, Pike B, Zatorre R. How restful is it with all that noise? Comparison of Interleaved silent steady state (ISSS) and conventional imaging in resting-state fMRI. Neuroimage 2017; 147:726-735. [DOI: 10.1016/j.neuroimage.2016.11.065] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 11/03/2016] [Accepted: 11/26/2016] [Indexed: 01/24/2023] Open
39
Rogers JC, Davis MH. Inferior Frontal Cortex Contributions to the Recognition of Spoken Words and Their Constituent Speech Sounds. J Cogn Neurosci 2017; 29:919-936. [PMID: 28129061 DOI: 10.1162/jocn_a_01096] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Speech perception and comprehension are often challenged by the need to recognize speech sounds that are degraded or ambiguous. Here, we explore the cognitive and neural mechanisms involved in resolving ambiguity in the identity of speech sounds using syllables that contain ambiguous phonetic segments (e.g., intermediate sounds between /b/ and /g/ as in "blade" and "glade"). We used an audio-morphing procedure to create a large set of natural sounding minimal pairs that contain phonetically ambiguous onset or offset consonants (differing in place, manner, or voicing). These ambiguous segments occurred in different lexical contexts (i.e., in words or pseudowords, such as blade-glade or blem-glem) and in different phonological environments (i.e., with neighboring syllables that differed in lexical status, such as blouse-glouse). These stimuli allowed us to explore the impact of phonetic ambiguity on the speed and accuracy of lexical decision responses (Experiment 1), semantic categorization responses (Experiment 2), and the magnitude of BOLD fMRI responses during attentive comprehension (Experiment 3). For both behavioral and neural measures, observed effects of phonetic ambiguity were influenced by lexical context leading to slower responses and increased activity in the left inferior frontal gyrus for high-ambiguity syllables that distinguish pairs of words, but not for equivalent pseudowords. These findings suggest lexical involvement in the resolution of phonetic ambiguity. Implications for speech perception and the role of inferior frontal regions are discussed.
Affiliation(s)
- Jack C Rogers
- MRC Cognition & Brain Sciences Unit, Cambridge, UK; University of Birmingham
40
Quinn C, Taylor JSH, Davis MH. Learning and retrieving holistic and componential visual-verbal associations in reading and object naming. Neuropsychologia 2016; 98:68-84. [PMID: 27720949 PMCID: PMC5407349 DOI: 10.1016/j.neuropsychologia.2016.09.025] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2016] [Revised: 08/05/2016] [Accepted: 09/28/2016] [Indexed: 11/27/2022]
Abstract
Understanding the neural processes that underlie learning to read can provide a scientific foundation for literacy education but studying these processes in real-world contexts remains challenging. We present behavioural data from adult participants learning to read artificial words and name artificial objects over two days. Learning profiles and generalisation confirmed that componential learning of visual-verbal associations distinguishes reading from object naming. Functional MRI data collected on the second day allowed us to identify the neural systems that support componential reading as distinct from systems supporting holistic visual-verbal associations in object naming. Results showed increased activation in posterior ventral occipitotemporal (vOT), parietal, and frontal cortices when reading an artificial orthography compared to naming artificial objects, and the reverse profile in anterior vOT regions. However, activation differences between trained and untrained words were absent, suggesting a lack of cortical representations for whole words. Despite this, hippocampal responses provided some evidence for overnight consolidation of both words and objects learned on day 1. The comparison between neural activity for artificial words and objects showed extensive overlap with systems differentially engaged for real object naming and English word/pseudoword reading in the same participants. These findings therefore provide evidence that artificial learning paradigms offer an alternative method for studying the neural systems supporting language and literacy. Implications for literacy acquisition are discussed.
Highlights:
- Generalisation of novel orthography shows componential processing in reading.
- Real and artificial words and objects rely upon the same neural systems.
- Different neural systems support reading novel orthography and naming novel objects.
- No evidence of whole-word cortical representations for artificial written words.
- Reduced hippocampal responses suggest overnight consolidation of artificial items.
Affiliation(s)
- Connor Quinn
- MRC Cognition and Brain Sciences Unit, Cambridge, UK; Department of Theoretical and Applied Linguistics, University of Cambridge, UK
- J S H Taylor
- Department of Psychology, Royal Holloway University of London, Egham, Surrey, UK
41
Wiggins IM, Anderson CA, Kitterick PT, Hartley DEH. Speech-evoked activation in adult temporal cortex measured using functional near-infrared spectroscopy (fNIRS): Are the measurements reliable? Hear Res 2016; 339:142-54. [PMID: 27451015 PMCID: PMC5026156 DOI: 10.1016/j.heares.2016.07.007] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Revised: 07/13/2016] [Accepted: 07/18/2016] [Indexed: 11/19/2022]
Abstract
Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements.
Affiliation(s)
- Ian M Wiggins
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom; Medical Research Council (MRC) Institute of Hearing Research, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Carly A Anderson
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom
- Pádraig T Kitterick
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom
- Douglas E H Hartley
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom; Medical Research Council (MRC) Institute of Hearing Research, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham, NG7 2UH, United Kingdom
42
Cardin V. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions. Front Neurosci 2016; 10:199. [PMID: 27242405 PMCID: PMC4862970 DOI: 10.3389/fnins.2016.00199] [Citation(s) in RCA: 77] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Accepted: 04/22/2016] [Indexed: 11/13/2022] Open
Abstract
Hearing loss is a common feature in human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss during older age. Aging also has well documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss in the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss.
Affiliation(s)
- Velia Cardin
- Department of Experimental Psychology, Deafness, Cognition and Language Research Centre, University College London, London, UK; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
43
Angenstein N, Stadler J, Brechmann A. Auditory intensity processing: Effect of MRI background noise. Hear Res 2016; 333:87-92. [DOI: 10.1016/j.heares.2016.01.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/19/2015] [Revised: 01/07/2016] [Accepted: 01/13/2016] [Indexed: 10/22/2022]
44
Acoustic richness modulates the neural networks supporting intelligible speech processing. Hear Res 2015; 333:108-117. [PMID: 26723103 DOI: 10.1016/j.heares.2015.12.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 12/07/2015] [Accepted: 12/10/2015] [Indexed: 11/20/2022]
Abstract
The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high.
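Noise-vocoding of the kind described here is conceptually simple: the signal is split into frequency bands, the amplitude envelope of each band is extracted, and each envelope modulates band-limited noise before the bands are summed. The short Python sketch below illustrates the technique on a synthetic signal; it is only an illustration, and the filter order, logarithmic band spacing and frequency range are assumptions rather than the parameters of the cited study.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=24, f_lo=100.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # assumed log-spaced band edges
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)               # band-limited noise carrier
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)          # normalise to avoid clipping

# Example: vocode one second of a synthetic, speech-like modulated tone.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)

With a relatively large number of channels (such as the 24 used in the study), the vocoded output retains enough spectro-temporal detail to remain highly intelligible while discarding fine spectral structure.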
45
Evans S, McGettigan C, Agnew ZK, Rosen S, Scott SK. Getting the Cocktail Party Started: Masking Effects in Speech Perception. J Cogn Neurosci 2015; 28:483-500. [PMID: 26696297 DOI: 10.1162/jocn_a_00913] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI, while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream and that individuals who perform better in speech in noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right lateralized frontal regions consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Affiliation(s)
- Zarinah K Agnew
- University College London; University of California, San Francisco
46
Neural underpinnings of background acoustic noise in normal aging and mild cognitive impairment. Neuroscience 2015; 310:410-21. [DOI: 10.1016/j.neuroscience.2015.09.031] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Revised: 08/27/2015] [Accepted: 09/10/2015] [Indexed: 01/10/2023]
47
Scharinger M, Bendixen A, Herrmann B, Henry MJ, Mildner T, Obleser J. Predictions interact with missing sensory evidence in semantic processing areas. Hum Brain Mapp 2015; 37:704-16. [PMID: 26583355 DOI: 10.1002/hbm.23060] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Revised: 11/06/2015] [Accepted: 11/08/2015] [Indexed: 11/07/2022] Open
Abstract
Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas.
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Alexandra Bendixen
- Department of Physics, School of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Björn Herrmann
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, Brain and Mind Institute, University of Western Ontario, London, Canada
- Molly J Henry
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, Brain and Mind Institute, University of Western Ontario, London, Canada
- Toralf Mildner
- Nuclear Magnetic Resonance Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, University of Lübeck, Lübeck, Germany
48
Mapping cortical responses to speech using high-density diffuse optical tomography. Neuroimage 2015; 117:319-26. [PMID: 26026816 DOI: 10.1016/j.neuroimage.2015.05.058] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2015] [Revised: 04/16/2015] [Accepted: 05/20/2015] [Indexed: 11/21/2022] Open
Abstract
The functional neuroanatomy of speech processing has been investigated using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) for more than 20 years. However, these approaches have relatively poor temporal resolution and/or challenges of acoustic contamination due to the constraints of echoplanar fMRI. Furthermore, these methods are contraindicated because of safety concerns in longitudinal studies and research with children (PET) or in studies of patients with metal implants (fMRI). High-density diffuse optical tomography (HD-DOT) permits presenting speech in a quiet acoustic environment, has excellent temporal resolution relative to the hemodynamic response, and provides noninvasive and metal-compatible imaging. However, the performance of HD-DOT in imaging the brain regions involved in speech processing is not fully established. In the current study, we use an auditory sentence comprehension task to evaluate the ability of HD-DOT to map the cortical networks supporting speech processing. Using sentences with two levels of linguistic complexity, along with a control condition consisting of unintelligible noise-vocoded speech, we recovered a hierarchically organized speech network that matches the results of previous fMRI studies. Specifically, hearing intelligible speech resulted in increased activity in bilateral temporal cortex and left frontal cortex, with syntactically complex speech leading to additional activity in left posterior temporal cortex and left inferior frontal gyrus. These results demonstrate the feasibility of using HD-DOT to map spatially distributed brain networks supporting higher-order cognitive faculties such as spoken language.
Collapse
|
49
|
Prediction and constraint in audiovisual speech perception. Cortex 2015; 68:169-81. [PMID: 25890390 DOI: 10.1016/j.cortex.2015.03.006] [Citation(s) in RCA: 115] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 01/28/2015] [Accepted: 03/08/2015] [Indexed: 11/23/2022]
Abstract
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus in integrative processing. We interpret these findings in a framework of temporally focused lexical competition in which visual speech information affects auditory processing through an early integration mechanism that increases sensitivity to acoustic information, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately, it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms.
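The entrainment effect mentioned above is commonly quantified as spectral coherence between the speech amplitude envelope and the neural signal in low, syllable-rate frequency bands. A minimal sketch of that computation follows; it uses entirely simulated signals, an assumed sampling rate, and an assumed frequency band, and is not drawn from any of the reviewed studies.

```python
# Minimal sketch (all signals simulated): quantifying "entrainment" as
# coherence between the speech amplitude envelope and a neural recording,
# comparing an audio-only condition with an audiovisual condition.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

# Simulated speech envelope: slow (~4 Hz, syllable-rate) fluctuations
envelope = np.abs(hilbert(np.sin(2 * np.pi * 4 * t) + 0.3 * rng.normal(size=t.size)))

def simulated_neural(env, coupling, noise=1.0):
    """Synthetic neural signal that tracks the envelope with a given coupling."""
    return coupling * env + noise * rng.normal(size=env.size)

neural_audio_only = simulated_neural(envelope, coupling=0.5)
neural_audiovisual = simulated_neural(envelope, coupling=1.0)   # stronger tracking

for label, sig in [("audio only", neural_audio_only), ("audiovisual", neural_audiovisual)]:
    f, cxy = coherence(envelope, sig, fs=fs, nperseg=int(4 * fs))
    band = (f >= 2) & (f <= 8)               # delta/theta band carrying the envelope
    print(f"{label}: mean 2-8 Hz coherence = {cxy[band].mean():.2f}")
```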
Collapse
|
50
|
Hymers M, Prendergast G, Liu C, Schulze A, Young ML, Wastling SJ, Barker GJ, Millman RE. Neural mechanisms underlying song and speech perception can be differentiated using an illusory percept. Neuroimage 2014; 108:225-33. [PMID: 25512041 DOI: 10.1016/j.neuroimage.2014.12.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2014] [Revised: 10/27/2014] [Accepted: 12/04/2014] [Indexed: 11/16/2022] Open
Abstract
The issue of whether human perception of speech and song recruits integrated or dissociated neural systems is contentious. This issue is difficult to address directly since these stimulus classes differ in their physical attributes. We therefore used a compelling illusion (Deutsch et al. 2011) in which acoustically identical auditory stimuli are perceived as either speech or song. Deutsch's illusion was used in a functional MRI experiment to provide a direct, within-subject investigation of the brain regions involved in the perceptual transformation from speech into song, independent of the physical characteristics of the presented stimuli. An overall differential effect resulting from the perception of song compared with that of speech was revealed in right midposterior superior temporal sulcus/right middle temporal gyrus. A left frontotemporal network, previously implicated in higher-level cognitive analyses of music and speech, was found to co-vary with a behavioural measure of the subjective vividness of the illusion, and this effect was driven by the illusory transformation. These findings provide evidence that illusory song perception is instantiated by a network of brain regions that are predominantly shared with the speech perception network.
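The covariation analysis mentioned above (a neural effect scaling with the subjective vividness of the illusion) reduces, at its simplest, to regressing a per-subject contrast estimate on a behavioural rating. The sketch below uses invented numbers and an assumed 1-7 rating scale purely to illustrate that logic; it does not reproduce the study's analysis.

```python
# Minimal sketch (simulated values): relating a per-subject song-vs-speech
# activation difference to a rating of how vividly the illusory
# transformation from speech to song was experienced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 20

# Subjective vividness ratings (assumed 1-7 scale) and per-subject contrast
# estimates (song > speech) from a frontotemporal region of interest
vividness = rng.integers(1, 8, size=n_subjects).astype(float)
contrast_song_gt_speech = 0.4 * vividness + rng.normal(scale=1.0, size=n_subjects)

# Does the neural effect co-vary with the strength of the illusion?
result = stats.linregress(vividness, contrast_song_gt_speech)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
```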
Collapse
Affiliation(s)
- Mark Hymers
- York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom.
| | - Garreth Prendergast
- York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom; Audiology and Deafness Group, School of Psychological Sciences, University of Manchester, Manchester, M13 9PL, UK
| | - Can Liu
- Department of Psychology, University of York, YO10 5DD, United Kingdom
| | - Anja Schulze
- Department of Psychology, University of York, YO10 5DD, United Kingdom
| | - Michellie L Young
- Department of Psychology, University of York, YO10 5DD, United Kingdom
| | | | - Gareth J Barker
- Institute of Psychiatry, King's College London, SE5 8AF, United Kingdom
| | - Rebecca E Millman
- York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom
| |
Collapse
|