1
Choudhari V, Han C, Bickel S, Mehta AD, Schevon C, McKhann GM, Mesgarani N. Brain-Controlled Augmented Hearing for Spatially Moving Conversations in Multi-Talker Environments. Adv Sci (Weinh) 2024; 11:e2401379. PMID: 39248654; PMCID: PMC11538705; DOI: 10.1002/advs.202401379.
Abstract
Focusing on a specific conversation amidst multiple interfering talkers is challenging, especially for those with hearing loss. Brain-controlled assistive hearing devices aim to alleviate this problem by enhancing the attended speech based on the listener's neural signals, using auditory attention decoding (AAD). Departing from conventional AAD studies that relied on oversimplified scenarios with stationary talkers, this work presents a realistic AAD task in which multiple talkers take turns while continuously moving in space against background noise. Invasive electroencephalography (iEEG) data were collected from three neurosurgical patients as they focused on one of two moving conversations. An enhanced brain-controlled assistive hearing system is presented that combines AAD with a binaural, speaker-independent speech separation model. The separation model unmixes the talkers while preserving their spatial locations and provides talker trajectories to the neural decoder, improving AAD accuracy. Subjective and objective evaluations show that the proposed system enhances speech intelligibility and facilitates conversation tracking while maintaining spatial cues and voice quality in challenging acoustic environments. This research demonstrates the potential of the approach in real-world scenarios and marks a significant step toward assistive hearing technologies that adapt to the intricate dynamics of everyday auditory experiences.
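For intuition, the core AAD idea described in this abstract can be sketched as a correlation test: compare an envelope reconstructed from neural activity against each candidate talker's speech envelope and pick the better match. This is only a minimal illustration of the principle, not the paper's actual decoder (which additionally exploits talker trajectories from the separation model); the function name and the simple decision rule are assumptions.

```python
import numpy as np

def aad_select_talker(reconstructed_env, env_talker_a, env_talker_b):
    """Correlation-based auditory attention decoding (illustrative sketch).

    Correlates an envelope reconstructed from neural signals with each
    talker's speech envelope and selects the better-matching talker.
    """
    r_a = np.corrcoef(reconstructed_env, env_talker_a)[0, 1]
    r_b = np.corrcoef(reconstructed_env, env_talker_b)[0, 1]
    return "A" if r_a >= r_b else "B"
```

In practice the reconstructed envelope comes from a trained stimulus-reconstruction model over a sliding window, and the correlation is recomputed as the decision updates over time.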
Affiliation(s)
- Vishal Choudhari
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
- Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
- Stephan Bickel
- Hofstra Northwell School of Medicine, Uniondale, NY 11549, USA
- The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
- Ashesh D. Mehta
- Hofstra Northwell School of Medicine, Uniondale, NY 11549, USA
- The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
- Guy M. McKhann
- Department of Neurological Surgery, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY 10027, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY 10027, USA
2
Perez-Heydrich CA, Padova D, Kutten K, Ceritoglu C, Faria A, Ratnanather JT, Agrawal Y. Automatic Segmentation of Heschl Gyrus and Planum Temporale by MRICloud. Otol Neurotol Open 2024; 4:e056. PMID: 39328866; PMCID: PMC11424062; DOI: 10.1097/ono.0000000000000056.
Abstract
Objectives This study used a cloud-based program, MRICloud, which parcellates T1 MRI brain scans using probabilistic classification based on a manually labeled multi-atlas, to create a tool to segment Heschl gyrus (HG) and the planum temporale (PT). Methods MRICloud is an online platform that can automatically segment structural MRIs into 287 labeled brain regions. A 31-brain multi-atlas was manually resegmented to include tags for HG and PT. This modified atlas set, with additional manually labeled regions of interest, acted as a new multi-atlas set and was uploaded to MRICloud. The new automated segmentation of HG and PT was then compared to manual segmentation in MRIs of 10 healthy adults using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and intraclass correlation coefficient (ICC). Results The multi-atlas set was uploaded to MRICloud for public use. Compared to reference manual segmentations of HG and PT, the average DSC across both regions was 0.62 ± 0.07, the HD was 8.10 ± 3.47 mm, and the ICC was 0.83 (0.68-0.91), consistent with appropriate automatic segmentation accuracy. Conclusion This multi-atlas can alleviate the manual segmentation effort and the difficulty of choosing an HG and PT anatomical definition. The protocol is limited by the morphology of the MRI scans needed to make the MRICloud atlas set. Future work will apply this multi-atlas to observe MRI changes in hearing-associated disorders.
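The Dice similarity coefficient used in this evaluation has a standard definition: twice the overlap of two binary masks divided by the sum of their volumes. A minimal sketch (the function name is illustrative, not MRICloud's API):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

As a calibration point: two equal-sized masks that each overlap the other by half give a DSC of 0.5, which puts the reported HG/PT average of 0.62 ± 0.07 in context.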
Affiliation(s)
- Carlos A Perez-Heydrich
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
- Dominic Padova
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
- Kwame Kutten
- Department of Biomedical Engineering, Johns Hopkins University, Center for Imaging Science and Institute for Computational Medicine, Baltimore, MD
- Can Ceritoglu
- Department of Biomedical Engineering, Johns Hopkins University, Center for Imaging Science and Institute for Computational Medicine, Baltimore, MD
- Andreia Faria
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD
- J Tilak Ratnanather
- Department of Biomedical Engineering, Johns Hopkins University, Center for Imaging Science and Institute for Computational Medicine, Baltimore, MD
- Yuri Agrawal
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD
3
Basile GA, Tatti E, Bertino S, Milardi D, Genovese G, Bruno A, Muscatello MRA, Ciurleo R, Cerasa A, Quartarone A, Cacciola A. Neuroanatomical correlates of peripersonal space: bridging the gap between perception, action, emotion and social cognition. Brain Struct Funct 2024; 229:1047-1072. PMID: 38683211; PMCID: PMC11147881; DOI: 10.1007/s00429-024-02781-9.
Abstract
Peripersonal space (PPS) is a construct referring to the portion of space immediately surrounding our bodies, where most interactions between the subject and the environment, including other individuals, take place. Decades of animal and human neuroscience research have revealed that the brain holds a separate representation of this region of space: this distinct spatial representation has evolved to ensure proper relevance to stimuli close to the body and to prompt an appropriate behavioral response. The neural underpinnings of this construct have been thoroughly investigated by different generations of studies involving anatomical and electrophysiological investigations in animal models and, more recently, neuroimaging experiments in human subjects. Here, we provide a comprehensive overview of the anatomical circuitry underlying PPS representation in the human brain. Gathering evidence from multiple areas of research, we identify cortical and subcortical regions involved in specific aspects of PPS encoding. We show how these regions are part of segregated, yet integrated, functional networks within the brain, which are in turn involved in higher-order integration of information. This wide-scale circuitry accounts for the relevance of PPS encoding in multiple brain functions, including not only motor planning and visuospatial attention but also emotional and social cognitive aspects. A complete characterization of these circuits may clarify the derangements of PPS representation observed in different neurological and neuropsychiatric diseases.
Affiliation(s)
- Gianpaolo Antonio Basile
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Elisa Tatti
- Department of Molecular, Cellular & Biomedical Sciences, CUNY School of Medicine, New York, NY 10031, USA
- Salvatore Bertino
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Demetrio Milardi
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Antonio Bruno
- Psychiatry Unit, University Hospital "G. Martino", Messina, Italy
- Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Maria Rosaria Anna Muscatello
- Psychiatry Unit, University Hospital "G. Martino", Messina, Italy
- Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Antonio Cerasa
- S. Anna Institute, Crotone, Italy
- Institute for Biomedical Research and Innovation (IRIB), National Research Council of Italy, Messina, Italy
- Pharmacotechnology Documentation and Transfer Unit, Preclinical and Translational Pharmacology, Department of Pharmacy, Health Science and Nutrition, University of Calabria, Rende, Italy
- Alberto Cacciola
- Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
4
Undurraga JA, Luke R, Van Yper L, Monaghan JJM, McAlpine D. The neural representation of an auditory spatial cue in the primate cortex. Curr Biol 2024; 34:2162-2174.e5. PMID: 38718798; DOI: 10.1016/j.cub.2024.04.034.
Abstract
Humans make use of small differences in the timing of sounds at the two ears, known as interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably similar to that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data resolve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
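For readers unfamiliar with the cue itself: an ITD between two ear signals can be estimated classically as the lag of the peak of their cross-correlation. The sketch below is this generic signal-processing estimator only (all names and the sign convention are assumptions), not the MEG/EEG methodology of the study.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) between two
    equal-length ear signals as the lag of the cross-correlation peak.

    Positive result: the left-ear signal lags the right-ear signal,
    i.e., the source is nearer the right ear (sign convention used here).
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    corr = np.correlate(left, right, mode="full")  # lags -(n-1)..(n-1)
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs
```

Human ITDs span only about ±700 µs, so at a 16 kHz sampling rate the usable lags are just a handful of samples; real estimators interpolate around the peak for sub-sample precision.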
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Interacoustics Research Unit, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Robert Luke
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; The Bionics Institute, 384-388 Albert St., East Melbourne, VIC 3002, Australia
- Lindsey Van Yper
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Institute of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark; Research Unit for ORL, Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, 5230 Odense, Denmark
- Jessica J M Monaghan
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; National Acoustic Laboratories, Australian Hearing Hub, 16 University Avenue, Sydney, NSW 2109, Australia
- David McAlpine
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Macquarie University Hearing and the Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia
5
Peter N, Treyer V, Probst R, Kleinjung T. Auditory Cortical Plasticity in Patients with Single-Sided Deafness Before and After Cochlear Implantation. J Assoc Res Otolaryngol 2024; 25:79-88. PMID: 38253897; PMCID: PMC10907329; DOI: 10.1007/s10162-024-00928-3.
Abstract
PURPOSE This study investigated neuroplastic changes induced by postlingual single-sided deafness (SSD) and the effects of cochlear implantation of the deaf ear. Neural processing of acoustic signals from the normal-hearing ear to the brain was studied before and after implantation using a positron emission tomography (PET)/CT scanner. METHODS Eight patients with postlingual SSD received a cochlear implant (CI) in a prospective clinical trial. Dynamic imaging was performed in a PET/CT scanner using radioactively labeled water ([15O]H2O) to localize changes in regional cerebral blood flow (rCBF) with and without an auditory task of logatomes containing speech-like elements without meaningful context. The normal-hearing ear was stimulated before implantation and after use of the cochlear implant for at least 8 months (mean 13.5, range 8.1-26.6). Eight age- and gender-matched subjects with normal hearing on both sides served as healthy control subjects (HCS). RESULTS When the normal-hearing ear of SSD patients was stimulated before CI implantation, the [15O]H2O-PET showed a more symmetrical rCBF in the auditory regions of both hemispheres in comparison to the HCS. Use of the CI increased the asymmetry index (AI) in six of eight patients, indicating increased activity in the contralateral hemisphere. Non-parametric statistics revealed a significant difference in the AI between patients before CI implantation and HCS (p < .01), which disappeared after CI implantation (p = .195). CONCLUSION The functional neuroimaging data showed a tendency toward normalization of neuronal activity after CI implantation, which supports the effectiveness of CI in SSD patients. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT01749592, December 13, 2012.
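An asymmetry index of the kind reported here is conventionally a normalized contrast between contralateral and ipsilateral activity. The abstract does not state the study's exact formula, so the definition below is the common convention and should be read as an assumption:

```python
def asymmetry_index(contralateral, ipsilateral):
    """Normalized hemispheric asymmetry, bounded in [-1, 1].

    Positive values indicate stronger contralateral activity; 0 means
    perfectly symmetric activation. (Common convention; the study's
    exact formula is not given in the abstract.)
    """
    return (contralateral - ipsilateral) / (contralateral + ipsilateral)
```

Under this convention, symmetric rCBF (as described for the SSD patients pre-implant) yields an index near 0, and a rise toward 1 reflects the contralateral shift reported after CI use.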
Affiliation(s)
- Nicole Peter
- Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Zurich, University of Zurich, Rämistrasse 100, CH-8091 Zurich, Switzerland
- Valerie Treyer
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Rudolf Probst
- Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Zurich, University of Zurich, Rämistrasse 100, CH-8091 Zurich, Switzerland
- Tobias Kleinjung
- Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Zurich, University of Zurich, Rämistrasse 100, CH-8091 Zurich, Switzerland
6
Schneider P, Engelmann D, Groß C, Bernhofs V, Hofmann E, Christiner M, Benner J, Bücher S, Ludwig A, Serrallach BL, Zeidler BM, Turker S, Parncutt R, Seither-Preisler A. Neuroanatomical Disposition, Natural Development, and Training-Induced Plasticity of the Human Auditory System from Childhood to Adulthood: A 12-Year Study in Musicians and Nonmusicians. J Neurosci 2023; 43:6430-6446. PMID: 37604688; PMCID: PMC10500984; DOI: 10.1523/jneurosci.0274-23.2023.
Abstract
Auditory perception is fundamental to human development and communication. However, no long-term studies have been performed on the plasticity of the auditory system as a function of musical training from childhood to adulthood. The long-term interplay between developmental and training-induced neuroplasticity of auditory processing is still unknown. We present results from AMseL (Audio and Neuroplasticity of Musical Learning), the first longitudinal study on the development of the human auditory system from primary school age until late adolescence. This 12-year project combined neurologic and behavioral methods including structural magnetic resonance imaging (MRI), magnetoencephalography (MEG), and auditory tests. A cohort of 112 typically developing participants (51 male, 61 female), classified as "musicians" (n = 66) and "nonmusicians" (n = 46), was tested at five measurement timepoints. We found substantial, stable differences in the morphology of auditory cortex (AC) between musicians and nonmusicians even at the earliest ages, suggesting that musical aptitude is manifested in macroscopic neuroanatomical characteristics. Maturational plasticity led to a continuous increase in white matter myelination and systematic changes of the auditory evoked P1-N1-P2 complex (decreasing latencies, synchronization effects between hemispheres, and amplitude changes) regardless of musical expertise. Musicians showed substantial training-related changes at the neurofunctional level, in particular more synchronized P1 responses and bilaterally larger P2 amplitudes. Musical training had a positive influence on elementary auditory perception (frequency, tone duration, onset ramp) and pattern recognition (rhythm, subjective pitch). The observed interplay between "nature" (stable biological dispositions and natural maturation) and "nurture" (learning-induced plasticity) is integrated into a novel neurodevelopmental model of the human auditory system.

Significance Statement: We present results from AMseL (Audio and Neuroplasticity of Musical Learning), a 12-year longitudinal study on the development of the human auditory system from childhood to adulthood that combined structural magnetic resonance imaging (MRI), magnetoencephalography (MEG), and auditory discrimination and pattern recognition tests. A total of 66 musicians and 46 nonmusicians were tested at five timepoints. Substantial, stable differences in the morphology of auditory cortex (AC) were found between the two groups even at the earliest ages, suggesting that musical aptitude is manifested in macroscopic neuroanatomical characteristics. We also observed neuroplastic and perceptual changes with age and musical practice. This interplay between "nature" (stable biological dispositions and natural maturation) and "nurture" (learning-induced plasticity) is integrated into a novel neurodevelopmental model of the human auditory system.
Affiliation(s)
- Peter Schneider
- Centre for Systematic Musicology, University of Graz, Graz A-8010, Austria
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Latvian Academy of Music, Riga LV-1050, Latvia
- Dorte Engelmann
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Christine Groß
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Latvian Academy of Music, Riga LV-1050, Latvia
- Elke Hofmann
- School of Life Sciences, University of Applied Sciences and Arts Northwestern Switzerland (FHNW), Muttenz CH-4132, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz A-8010, Austria
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Latvian Academy of Music, Riga LV-1050, Latvia
- Jan Benner
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Steffen Bücher
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Alexander Ludwig
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Bettina L Serrallach
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Bettina M Zeidler
- Centre for Systematic Musicology, University of Graz, Graz A-8010, Austria
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg D-69120, Germany
- Sabrina Turker
- Lise Meitner Research Group 'Cognition and Plasticity,' Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig D-04103, Germany
- Richard Parncutt
- Centre for Systematic Musicology, University of Graz, Graz A-8010, Austria
- Annemarie Seither-Preisler
- Centre for Systematic Musicology, University of Graz, Graz A-8010, Austria
- BioTechMed, Graz A-8010, Austria
7
Cossette-Roberge H, Li J, Citherlet D, Nguyen DK. Localizing and lateralizing value of auditory phenomena in seizures. Epilepsy Behav 2023; 145:109327. PMID: 37422934; DOI: 10.1016/j.yebeh.2023.109327.
Abstract
BACKGROUND Auditory seizures (AS) are a rare type of focal seizure. AS are classically thought to involve a seizure onset zone (SOZ) in the temporal lobe, but uncertainties remain about their localizing and lateralizing value. We conducted a narrative literature review with the aim of providing an up-to-date description of the lateralizing and localizing value of AS. METHODS The databases PubMed, Scopus, and Google Scholar were searched for literature on AS in December 2022. All cortical stimulation studies, case reports, and case series were analyzed to assess for auditory phenomena suggestive of AS and to evaluate whether the lateralization and/or localization of the SOZ could be determined. We classified AS according to their semiology (e.g., simple versus complex hallucinations) and the level of evidence with which the SOZ could be predicted. RESULTS A total of 174 cases comprising 200 AS were analyzed from 70 articles. Across all studies, the SOZ of AS was more often in the left (62%) than in the right (38%) hemisphere. AS heard bilaterally followed this trend. Unilaterally heard AS were more often due to an SOZ in the contralateral hemisphere (74%), although they could also be ipsilateral (26%). The SOZ for AS was not limited to the auditory cortex, nor to the temporal lobe. The temporal-lobe areas most frequently involved were the superior temporal gyrus (STG) and mesiotemporal structures. Extratemporal locations included parietal, frontal, insular, and, rarely, occipital structures. CONCLUSION Our review highlights the complexity of AS and their importance in identifying the SOZ. Given the limited data and heterogeneous presentation of AS in the literature, the patterns associated with different AS semiologies warrant further research.
Affiliation(s)
- Hélène Cossette-Roberge
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Neurology Division, Centre Hospitalier de l'Université de Sherbrooke (CHUS), Sherbrooke, QC, Canada
- Jimmy Li
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Neurology Division, Centre Hospitalier de l'Université de Sherbrooke (CHUS), Sherbrooke, QC, Canada
- Daphné Citherlet
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada
- Dang Khoa Nguyen
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, QC, Canada; Department of Neurosciences, Université de Montréal, Montreal, QC, Canada; Neurology Division, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
8
Yan Y, Li M, Jia H, Fu L, Qiu J, Yang W. Amygdala-based functional connectivity mediates the relationship between thought control ability and trait anxiety. Brain Cogn 2023; 168:105976. PMID: 37086555; DOI: 10.1016/j.bandc.2023.105976.
Abstract
Thought control ability (TCA) refers to the ability to exclude unwanted thoughts. Consistent evidence supports a protective effect of TCA on anxiety: higher TCA is associated with lower anxiety. However, the underlying neural mechanism remains unclear. In this study, with a large sample (N = 495), we investigated how seed-based resting-state functional connectivity (RSFC) mediates the relationship between TCA and anxiety. Our behavioural results replicated previous findings that TCA is negatively associated with trait anxiety after controlling for gender, age, and depression. More importantly, the RSFC results revealed that TCA is negatively associated with left amygdala - left frontal pole (LA-LFP), left amygdala - left inferior temporal gyrus (LA-LITG), and left hippocampus - left inferior frontal gyrus (LH-LIFG) connectivity. In addition, a mediation analysis demonstrated that the LA-LFP and LA-LITG connectivity in particular mediated the influence of TCA on trait anxiety. Overall, our study extends previous research by revealing the neural bases underlying the protective effect of TCA on anxiety and pinpointing specific mediating RSFC pathways. Future studies could explore whether targeted TCA training (behavioural or neural) can help alleviate anxiety.
Affiliation(s)
- Yuchi Yan
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
- Min Li
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
- Hui Jia
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
- Lei Fu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
- Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
- Wenjing Yang
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University (SWU), Chongqing 400715, China
9
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. PMID: 36816496; PMCID: PMC9932987; DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans encounter multiple layers of discontinuous multimodal signals, such as head, face, and hand gestures, speech, and non-speech sounds, which must be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat reliably and efficiently? To address this question, we need to move the study of human communication further beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts, and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling, and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre "Cognition, Language, and Deafness", CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre "Cognition, Language, and Deafness", CIRCLeS, Catania, Italy
10
Schneider P, Groß C, Bernhofs V, Christiner M, Benner J, Turker S, Zeidler BM, Seither-Preisler A. Short-term plasticity of neuro-auditory processing induced by musical active listening training. Ann N Y Acad Sci 2022; 1517:176-190. PMID: 36114664; PMCID: PMC9826140; DOI: 10.1111/nyas.14899.
Abstract
Although there is strong evidence for the positive effects of musical training on auditory perception, processing, and training-induced neuroplasticity, there is still little knowledge of the auditory and neurophysiological short-term plasticity induced by listening training. In a sample of 37 adolescents (20 musicians and 17 nonmusicians), compared to a control group matched for age, gender, and musical experience, we conducted a 2-week active listening training (AULOS: Active IndividUalized Listening OptimizationS). Using magnetoencephalography and psychoacoustic tests, the short-term plasticity of auditory evoked fields and auditory skills was examined in a pre-post design, adapted to the individual neuro-auditory profiles. We found bilateral, but more pronounced, plastic changes in the right auditory cortex. Moreover, we observed synchronization of the auditory evoked P1, N1, and P2 responses and threefold larger amplitudes of the late P2 response, similar to the reported effects of long-term musical training. Auditory skills and thresholds benefited largely from the AULOS training. Remarkably, after training, the mean thresholds improved by 12 dB for bone conduction and by 3-4 dB for air conduction. Thus, our findings indicate a strong positive influence of active listening training on neural auditory processing and perception in adolescence, when the auditory system is still developing.
Affiliation(s)
- Peter Schneider
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany; Jazeps Vitols Latvian Academy of Music, Riga, Latvia; Centre for Systematic Musicology, University of Graz, Graz, Austria
- Christine Groß
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Jazeps Vitols Latvian Academy of Music, Riga, Latvia
- Markus Christiner
- Jazeps Vitols Latvian Academy of Music, Riga, Latvia; Centre for Systematic Musicology, University of Graz, Graz, Austria
- Jan Benner
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany
- Sabrina Turker
- Lise Meitner Research Group “Cognition and Plasticity”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
11
Tian X, Liu Y, Guo Z, Cai J, Tang J, Chen F, Zhang H. Cerebral Representation of Sound Localization Using Functional Near-Infrared Spectroscopy. Front Neurosci 2022; 15:739706. [PMID: 34970110 PMCID: PMC8712652 DOI: 10.3389/fnins.2021.739706] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/09/2021] [Indexed: 11/30/2022] Open
Abstract
Sound localization is an essential part of auditory processing. However, the cortical representation of identifying the direction of sound sources presented in the sound field using functional near-infrared spectroscopy (fNIRS) is currently unknown. Therefore, in this study, we used fNIRS to investigate the cerebral representation of different sound sources. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts with two intensity levels (48 dB/58 dB) were randomly applied via five loudspeakers (−90°/−30°/0°/+30°/+90°). Sound localization task performance was collected, and simultaneous signals from auditory processing cortical fields were recorded and analyzed using a support vector machine (SVM). The results showed average classification accuracies of 73.60, 75.60, and 77.40% at −90°/0°, 0°/+90°, and −90°/+90° with high intensity, and 70.60, 73.60, and 78.60% with low intensity. An increase of oxyhemoglobin was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxyhemoglobin (oxy-Hb) response showed different neural activity patterns between the lateral and front sources in the AC and dlPFC. Our results may serve as a basic contribution for further research on the use of fNIRS in spatial auditory studies.
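The decoding step this abstract describes (classifying source direction from cortical hemodynamic responses) can be sketched on synthetic data. The sketch below is illustrative only: it substitutes a simple nearest-centroid classifier for the study's SVM, and the channel layout, effect sizes, and noise level are assumptions, not values from the paper.

```python
import random

random.seed(0)

def synth_trial(direction):
    # Hypothetical oxy-Hb features over 8 channels: channels are assumed
    # to respond more strongly to contralateral sources (illustrative only).
    base = [0.2] * 4 + [0.5] * 4 if direction == "left" else [0.5] * 4 + [0.2] * 4
    return [b + random.gauss(0, 0.1) for b in base]

train = [(synth_trial(d), d) for d in ("left", "right") for _ in range(40)]
test = [(synth_trial(d), d) for d in ("left", "right") for _ in range(20)]

def centroid(trials):
    # Mean feature vector across trials of one class.
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

c_left = centroid([x for x, d in train if d == "left"])
c_right = centroid([x for x, d in train if d == "right"])

def classify(x):
    # Assign the trial to the nearer class centroid (squared Euclidean distance).
    dl = sum((a - b) ** 2 for a, b in zip(x, c_left))
    dr = sum((a - b) ** 2 for a, b in zip(x, c_right))
    return "left" if dl < dr else "right"

accuracy = sum(classify(x) == d for x, d in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the toy decoder performs far above the chance level (50%) that such two-direction contrasts are measured against.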
Affiliation(s)
- Xuexin Tian
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yimeng Liu
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zengzhi Guo
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Jieqing Cai
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jie Tang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Physiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Southern Medical University, Guangzhou, China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongzheng Zhang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China
12
de la Piedra Walter M, Notbohm A, Eling P, Hildebrandt H. Audiospatial evoked potentials for the assessment of spatial attention deficits in patients with severe cerebrovascular accidents. J Clin Exp Neuropsychol 2021; 43:623-636. [PMID: 34592915 DOI: 10.1080/13803395.2021.1984397] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
INTRODUCTION: Neuropsychological assessment of spatial orientation in post-acute patients with large brain lesions is often limited due to additional cognitive disorders like aphasia, apraxia, or reduced responsiveness. METHODS: To cope with these limitations, we developed a paradigm using passive audiospatial event-related potentials (pAERPs): participants merely listened over headphones to horizontally moving tones followed by a short tone ("target"), presented either on the side to which the cue moved or on the opposite side. Two runs of 120 trials were presented, and we registered AERPs with two electrodes mounted at C3 and C4. Nine sub-acute patients with large left-hemisphere (LH) or right-hemisphere (RH) lesions and nine controls participated. RESULTS: Patients had no problems completing the assessment. RH patients showed a reduced N100 for left-sided targets in all conditions. LH patients showed a diminished N100 for invalid trials and contralesional targets. CONCLUSION: Measuring AERPs for moving auditory cues with only two electrodes allows investigation of spatial attention deficits in patients with large RH and LH lesions, who are often unable to perform clinical tests. Our procedure can be implemented easily in acute and rehabilitation settings and might enable investigation of spatial attentional processes even in patients with minimal conscious awareness.
Affiliation(s)
- Annika Notbohm
- Department of Neurology, Klinikum Bremen-Ost, Bremen, Germany
- Paul Eling
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Helmut Hildebrandt
- Department of Neurology, Klinikum Bremen-Ost, Bremen, Germany; Institute of Psychology, University of Oldenburg, Oldenburg, Germany
13
Wang C, Wang Z, Xie B, Shi X, Yang P, Liu L, Qu T, Qin Q, Xing Y, Zhu W, Teipel SJ, Jia J, Zhao G, Li L, Tang Y. Binaural processing deficit and cognitive impairment in Alzheimer's disease. Alzheimers Dement 2021; 18:1085-1099. [PMID: 34569690 DOI: 10.1002/alz.12464] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 07/07/2021] [Accepted: 08/05/2021] [Indexed: 01/08/2023]
Abstract
Speech comprehension in noisy environments depends on central auditory functions, which are vulnerable in Alzheimer's disease (AD). Binaural processing exploits the sounds arriving at the two ears to optimally process degraded sound information; its characteristics in AD are poorly understood. We studied behavioral and electrophysiological alterations in binaural processing among 121 participants (AD = 27; amnestic mild cognitive impairment [aMCI] = 33; subjective cognitive decline [SCD] = 30; cognitively normal [CN] = 31). We observed impairment of binaural processing in AD and aMCI, and detected a U-shaped change in phase synchrony (declining from CN to SCD and to aMCI, but increasing from aMCI to AD). This improvement in phase synchrony at more severe cognitive stages could reflect neural adaptation for binaural processing. Moreover, increased phase synchrony is associated with worse memory during the stages when neural adaptation apparently occurs. These findings support the hypothesis that neural adaptation to a binaural processing deficit may exacerbate cognitive impairment, which could help identify biomarkers and therapeutic targets in AD.
Affiliation(s)
- Changming Wang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Zhibin Wang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Beijia Xie
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Xinrui Shi
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Pengcheng Yang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Lei Liu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Tianshu Qu
- Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
- Qi Qin
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Yi Xing
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
- Wei Zhu
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Stefan J Teipel
- Department of Psychosomatic Medicine, University Medicine Rostock, Rostock, Germany; DZNE, German Center for Neurodegenerative Diseases, Rostock, Germany
- Jianping Jia
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; Beijing Key Laboratory of Geriatric Cognitive Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
- Guoguang Zhao
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
- Yi Tang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
14
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Brain oscillations evoked by sound motion. Brain Res 2020; 1752:147232. [PMID: 33385379 DOI: 10.1016/j.brainres.2020.147232] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/27/2020] [Accepted: 11/30/2020] [Indexed: 11/25/2022]
Abstract
The present study investigates the event-related oscillations underlying the motion-onset response (MOR) evoked by sounds moving at different velocities. EEG was recorded for stationary sounds and for three patterns of sound motion produced by changes in interaural time differences. We explored the effect of motion velocity on the MOR potential, and also on the event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) calculated from the time-frequency decomposition of EEG signals. The phase coherence of slow oscillations increased with an increase in motion velocity similarly to the magnitude of cN1 and cP2 components of the MOR response. The delta-to-alpha inter-trial spectral power remained at the same level up to, but not including, the highest velocity, suggesting that gradual spatial changes within the sound did not induce non-coherent activity. Conversely, the abrupt sound displacement induced theta-alpha oscillations which had low phase consistency. The findings suggest that the MOR potential could be mainly generated by the phase resetting of slow oscillations, and the degree of phase coherence may be considered as a neurophysiological indicator of sound motion processing.
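The inter-trial phase coherence (ITC) measure this abstract relies on has a compact definition: the magnitude of the mean unit-length phasor of the single-trial phases, ranging from ~0 (random phases) to 1 (perfect phase locking). A minimal sketch on synthetic phases; the two phase distributions below are assumptions for illustration, not data from the study.

```python
import cmath
import math
import random

random.seed(1)

def itc(phases):
    """Inter-trial phase coherence: |mean over trials of exp(i*phase)|.
    1.0 means perfectly phase-locked trials; near 0 means random phases."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# Phase-locked trials: phases cluster around pi/4 with small jitter.
locked = [math.pi / 4 + random.gauss(0, 0.3) for _ in range(100)]
# Non-coherent trials: phases drawn uniformly on the circle.
rand = [random.uniform(-math.pi, math.pi) for _ in range(100)]

print(round(itc(locked), 2), round(itc(rand), 2))
```

The contrast mirrors the abstract's finding: gradual, phase-resetting responses yield high ITC, while non-coherent induced activity yields low ITC.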
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
- Nikolay I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
15
St George BV, Cone B. Perceptual and Electrophysiological Correlates of Fixed Versus Moving Sound Source Lateralization. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:3176-3194. [PMID: 32812839 DOI: 10.1044/2020_jslhr-19-00289] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose: The aims of the study were (a) to evaluate the effects of systematically varied factors of stimulus duration, interaural level difference (ILD), and direction on perceptual and electrophysiological metrics of lateralization for fixed versus moving targets and (b) to evaluate the hemispheric activity underlying perception of fixed versus moving auditory targets. Method: Twelve normal-hearing, young adult listeners were evaluated using perceptual and P300 tests of lateralization. Both perceptual and P300 tests utilized stimuli that varied by type (fixed and moving), direction (right and left), duration (100 and 500 ms), and magnitude of ILD (9 and 18 dB). Listeners provided laterality judgments and stimulus-type discrimination (fixed vs. moving) judgments for all combinations of acoustic factors. During P300 recordings, listeners discriminated between left- versus right-directed targets as the other acoustic parameters were varied. Results: ILD magnitude and stimulus type had statistically significant effects on laterality ratings, with larger-magnitude ILDs and fixed targets resulting in greater lateralization. Discriminability between fixed and moving targets depended on stimulus duration and ILD magnitude. ILD magnitude was a significant predictor of P300 amplitude. There was a statistically significant inverse relationship between the perceived velocity of targets and P300 latency. Lateralized targets evoked contralateral hemispheric P300 activity. Moreover, a right-hemisphere enhancement was observed for fixed-type lateralized deviant stimuli. Conclusions: Perceptual and P300 findings indicate that lateralization of auditory movement is highly dependent on temporal integration. Both the behavioral and physiological findings of this study suggest that moving auditory targets with ecologically valid velocities are processed by the central auditory nervous system within a window of temporal integration that is greater than that for fixed auditory targets. Furthermore, these findings lend support for a left hemispatial perceptual bias and right hemispheric dominance for spatial listening.
Affiliation(s)
- Barbara Cone
- Department of Speech, Language, and Hearing Sciences, The University of Arizona, Tucson
16
Effects of prism adaptation on auditory spatial attention in patients with left unilateral spatial neglect: a non-randomized pilot trial. Int J Rehabil Res 2020; 43:228-234. [PMID: 32776764 DOI: 10.1097/mrr.0000000000000413] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
A short period of adaptation to a prismatic shift of the visual field to the right briefly but significantly improves left unilateral spatial neglect. Additionally, prism adaptation affects multiple modalities, including processes of vision, auditory spatial attention, and sound localization. This non-randomized, single-center, controlled trial aimed to examine the immediate effects of prism adaptation on the sound-localization abilities of patients with left unilateral spatial neglect using a simple source localization test. Subjects were divided by self-allocation into a prism-adaptation group (n = 11) and a control group (n = 12). At baseline, patients with left unilateral spatial neglect showed a rightward deviation tendency in the left space. This tendency to right-sided bias in the left space was attenuated after prism adaptation. However, no changes were observed in the right space of patients with left unilateral spatial neglect after prism adaptation, or in the control group. Our results suggest that prism adaptation improves not only vision and proprioception but also auditory attention in the left space of patients with left unilateral spatial neglect. Our findings demonstrate that a single session of prism adaptation can significantly improve sound localization in patients with left unilateral spatial neglect. However, in this study, it was not possible to accurately determine whether the mechanism was a chronic change in head orientation or a readjustment of the spatial representation of the brain; thus, further studies need to be considered.
17
Kopco N, Doreswamy KK, Huang S, Rossi S, Ahveninen J. Cortical auditory distance representation based on direct-to-reverberant energy ratio. Neuroimage 2020; 208:116436. [PMID: 31809885 PMCID: PMC6997045 DOI: 10.1016/j.neuroimage.2019.116436] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 11/30/2019] [Accepted: 12/02/2019] [Indexed: 11/26/2022] Open
Abstract
Auditory distance perception and its neuronal mechanisms are poorly understood, mainly because 1) it is difficult to separate distance processing from intensity processing, 2) multiple intensity-independent distance cues are often available, and 3) the cues are combined in a context-dependent way. A recent fMRI study identified human auditory cortical area representing intensity-independent distance for sources presented along the interaural axis (Kopco et al. PNAS, 109, 11019-11024). For these sources, two intensity-independent cues are available, interaural level difference (ILD) and direct-to-reverberant energy ratio (DRR). Thus, the observed activations may have been contributed by not only distance-related, but also direction-encoding neuron populations sensitive to ILD. Here, the paradigm from the previous study was used to examine DRR-based distance representation for sounds originating in front of the listener, where ILD is not available. In a virtual environment, we performed behavioral and fMRI experiments, combined with computational analyses to identify the neural representation of distance based on DRR. The stimuli varied in distance (15-100 cm) while their received intensity was varied randomly and independently of distance. Behavioral performance showed that intensity-independent distance discrimination is accurate for frontal stimuli, even though it is worse than for lateral stimuli. fMRI activations for sounds varying in frontal distance, as compared to varying only in intensity, increased bilaterally in the posterior banks of Heschl's gyri, the planum temporale, and posterior superior temporal gyrus regions. Taken together, these results suggest that posterior human auditory cortex areas contain neuron populations that are sensitive to distance independent of intensity and of binaural cues relevant for directional hearing.
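The direct-to-reverberant energy ratio (DRR) cue studied here can be computed from a room impulse response by splitting it at an early boundary: the energy before the split is attributed to the direct sound, the rest to reverberation. In the sketch below, the 2.5 ms split time and the toy exponentially decaying impulse response are illustrative assumptions, not parameters from the study.

```python
import math

def drr_db(impulse_response, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio in dB: energy of the first few
    milliseconds of the impulse response vs. everything after.
    The boundary (direct_ms) is a common convention, assumed here."""
    split = int(fs * direct_ms / 1000)
    direct = sum(x * x for x in impulse_response[:split])
    reverb = sum(x * x for x in impulse_response[split:])
    return 10 * math.log10(direct / reverb)

fs = 8000  # sample rate in Hz (toy value)
# Toy impulse response: a strong direct-path spike followed by an
# exponentially decaying reverberant tail.
ir = [1.0] + [0.0] * (int(fs * 0.0025) - 1) + [0.3 * math.exp(-n / 400) for n in range(2000)]
print(round(drr_db(ir, fs), 1))
```

More distant sources have weaker direct energy relative to the (roughly constant) reverberant energy, so DRR decreases with distance independently of overall received intensity, which is what makes it usable as an intensity-independent cue.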
Affiliation(s)
- Norbert Kopco
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia; Hearing Research Center, Boston University, Boston, MA, 02215, USA.
- Keerthi Kumar Doreswamy
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Stephanie Rossi
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
18
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Lateralization of brain responses to auditory motion: A study using single-trial analysis. Neurosci Res 2020; 162:31-44. [PMID: 32001322 DOI: 10.1016/j.neures.2020.01.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Revised: 12/17/2019] [Accepted: 01/10/2020] [Indexed: 11/19/2022]
Abstract
The present study investigates hemispheric asymmetry of the ERPs and low-frequency oscillatory responses evoked in both hemispheres by sound stimuli with delayed motion onset. EEG was recorded for three patterns of sound motion produced by changes in interaural time differences. Event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) were computed from the time-frequency decomposition of the EEG signals. The participants either read books of their choice (passive listening) or indicated the perceived sound trajectories using a graphic tablet (active listening). Our goal was to find out whether the lateralization of the motion-onset response (MOR) and of the oscillatory responses to sound motion was more consistent with the right-hemispheric dominance, contralateral, or neglect model of interhemispheric asymmetry. Apparent dominance of the right hemisphere was found only in the ERSP responses. Stronger contralaterality of the left hemisphere, corresponding to the "neglect model" of asymmetry, was shown by the MOR components and by the phase coherence of the delta-alpha oscillations. Velocity and attention did not consistently change the interhemispheric asymmetry of either the MOR or the oscillatory responses. Our findings demonstrate how the lateralization pattern shown by the MOR potential was interrelated with that of the motion-related single-trial measures.
Affiliation(s)
- L B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia.
- E A Petropavlovskaia
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia.
- V V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia.
- N I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia.
19
Joint Representation of Spatial and Phonetic Features in the Human Core Auditory Cortex. Cell Rep 2020; 24:2051-2062.e2. [PMID: 30134167 DOI: 10.1016/j.celrep.2018.07.076] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Revised: 04/09/2018] [Accepted: 07/22/2018] [Indexed: 12/12/2022] Open
Abstract
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and the spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found local and joint selectivity to spatial and spectrotemporal speech features, where the spatial and spectrotemporal features are organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.
20
Liang Y, Liu B, Li X, Wang P, Wang B. Revealing the differences of the representations of sounds from different directions in the human brain using functional connectivity. Neurosci Lett 2020; 718:134746. [PMID: 31923522 DOI: 10.1016/j.neulet.2020.134746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Revised: 01/03/2020] [Accepted: 01/06/2020] [Indexed: 11/24/2022]
Abstract
Many studies have focused on the processing of sound direction in the human brain; however, as far as we know, it remains unclear whether the representations of sounds from different directions differ. In the present study, 28 subjects were scanned while listening to sounds from different directions. We used whole-brain functional connectivity (FC) analysis to explore which brain regions showed significant changes. Our results revealed that sounds from different directions affected FC in widely distributed regions. Importantly, all regions showed significant differences in FC between the central and eccentric directions, while few regions showed a difference between the left and right directions. These findings reveal differences in the representations of sounds from different directions.
Affiliation(s)
- Yaping Liang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China; College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China.
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, PR China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, PR China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, PR China
21
Kalyvas A, Koutsarnakis C, Komaitis S, Karavasilis E, Christidi F, Skandalakis GP, Liouta E, Papakonstantinou O, Kelekis N, Duffau H, Stranjalis G. Mapping the human middle longitudinal fasciculus through a focused anatomo-imaging study: shifting the paradigm of its segmentation and connectivity pattern. Brain Struct Funct 2019; 225:85-119. [PMID: 31773331 DOI: 10.1007/s00429-019-01987-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 11/14/2019] [Indexed: 12/11/2022]
Abstract
The middle longitudinal fasciculus (MdLF) was initially identified in humans as a discrete subcortical pathway connecting the superior temporal gyrus (STG) to the angular gyrus (AG). Subsequent anatomo-imaging studies, however, proposed more sophisticated but conflicting connectivity patterns and have created a vague perception of its functional anatomy. Our aim was, therefore, to investigate the ambiguous structural architecture of this tract through focused cadaveric dissections augmented by a tailored DTI protocol in healthy participants from the Human Connectome dataset. Three segments and connectivity patterns were consistently recorded: the MdLF-I, connecting the dorsolateral temporal pole (TP) and STG to the superior parietal lobule/precuneus through Heschl's gyrus; the MdLF-II, connecting the dorsolateral TP and the STG with the parieto-occipital area through the posterior transverse gyri; and the MdLF-III, connecting the most anterior part of the TP to the posterior border of the occipital lobe through the AG. The lack of an established termination pattern in the AG and the fact that no significant leftward asymmetry is disclosed tend to shift the paradigm away from language function. Conversely, the theory of "where" and "what" auditory pathways, the essential relationship of the MdLF with the auditory cortex, and the functional role of the cortical areas implicated in its connectivity tend to shift the paradigm towards auditory function. Allegedly, the MdLF-I and MdLF-II segments could underpin the perception of auditory representations, whereas the MdLF-III could potentially subserve the integration of auditory and visual information.
Affiliation(s)
- Aristotelis Kalyvas
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Department of Neurosurgery, Evangelismos Hospital, National and Kapodistrian University of Athens, Athens, Greece; Department of Anatomy, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Christos Koutsarnakis
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Department of Neurosurgery, Evangelismos Hospital, National and Kapodistrian University of Athens, Athens, Greece; Department of Anatomy, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Spyridon Komaitis
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Department of Neurosurgery, Evangelismos Hospital, National and Kapodistrian University of Athens, Athens, Greece; Department of Anatomy, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Efstratios Karavasilis
- Second Department of Radiology, Attikon Hospital, National and Kapodistrian University of Athens, Athens, Greece
- Foteini Christidi
- First Department of Neurology, Aeginition Hospital, National and Kapodistrian University of Athens, Athens, Greece
- Georgios P Skandalakis
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Department of Anatomy, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Evangelia Liouta
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Hellenic Center for Neurosurgical Research "Petros Kokkalis", Athens, Greece
- Olympia Papakonstantinou
- Second Department of Radiology, Attikon Hospital, National and Kapodistrian University of Athens, Athens, Greece
- Nikolaos Kelekis
- Second Department of Radiology, Attikon Hospital, National and Kapodistrian University of Athens, Athens, Greece
- Hugues Duffau
- Department of Neurosurgery, Montpellier University Medical Center, Gui de Chauliac Hospital, Montpellier, France
- George Stranjalis
- Athens Microneurosurgery Laboratory, Evangelismos Hospital, Athens, Greece; Department of Neurosurgery, Evangelismos Hospital, National and Kapodistrian University of Athens, Athens, Greece; Hellenic Center for Neurosurgical Research "Petros Kokkalis", Athens, Greece
22
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and so must focus our attention on the relevant source in order to segregate it from the competing sources, the so-called 'cocktail party effect'. While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli moving within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. Then, in a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location by the delta phase of the EEG for the speech stimuli.
In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to that of envelope-based decoders. These results suggest a possible dissociation of the delta phase and alpha power of EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
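For readers unfamiliar with the TRF approach, the backward (decoding) model described in this abstract is essentially ridge-regularized linear regression from time-lagged EEG channels to the stimulus trajectory. A minimal NumPy sketch under assumed shapes; the lag window and regularization strength below are illustrative, not the authors' values:

```python
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of the EEG (samples x channels)
    into one design matrix of shape (samples, channels * len(lags))."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        # zero out samples that wrapped around the array edges
        if lag > 0:
            shifted[:lag] = 0.0
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, j * n_channels:(j + 1) * n_channels] = shifted
    return X

def train_decoder(eeg, trajectory, lags, alpha=1.0):
    """Fit ridge-regression weights mapping lagged EEG to the
    source azimuth trajectory (a backward TRF model)."""
    X = lagged_design(eeg, lags)
    gram = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(gram, X.T @ trajectory)

def reconstruct_trajectory(eeg, weights, lags):
    """Apply the trained decoder to (previously unseen) EEG."""
    return lagged_design(eeg, lags) @ weights
```

In practice such a decoder is trained separately on attended and unattended trajectories, and reconstruction accuracy (e.g. Pearson correlation with the true azimuth) is compared between the two models on held-out trials to decode attention.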
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
- Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA.
23
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
24
Kinukawa T, Takeuchi N, Sugiyama S, Nishihara M, Nishiwaki K, Inui K. Properties of echoic memory revealed by auditory-evoked magnetic fields. Sci Rep 2019; 9:12260. [PMID: 31439871 PMCID: PMC6706430 DOI: 10.1038/s41598-019-48796-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 08/12/2019] [Indexed: 11/09/2022] Open
Abstract
We used auditory-evoked magnetic fields to investigate the properties of echoic memory. The sound stimulus was a 1-ms click repeated at 100 Hz for 500 ms, presented every 800 ms. The phase of the sound was shifted by inserting an interaural time delay of 0.49 ms to one side or the other, yielding two sounds, lateralized to the left and to the right. According to the preceding sound, each sound was labeled as D (preceded by a different sound) or S (preceded by the same sound). The D sounds were further grouped into 1D, 2D, and 3D, according to the number of preceding different sounds. The S sounds were similarly grouped into 1S and 2S. The results showed that the preceding event significantly affected the amplitude of the cortical response; although there was no difference between 1S and 2S, the amplitudes for D sounds were greater than those for S sounds. Most importantly, there was a significant amplitude difference between 1S and 1D. These results suggest that sensory memory is formed by a single sound and is immediately replaced by new information. This constantly updating nature of sensory memory is considered to enable it to act as a real-time monitor for new information.
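The D/S labelling scheme described above can be made concrete with a small function. The sequence encoding ('L'/'R' for the two lateralized sounds) is a hypothetical illustration, not taken from the paper:

```python
def label_sounds(sequence):
    """Label each sound by its immediate history: 'nD' if preceded by a run
    of n different sounds, 'nS' if preceded by a run of n identical sounds.
    The first sound has no preceding event and is labelled None."""
    labels = [None]
    for i in range(1, len(sequence)):
        run = 0
        j = i - 1
        if sequence[i] != sequence[i - 1]:
            # count how many consecutive preceding sounds differ from this one
            while j >= 0 and sequence[j] != sequence[i]:
                run += 1
                j -= 1
            labels.append(f"{run}D")
        else:
            # count how many consecutive preceding sounds equal this one
            while j >= 0 and sequence[j] == sequence[i]:
                run += 1
                j -= 1
            labels.append(f"{run}S")
    return labels
```

For the sequence L R R R L this yields [None, '1D', '1S', '2S', '3D'], matching the grouping used for the evoked-response comparison.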
Affiliation(s)
- Tomoaki Kinukawa
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, 466-8550, Japan.
- Nobuyuki Takeuchi
- Neuropsychiatric Department, Aichi Medical University, Nagakute, 480-1195, Japan
- Shunsuke Sugiyama
- Department of Psychiatry and Psychotherapy, Gifu University, Gifu, 501-1193, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, 480-1195, Japan
- Kimitoshi Nishiwaki
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, 466-8550, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Kasugai, 480-0392, Japan; Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan
25
Ozmeral EJ, Eddins DA, Eddins AC. Electrophysiological responses to lateral shifts are not consistent with opponent-channel processing of interaural level differences. J Neurophysiol 2019; 122:737-748. [PMID: 31242052 DOI: 10.1152/jn.00090.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Cortical encoding of auditory space relies on two major peripheral cues, interaural time difference (ITD) and interaural level difference (ILD) of the sounds arriving at a listener's ears. In much of the precortical auditory pathway, ITD and ILD cues are processed independently, and it is assumed that cue integration is a higher order process. However, there remains debate on how ITDs and ILDs are encoded in the cortex and whether they share a common mechanism. The present study used electroencephalography (EEG) to measure evoked cortical potentials from narrowband noise stimuli with imposed binaural cue changes. Previous studies have similarly tested ITD shifts to demonstrate that neural populations broadly favor one spatial hemifield over the other, which is consistent with an opponent-channel model that computes the relative activity between broadly tuned neural populations. However, it is still a matter of debate whether the same coding scheme applies to ILDs and, if so, whether processing the two binaural cues is distributed across similar regions of the cortex. The results indicate that ITD and ILD cues have similar neural signatures with respect to the monotonic responses to shift magnitude; however, the direction of the shift did not elicit responses equally across cues. Specifically, ITD shifts evoked greater responses for outward than inward shifts, independently of the spatial hemifield of the shift, whereas ILD-shift responses were dependent on the hemifield in which the shift occurred. Active cortical structures showed only minor overlap between responses to cues, suggesting the two are not represented by the same pathway. NEW & NOTEWORTHY Interaural time differences (ITDs) and interaural level differences (ILDs) are critical to locating auditory sources in the horizontal plane.
The higher order perceptual feature of auditory space is thought to be encoded together by these binaural differences, yet evidence of their integration in cortex remains elusive. Although present results show some common effects between the two cues, key differences were observed that are not consistent with an ITD-like opponent-channel process for ILD encoding.
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida; Department of Chemical and Biomedical Engineering, University of South Florida, Tampa, Florida
- Ann Clock Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
26
Auditory Localization and Spatial Release From Masking in Children With Suspected Auditory Processing Disorder. Ear Hear 2019; 40:1187-1196. [PMID: 30870241 DOI: 10.1097/aud.0000000000000703] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES We sought to investigate whether children referred to our audiology clinic with a complaint of listening difficulty, that is, suspected of auditory processing disorder (APD), have difficulties localizing sounds in noise and whether they have reduced benefit from spatial release from masking. DESIGN Forty-seven typically hearing children in the age range of 7 to 17 years took part in the study. Twenty-one typically developing (TD) children served as controls, and the other 26 children, referred to our audiology clinic with listening problems, formed the study group: suspected APD (sAPD). The ability to localize a speech target (the word "baseball") was measured in quiet, broadband noise, and speech babble in a hemi-anechoic chamber. Participants stood at the center of a loudspeaker array that delivered the target in a diffuse noise field created by presenting independent noise from four loudspeakers spaced 90° apart starting at 45°. In the noise conditions, the signal-to-noise ratio was varied between -12 and 0 dB in 6-dB steps by keeping the noise level constant at 66 dB SPL and varying the target level. Localization ability was indexed by two metrics, one assessing variability in the lateral plane [lateral scatter (Lscat)] and the other accuracy in the front/back dimension [front/back percent correct (FBpc)]. Spatial release from masking (SRM) was measured using a modified version of the Hearing in Noise Test (HINT). In this HINT paradigm, speech targets were always presented from the loudspeaker at 0°, and a single noise source was presented at 0°, 90°, or 270° at 65 dB A. The SRM was calculated as the difference between the 50% correct HINT speech reception threshold obtained when both speech and noise were collocated at 0° and that obtained when the noise was presented at either 90° or 270°. RESULTS As expected, in both groups, localization in noise improved as a function of signal-to-noise ratio.
Broadband noise caused significantly larger disruption in FBpc than in Lscat when compared with speech babble. There were, however, no group effects or group interactions, suggesting that the children in the sAPD group did not differ significantly from TD children in either localization metric (Lscat and FBpc). While a significant SRM was observed in both groups, there were no group effects or group interactions. Collectively, the data suggest that children in the sAPD group did not differ significantly from the TD group for either binaural measure investigated in the study. CONCLUSIONS As is evident from a few poor performers, some children with listening difficulties may have difficulty in localizing sounds and may not benefit from spatial separation of speech and noise. However, the heterogeneity in APD and the variability in our data do not support the notion that localization is a global APD problem. Future studies that employ a case study design might provide more insights.
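The SRM computation described in the design above is a simple difference of speech reception thresholds (SRTs); a minimal sketch, with hypothetical example values rather than data from the study:

```python
def spatial_release_from_masking(srt_collocated_db, srt_separated_db):
    """Spatial release from masking in dB: the improvement in the
    50%-correct speech reception threshold when the noise is moved
    from the target's position (0 deg) to a lateral position (90/270 deg).
    Positive values mean spatial separation helped."""
    return srt_collocated_db - srt_separated_db

# Hypothetical HINT thresholds (dB SNR), for illustration only:
srm_right = spatial_release_from_masking(-2.5, -8.0)  # noise at 90 deg
srm_left = spatial_release_from_masking(-2.5, -7.0)   # noise at 270 deg
```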
27
Tissieres I, Crottaz-Herbette S, Clarke S. Implicit representation of the auditory space: contribution of the left and right hemispheres. Brain Struct Funct 2019; 224:1569-1582. [PMID: 30848352 DOI: 10.1007/s00429-019-01853-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2018] [Accepted: 02/25/2019] [Indexed: 11/24/2022]
Abstract
Spatial cues contribute to the ability to segregate sound sources and thus facilitate their detection and recognition. This implicit use of spatial cues can be preserved in cases of cortical spatial deafness, suggesting that partially distinct neural networks underlie the explicit sound localization and the implicit use of spatial cues. We addressed this issue by assessing 40 patients, 20 patients with left and 20 patients with right hemispheric damage, for their ability to use auditory spatial cues implicitly in a paradigm of spatial release from masking (SRM) and explicitly in sound localization. The anatomical correlates of their performance were determined with voxel-based lesion-symptom mapping (VLSM). During the SRM task, the target was always presented at the centre, whereas the masker was presented at the centre or at one of the two lateral positions on the right or left side. The SRM effect was absent in some but not all patients; the inability to perceive the target when the masker was at one of the lateral positions correlated with lesions of the left temporo-parieto-frontal cortex or of the right inferior parietal lobule and the underlying white matter. As previously reported, sound localization depended critically on the right parietal and opercular cortex. Thus, explicit and implicit use of spatial cues depends on at least partially distinct neural networks. Our results suggest that the implicit use may rely on the left-dominant position-linked representation of sound objects, which has been demonstrated in previous EEG and fMRI studies.
Affiliation(s)
- Isabel Tissieres
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland.
28
Tissieres I, Crottaz-Herbette S, Clarke S. Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction. Ann Phys Rehabil Med 2018; 61:386-394. [DOI: 10.1016/j.rehab.2018.05.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Revised: 05/05/2018] [Accepted: 05/06/2018] [Indexed: 11/26/2022]
29
Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex. J Neurosci 2018; 38:8574-8587. [PMID: 30126968 DOI: 10.1523/jneurosci.0587-18.2018] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 07/09/2018] [Accepted: 07/19/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements. SIGNIFICANCE STATEMENT According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies.
Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
30
Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018; 181:683-691. [PMID: 30053517 DOI: 10.1016/j.neuroimage.2018.07.054] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Revised: 07/10/2018] [Accepted: 07/23/2018] [Indexed: 12/29/2022] Open
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones which was spectro-temporally modified to be perceived as randomly moving on a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by 1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and 2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that we can decode sound trajectory with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse train stimuli containing only interaural level and time differences (ILDs and ITDs respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification. This supports the notion of integrated processing of ILD and ITD at the cortical level. 
These results are particularly relevant for application in devices such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
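The two EEG features used for decoding above, delta phase (<2 Hz) and alpha power (8-12 Hz), can be extracted per channel by band-limiting the signal and taking its analytic signal. A NumPy-only sketch (band edges from the abstract; the FFT-mask filter and analytic-signal construction are an illustrative implementation, not the authors' pipeline):

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Zero-phase band-pass of a 1-D signal via FFT masking (keep lo-hi Hz)."""
    n = len(x)
    freqs = np.abs(np.fft.fftfreq(n, 1 / fs))
    X = np.fft.fft(x)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.ifft(X).real

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def band_features(x, fs):
    """Return delta-band (<2 Hz) instantaneous phase and alpha-band
    (8-12 Hz) instantaneous power for one EEG channel."""
    delta = fft_bandpass(x, fs, 0.0, 2.0)
    alpha = fft_bandpass(x, fs, 8.0, 12.0)
    delta_phase = np.angle(analytic_signal(delta))
    alpha_power = np.abs(analytic_signal(alpha)) ** 2
    return delta_phase, alpha_power
```

These per-channel features would then feed the trajectory decoder in place of the raw EEG.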
31
The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018; 38:3252-3264. [PMID: 29507148 DOI: 10.1523/jneurosci.2530-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2017] [Revised: 02/11/2018] [Accepted: 02/14/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions.
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
32
Salminen NH, Jones SJ, Christianson GB, Marquardt T, McAlpine D. A common periodic representation of interaural time differences in mammalian cortex. Neuroimage 2018; 167:95-103. [PMID: 29122721 PMCID: PMC5854251 DOI: 10.1016/j.neuroimage.2017.11.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Revised: 10/01/2017] [Accepted: 11/04/2017] [Indexed: 11/16/2022] Open
Abstract
Binaural hearing, the ability to detect small differences in the timing and level of sounds at the two ears, underpins the ability to localize sound sources along the horizontal plane, and is important for decoding complex spatial listening environments into separate objects – a critical factor in ‘cocktail-party listening’. For human listeners, the most important spatial cue is the interaural time difference (ITD). Despite many decades of neurophysiological investigations of ITD sensitivity in small mammals, and computational models aimed at accounting for human perception, a lack of concordance between these studies has hampered our understanding of how the human brain represents and processes ITDs. Further, neural coding of spatial cues might depend on factors such as head-size or hearing range, which differ considerably between humans and commonly used experimental animals. Here, using magnetoencephalography (MEG) in human listeners, and electro-corticography (ECoG) recordings in guinea pig—a small mammal representative of a range of animals in which ITD coding has been assessed at the level of single-neuron recordings—we tested whether processing of ITDs in human auditory cortex accords with a frequency-dependent periodic code of ITD reported in small mammals, or whether alternative or additional processing stages implemented in psychoacoustic models of human binaural hearing must be assumed. Our data were well accounted for by a model consisting of periodically tuned ITD-detectors, and were highly consistent across the two species. The results suggest that the representation of ITD in human auditory cortex is similar to that found in other mammalian species, a representation in which neural responses to ITD are determined by phase differences relative to sound frequency rather than, for instance, the range of ITDs permitted by head size or the absolute magnitude or direction of ITD. 
ITD tuning is studied in human MEG and guinea pig ECoG with identical stimuli. Auditory cortical tuning to ITD is highly consistent across species. Results are consistent with a periodic, frequency-dependent code.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Dept. of Neuroscience and Biomedical Engineering, MEG Core, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland.
- Simon J Jones
- UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK
- David McAlpine
- UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK; Dept of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia
33
Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018; 98:262-268. [PMID: 29183630 PMCID: PMC5771843 DOI: 10.1016/j.cortex.2017.10.020] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 08/19/2017] [Accepted: 10/12/2017] [Indexed: 10/18/2022]
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there aren't really more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
Affiliation(s)
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany.
34
Anisotropy of lateral peripersonal space is linked to handedness. Exp Brain Res 2017; 236:609-618. [DOI: 10.1007/s00221-017-5158-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 12/14/2017] [Indexed: 11/27/2022]
35
The role of auditory cortex in the spatial ventriloquism aftereffect. Neuroimage 2017; 162:257-268. [PMID: 28889003 DOI: 10.1016/j.neuroimage.2017.09.002] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Revised: 08/15/2017] [Accepted: 09/01/2017] [Indexed: 11/21/2022] Open
Abstract
Cross-modal recalibration allows the brain to maintain coherent sensory representations of the world. Using functional magnetic resonance imaging (fMRI), the present study aimed at identifying the neural mechanisms underlying recalibration in an audiovisual ventriloquism aftereffect paradigm. Participants performed a unimodal sound localization task, before and after they were exposed to adaptation blocks, in which sounds were paired with spatially disparate visual stimuli offset by 14° to the right. Behavioral results showed a significant rightward shift in sound localization following adaptation, indicating a ventriloquism aftereffect. Regarding fMRI results, left and right planum temporale (lPT/rPT) were found to respond more to contralateral sounds than to central sounds at pretest. Contrasting posttest with pretest blocks revealed significantly enhanced fMRI-signals in space-sensitive lPT after adaptation, matching the behavioral rightward shift in sound localization. Moreover, a region-of-interest analysis in lPT/rPT revealed that the lPT activity correlated positively with the localization shift for right-side sounds, whereas rPT activity correlated negatively with the localization shift for left-side and central sounds. Finally, using functional connectivity analysis, we observed enhanced coupling of the lPT with left and right inferior parietal areas as well as left motor regions following adaptation and a decoupling of lPT/rPT with contralateral auditory cortex, which scaled with participants' degree of adaptation. Together, the fMRI results suggest that cross-modal spatial recalibration is accomplished by an adjustment of unisensory representations in low-level auditory cortex. Such persistent adjustments of low-level sensory representations seem to be mediated by the interplay with higher-level spatial representations in parietal cortex.
36
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357 DOI: 10.1073/pnas.1707522114] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
37
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. [PMID: 28472038 PMCID: PMC5417421 DOI: 10.1371/journal.pbio.2001379] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 04/07/2017] [Indexed: 12/25/2022] Open
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
38
Cortical Representation of Interaural Time Difference Is Impaired by Deafness in Development: Evidence from Children with Early Long-term Access to Sound through Bilateral Cochlear Implants Provided Simultaneously. J Neurosci 2017; 37:2349-2361. [PMID: 28123078 DOI: 10.1523/jneurosci.2538-16.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Revised: 12/21/2016] [Accepted: 01/18/2017] [Indexed: 11/21/2022] Open
Abstract
Accurate use of interaural time differences (ITDs) for spatial hearing may require access to bilateral auditory input during sensitive periods in human development. Providing bilateral cochlear implants (CIs) simultaneously promotes symmetrical development of bilateral auditory pathways but does not support normal ITD sensitivity. Thus, although binaural interactions are established by bilateral CIs in the auditory brainstem, potential deficits in cortical processing of ITDs remain. Cortical ITD processing in children with simultaneous bilateral CIs and normal hearing with similar time-in-sound was explored in the present study. Cortical activity evoked by bilateral stimuli with varying ITDs (0, ±0.4, ±1 ms) was recorded using multichannel electroencephalography. Source analyses indicated dominant activity in the right auditory cortex in both groups but limited ITD processing in children with bilateral CIs. In normal-hearing children, adult-like processing patterns were found underlying the immature P1 (∼100 ms) response peak with reduced activity in the auditory cortex ipsilateral to the leading ITD. Further, the left cortex showed a stronger preference than the right cortex for stimuli leading from the contralateral hemifield. By contrast, children with CIs demonstrated reduced ITD-related changes in both auditory cortices. Decreased parieto-occipital activity, possibly involved in spatial processing, was also revealed in children with CIs. Thus, simultaneous bilateral implantation in young children maintains right cortical dominance during binaural processing but does not fully overcome effects of deafness using present CI devices. 
Protection of bilateral pathways through simultaneous implantation might be capitalized on for ITD processing with signal processing advances, which more consistently represent binaural timing cues. SIGNIFICANCE STATEMENT Multichannel electroencephalography demonstrated impairment of binaural processing in children who are deaf despite early access to bilateral auditory input by first finding that foundations for binaural hearing are normally established during early stages of cortical development. Although 4- to 7-year-old children with normal hearing had immature cortical responses, adult patterns in cortical coding of binaural timing cues were measured. Second, children receiving two cochlear implants in the same surgery maintained normal-like input from both ears, but this did not support significant effects of binaural timing cues in either auditory cortex. Deficits in parieto-occipital areas further suggested impairment in spatial processing. Results indicate that cochlear implants working independently in each ear do not fully overcome deafness-related binaural processing deficits, even after long-term experience.
39
Salmi J, Koistinen OP, Glerean E, Jylänki P, Vehtari A, Jääskeläinen IP, Mäkelä S, Nummenmaa L, Nummi-Kuisma K, Nummi I, Sams M. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex. Neuroimage 2016; 157:108-117. [PMID: 27932074 DOI: 10.1016/j.neuroimage.2016.12.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2016] [Revised: 11/02/2016] [Accepted: 12/03/2016] [Indexed: 11/25/2022] Open
Abstract
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimuli modulated activity in bilateral MTG (speech), the lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively.
Affiliation(s)
- Juha Salmi
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland; Advanced Magnetic Imaging (AMI) Centre, School of Science, Aalto University, Finland; Institute of Behavioural Sciences, Division of Cognitive and Neuropsychology, University of Helsinki, Finland
- Olli-Pekka Koistinen
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Pasi Jylänki
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Aki Vehtari
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Iiro P Jääskeläinen
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Sasu Mäkelä
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Lauri Nummenmaa
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland; Turku PET Centre, University of Turku, Finland
- Ilari Nummi
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland.
40
Undurraga JA, Haywood NR, Marquardt T, McAlpine D. Neural Representation of Interaural Time Differences in Humans-an Objective Measure that Matches Behavioural Performance. J Assoc Res Otolaryngol 2016; 17:591-607. [PMID: 27628539 PMCID: PMC5112218 DOI: 10.1007/s10162-016-0584-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2016] [Accepted: 08/15/2016] [Indexed: 12/22/2022] Open
Abstract
Humans, and many other species, exploit small differences in the timing of sounds at the two ears (interaural time difference, ITD) to locate their source and to enhance their detection in background noise. Despite their importance in everyday listening tasks, however, the neural representation of ITDs in human listeners remains poorly understood, and few studies have assessed ITD sensitivity at a resolution similar to that reported perceptually. Here, we report an objective measure of ITD sensitivity in electroencephalography (EEG) signals to abrupt modulations in the interaural phase of amplitude-modulated low-frequency tones. Specifically, we measured following responses to amplitude-modulated sinusoidal signals (520-Hz carrier) in which the stimulus phase at each ear was manipulated to produce discrete interaural phase modulations at minima in the modulation cycle: interaural phase modulation following responses (IPM-FRs). The depth of the interaural phase modulation (IPM) was defined by the sign and the magnitude of the interaural phase difference (IPD) transition, which was symmetric around zero. Seven IPM depths were assessed over the range of ±22° to ±157°, corresponding to ITDs largely within the range experienced by human listeners under natural listening conditions (120 to 841 μs). The magnitude of the IPM-FR was maximal for IPM depths in the range of ±67.6° to ±112.6° and correlated well with performance in a behavioural experiment in which listeners were required to discriminate sounds containing IPMs from those with only static IPDs. The IPM-FR provides a sensitive measure of binaural processing in the human brain and has the potential to assess temporal binaural processing.
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia.
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK.
- Nick R Haywood
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
- Torsten Marquardt
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
- David McAlpine
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
41
Tuning to Binaural Cues in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:37-53. [PMID: 26466943 DOI: 10.1007/s10162-015-0546-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 09/25/2015] [Indexed: 10/22/2022] Open
Abstract
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
42
Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Revised: 08/25/2016] [Accepted: 09/13/2016] [Indexed: 10/21/2022]
43
Disrupted brain metabolic connectivity in a 6-OHDA-induced mouse model of Parkinson's disease examined using persistent homology-based analysis. Sci Rep 2016; 6:33875. [PMID: 27650055 PMCID: PMC5030651 DOI: 10.1038/srep33875] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 09/05/2016] [Indexed: 11/26/2022] Open
Abstract
Movement impairments in Parkinson’s disease (PD) are caused by the degeneration of dopaminergic neurons and the consequent disruption of connectivity in the cortico-striatal-thalamic loop. This study evaluated brain metabolic connectivity in a 6-Hydroxydopamine (6-OHDA)-induced mouse model of PD using 18F-fluorodeoxy glucose positron emission tomography (FDG PET). Fourteen PD-model mice and ten control mice were used for the analysis. Voxel-wise t-tests on FDG PET results yielded no significant regional metabolic differences between the PD and control groups. However, the PD group showed lower correlations between the right caudoputamen and the left caudoputamen and right visual cortex. Further network analyses based on the threshold-free persistent homology framework revealed that brain networks were globally disrupted in the PD group, especially between the right auditory cortex and bilateral cortical structures and the left caudoputamen. In conclusion, regional glucose metabolism of PD was preserved, but the metabolic connectivity of the cortico-striatal-thalamic loop was globally impaired in PD.
44
Renvall H, Staeren N, Barz CS, Ley A, Formisano E. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes. Front Neurosci 2016; 10:254. [PMID: 27375416 PMCID: PMC4894904 DOI: 10.3389/fnins.2016.00254] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 05/23/2016] [Indexed: 11/13/2022] Open
Abstract
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli.
Affiliation(s)
- Hanna Renvall
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto Neuroimaging, Magnetoencephalography (MEG) Core, Aalto University, Espoo, Finland
- Noël Staeren
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Claudia S Barz
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Institute for Neuroscience and Medicine, Research Centre Juelich, Juelich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Anke Ley
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Center for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands
45
Asymmetries in the representation of space in the human auditory cortex depend on the global stimulus context. Neuroreport 2016; 27:242-6. [DOI: 10.1097/wnr.0000000000000527] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
46
Integrated processing of spatial cues in human auditory cortex. Hear Res 2015; 327:143-52. [DOI: 10.1016/j.heares.2015.06.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2015] [Revised: 05/29/2015] [Accepted: 06/02/2015] [Indexed: 11/17/2022]
47
Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. [DOI: 10.1016/j.neuropsychologia.2015.06.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2014] [Revised: 06/07/2015] [Accepted: 06/10/2015] [Indexed: 01/29/2023]
48
Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex. Neuroimage 2015; 120:456-66. [PMID: 26163805 PMCID: PMC4589528 DOI: 10.1016/j.neuroimage.2015.07.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2015] [Revised: 06/08/2015] [Accepted: 07/03/2015] [Indexed: 11/20/2022] Open
Abstract
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55–85 dB SPL, binaural 55–85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values.
49
Ding H, Qin W, Liang M, Ming D, Wan B, Li Q, Yu C. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness. Brain 2015; 138:2750-65. [PMID: 26070981 DOI: 10.1093/brain/awv165] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2014] [Accepted: 04/18/2015] [Indexed: 11/13/2022] Open
Abstract
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitude also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. 
Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article.
Affiliation(s)
- Hao Ding
- 1 Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Wen Qin
- 2 Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
- Meng Liang
- 3 School of Medical Imaging, Tianjin Medical University, Tianjin 300070, People's Republic of China
- Dong Ming
- 1 Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Baikun Wan
- 1 Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Qiang Li
- 4 Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, People's Republic of China
- Chunshui Yu
- 2 Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
|
50
|
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873 DOI: 10.1016/j.neuroimage.2015.06.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022] Open
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada.
|