1. Clarke S, Da Costa S, Crottaz-Herbette S. Dual Representation of the Auditory Space. Brain Sci 2024; 14:535. PMID: 38928534; PMCID: PMC11201621; DOI: 10.3390/brainsci14060535.
Abstract
Auditory spatial cues contribute to two distinct functions: one leads to the explicit localization of sound sources, while the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented the dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here the evidence pertaining to brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, the encoding of emotional valence benefits from location-linked representation as well.
Affiliation(s)
- Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland; (S.D.C.); (S.C.-H.)
2. Kopal J, Kumar K, Shafighi K, Saltoun K, Modenato C, Moreau CA, Huguet G, Jean-Louis M, Martin CO, Saci Z, Younis N, Douard E, Jizi K, Beauchamp-Chatel A, Kushan L, Silva AI, van den Bree MBM, Linden DEJ, Owen MJ, Hall J, Lippé S, Draganski B, Sønderby IE, Andreassen OA, Glahn DC, Thompson PM, Bearden CE, Zatorre R, Jacquemont S, Bzdok D. Using rare genetic mutations to revisit structural brain asymmetry. Nat Commun 2024; 15:2639. PMID: 38531844; PMCID: PMC10966068; DOI: 10.1038/s41467-024-46784-w.
Abstract
Asymmetry between the left and right hemisphere is a key feature of brain organization. Hemispheric functional specialization underlies some of the most advanced human-defining cognitive operations, such as articulated language, perspective taking, or rapid detection of facial cues. Yet, genetic investigations into brain asymmetry have mostly relied on common variants, which typically exert small effects on brain-related phenotypes. Here, we leverage rare genomic deletions and duplications to study how genetic alterations reverberate in human brain and behavior. We designed a pattern-learning approach to dissect the impact of eight high-effect-size copy number variations (CNVs) on brain asymmetry in a multi-site cohort of 552 CNV carriers and 290 non-carriers. Isolated multivariate brain asymmetry patterns spotlighted regions typically thought to subserve lateralized functions, including language, hearing, as well as visual, face and word recognition. Planum temporale asymmetry emerged as especially susceptible to deletions and duplications of specific gene sets. Targeted analysis of common variants through genome-wide association study (GWAS) consolidated partly diverging genetic influences on the right versus left planum temporale structure. In conclusion, our gene-brain-behavior data fusion highlights the consequences of genetically controlled brain lateralization on uniquely human cognitive capacities.
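Analyses of structural asymmetry like this one typically start from a normalized left-right index per region. A minimal sketch, assuming the common (L − R) / mean convention (the study's exact formula may differ) and toy planum temporale surface areas:

```python
import numpy as np

def asymmetry_index(left, right):
    # Normalized hemispheric difference: positive = leftward asymmetry
    return (left - right) / ((left + right) / 2.0)

# Toy surface areas in mm^2 for two subjects (hypothetical values)
left_pt = np.array([2400.0, 2100.0])
right_pt = np.array([2000.0, 2200.0])
ai = asymmetry_index(left_pt, right_pt)  # leftward for subject 1, rightward for subject 2
```

The normalization by the bilateral mean makes indices comparable across regions of very different size.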
Affiliation(s)
- Jakub Kopal
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Kuldeep Kumar
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Kimia Shafighi
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Karin Saltoun
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Claudia Modenato
- LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Clara A Moreau
- Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Guillaume Huguet
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Zohra Saci
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Nadine Younis
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Elise Douard
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Khadije Jizi
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Alexis Beauchamp-Chatel
- Institut universitaire en santé mentale de Montréal, University of Montréal, Montréal, Canada
- Department of Psychiatry, University of Montreal, Montréal, Canada
- Leila Kushan
- Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Ana I Silva
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Marianne B M van den Bree
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- David E J Linden
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- Michael J Owen
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Jeremy Hall
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Sarah Lippé
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Bogdan Draganski
- LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Neurology Department, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ida E Sønderby
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway
- Department of Medical Genetics, Oslo University Hospital, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- Ole A Andreassen
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- David C Glahn
- Department of Psychiatry, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Paul M Thompson
- Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Carrie E Bearden
- Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Robert Zatorre
- International Laboratory for Brain, Music and Sound Research, Montreal, QC, Canada
- TheNeuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
- Sébastien Jacquemont
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Department of Pediatrics, University of Montréal, Montréal, Quebec, Canada
- Danilo Bzdok
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- TheNeuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
3. Ning M, Duwadi S, Yücel MA, von Lühmann A, Boas DA, Sen K. fNIRS dataset during complex scene analysis. Front Hum Neurosci 2024; 18:1329086. PMID: 38576451; PMCID: PMC10991699; DOI: 10.3389/fnhum.2024.1329086.
Affiliation(s)
- Matthew Ning
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Sudan Duwadi
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Meryem A. Yücel
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Alexander von Lühmann
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Intelligent Biomedical Sensing (IBS) Lab, Technical University Berlin, Berlin, Germany
- David A. Boas
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Kamal Sen
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
4. Ning M, Duwadi S, Yücel MA, Von Lühmann A, Boas DA, Sen K. fNIRS Dataset During Complex Scene Analysis. bioRxiv [Preprint] 2024:2024.01.23.576715. PMID: 38328139; PMCID: PMC10849700; DOI: 10.1101/2024.01.23.576715.
Abstract
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location. The ability to decode the attended spatial location would facilitate brain-computer interfaces for complex scene analysis (CSA). Here, we investigated the capability of functional near-infrared spectroscopy (fNIRS) to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. We targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intraparietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT). All of these regions were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. To date, fNIRS has not been applied to decode auditory and visual spatial attention during CSA, and thus no such dataset exists yet. This report provides an open-access fNIRS dataset that can be used to develop, test, and compare machine learning algorithms for classifying attended locations based on fNIRS signals on a single-trial basis.
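The single-trial classification task this dataset is meant to support can be sketched with a generic pipeline. Everything below (channel count, effect size, the linear SVM) is illustrative synthetic data, not the authors' benchmark:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_channels = 120, 16
# Toy labels: 0 = attend left, 1 = attend right
locations = rng.integers(0, 2, size=n_trials)
# Synthetic per-trial fNIRS features (e.g., mean HbO per channel), with a
# small location-dependent shift on the first four channels
X = rng.normal(size=(n_trials, n_channels))
X[:, :4] += np.where(locations[:, None] == 1, 0.8, -0.8)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, locations, cv=5)  # chance level is 0.5
```

Cross-validated accuracy well above 0.5 is the usual criterion that the attended location is decodable from single trials.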
Affiliation(s)
- Matthew Ning
- Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Sudan Duwadi
- Neurophotonics Center, Department of Biomedical Engineering, Boston University
- Meryem A. Yücel
- Neurophotonics Center, Department of Biomedical Engineering, Boston University
- Alexander Von Lühmann
- Neurophotonics Center, Department of Biomedical Engineering, Boston University
- BIFOLD – Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany
- Intelligent Biomedical Sensing (IBS) Lab, Technische Universität Berlin, 10587 Berlin, Germany
- David A. Boas
- Neurophotonics Center, Department of Biomedical Engineering, Boston University
- Kamal Sen
- Neurophotonics Center, Department of Biomedical Engineering, Boston University
5. Tuckute G, Feather J, Boebinger D, McDermott JH. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLoS Biol 2023; 21:e3002366. PMID: 38091351; PMCID: PMC10718467; DOI: 10.1371/journal.pbio.3002366.
Abstract
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
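Model-brain correspondence of this kind is commonly scored by regressing measured voxel responses on a model stage's activations and correlating predictions with held-out data. A schematic version on synthetic data (the study's actual regression and cross-validation details differ):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 200, 50, 10
activations = rng.normal(size=(n_sounds, n_units))   # one model stage, per sound
true_weights = rng.normal(size=(n_units, n_voxels))
# Synthetic fMRI responses: linear readout of the features plus noise
voxels = activations @ true_weights + 0.5 * rng.normal(size=(n_sounds, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(activations, voxels, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
# Predictivity: correlation of predicted vs. measured response, per voxel
r = np.array([np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)])
```

Repeating this per model stage and per brain region, then asking which stage predicts which region best, is what yields the middle-stages-to-primary-cortex correspondence described above.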
Affiliation(s)
- Greta Tuckute
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Jenelle Feather
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Dana Boebinger
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
- University of Rochester Medical Center, Rochester, New York, United States of America
- Josh H. McDermott
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
6. Rauschecker JP, Afsahi RK. Anatomy of the auditory cortex then and now. J Comp Neurol 2023; 531:1883-1892. PMID: 38010215; PMCID: PMC10872810; DOI: 10.1002/cne.25560.
Abstract
Using neuroanatomical investigations in the macaque, Deepak Pandya and his colleagues have established the framework for auditory cortex organization, with subdivisions into core and belt areas. This has aided subsequent neurophysiological and imaging studies in monkeys and humans, and a nomenclature building on Pandya's work has also been adopted by the Human Connectome Project. The foundational work by Pandya and his colleagues is highlighted here in the context of subsequent and ongoing studies on the functional anatomy and physiology of auditory cortex in primates, including humans, and their relevance for understanding cognitive aspects of speech and language.
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
- Rosstin K Afsahi
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
7. Cao E, Ma D, Nayak S, Duong TQ. Deep learning combining FDG-PET and neurocognitive data accurately predicts MCI conversion to Alzheimer's dementia 3-year post MCI diagnosis. Neurobiol Dis 2023; 187:106310. PMID: 37769746; DOI: 10.1016/j.nbd.2023.106310.
Abstract
INTRODUCTION: This study reports a novel deep learning approach to predict mild cognitive impairment (MCI) conversion to Alzheimer's dementia (AD) within three years using whole-brain fluorodeoxyglucose (FDG) positron emission tomography (PET) and cognitive scores (CS).
METHODS: This analysis consisted of 150 normal controls (CN), 257 MCI, and 205 AD subjects from ADNI. FDG-PET and CS were obtained at MCI diagnosis to predict AD conversion within three years of MCI diagnosis using convolutional neural networks.
RESULTS: Neurocognitive scores predicted better than FDG-PET per se, but the best model was a combination of FDG-PET, age, and neurocognitive data, yielding an AUC of 0.785 ± 0.096 and a balanced accuracy of 0.733 ± 0.098. Saliency maps highlighted the putamen, thalamus, inferior frontal gyrus, parietal operculum, precuneus, calcarine cortices, temporal gyrus, and planum temporale as important for prediction.
DISCUSSION: Deep learning accurately predicts MCI conversion to AD and provides neural correlates of brain regions associated with AD conversion.
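The two reported metrics can be computed directly from predicted scores. The toy labels below are illustrative, not the study's data; balanced accuracy averages sensitivity and specificity, which matters when converters and non-converters are imbalanced classes:

```python
import numpy as np

# Toy converter (1) vs. non-converter (0) labels and model scores
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.25, 0.6, 0.15, 0.7, 0.8, 0.4, 0.9])

pos, neg = y_score[y_true == 1], y_score[y_true == 0]
# AUC: probability that a random converter is scored above a random non-converter
auc = (pos[:, None] > neg[None, :]).mean()

y_pred = (y_score >= 0.5).astype(int)
sensitivity = (y_pred[y_true == 1] == 1).mean()
specificity = (y_pred[y_true == 0] == 0).mean()
balanced_accuracy = (sensitivity + specificity) / 2
```

Unlike plain accuracy, balanced accuracy cannot be inflated by simply predicting the majority class.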
Affiliation(s)
- Eric Cao
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY 10467, United States
- Da Ma
- Department of Internal Medicine, Section of Gerontology and Geriatric Medicine, Wake Forest University School of Medicine, Winston-Salem, NC 27109, United States
- Siddharth Nayak
- Department of Radiology, Weill Cornell Medicine, New York, NY 10065, United States
- Tim Q Duong
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY 10467, United States
8. Grisendi T, Clarke S, Da Costa S. Emotional sounds in space: asymmetrical representation within early-stage auditory areas. Front Neurosci 2023; 17:1164334. PMID: 37274197; PMCID: PMC10235458; DOI: 10.3389/fnins.2023.1164334.
Abstract
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7T fMRI, we investigated the impact of sound category (vocalizations; non-vocalizations), emotional valence (positive, neutral, negative), and spatial origin (left, center, right) on the encoding in early-stage auditory areas and in the voice area. The combination of these characteristics resulted in a total of 18 conditions (2 categories × 3 valences × 3 lateralizations), which were presented in a pseudo-randomized order in blocks of 11 different sounds (of the same condition) in 12 distinct runs of 6 min. In addition, two localizers (tonotopy mapping; human vocalizations) were used to define regions of interest. A three-way repeated-measures ANOVA on the BOLD responses revealed bilateral significant effects and interactions in the primary auditory cortex, the lateral early-stage auditory areas, and the voice area. Positive vocalizations presented on the left side yielded greater activity in the ipsilateral and contralateral primary auditory cortex than did neutral or negative vocalizations or any other stimuli at any of the three positions. The right, but not the left, area L3 responded more strongly (i) to positive vocalizations presented ipsi- or contralaterally than to neutral or negative vocalizations presented at the same positions; and (ii) to neutral than to positive or negative non-vocalizations presented contralaterally. Furthermore, comparison with a previous study indicates that spatial cues may render emotional valence more salient within the early-stage auditory areas.
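The repeated-measures logic behind the reported three-way ANOVA can be sketched for a single factor: between-subject variance is partitioned out before testing the condition effect. Toy one-factor data, not the study's 2 × 3 × 3 design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, levels = 12, ["positive", "neutral", "negative"]
k = len(levels)

# Toy per-subject mean BOLD by valence, with a built-in positive-valence boost
subject_offset = rng.normal(0.0, 0.3, size=(n_subj, 1))
valence_effect = np.array([0.4, 0.0, 0.0])
data = subject_offset + valence_effect + rng.normal(0.0, 0.1, size=(n_subj, k))

grand = data.mean()
ss_factor = n_subj * ((data.mean(axis=0) - grand) ** 2).sum()
ss_subject = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_total = ((data - grand) ** 2).sum()
ss_error = ss_total - ss_factor - ss_subject  # factor x subject residual

df_factor, df_error = k - 1, (k - 1) * (n_subj - 1)
F = (ss_factor / df_factor) / (ss_error / df_error)
```

Removing the subject term from the error makes the test sensitive to within-subject condition differences even when baseline BOLD varies widely across participants.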
Affiliation(s)
- Tiffany Grisendi
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sandra Da Costa
- Centre d’Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
9. Kopal J, Kumar K, Shafighi K, Saltoun K, Modenato C, Moreau CA, Huguet G, Jean-Louis M, Martin CO, Saci Z, Younis N, Douard E, Jizi K, Beauchamp-Chatel A, Kushan L, Silva AI, van den Bree MBM, Linden DEJ, Owen MJ, Hall J, Lippé S, Draganski B, Sønderby IE, Andreassen OA, Glahn DC, Thompson PM, Bearden CE, Zatorre R, Jacquemont S, Bzdok D. Using rare genetic mutations to revisit structural brain asymmetry. bioRxiv [Preprint] 2023:2023.04.17.537199. PMID: 37131672; PMCID: PMC10153125; DOI: 10.1101/2023.04.17.537199.
Abstract
Asymmetry between the left and right brain is a key feature of brain organization. Hemispheric functional specialization underlies some of the most advanced human-defining cognitive operations, such as articulated language, perspective taking, or rapid detection of facial cues. Yet, genetic investigations into brain asymmetry have mostly relied on common variants, which typically exert small effects on brain phenotypes. Here, we leverage rare genomic deletions and duplications to study how genetic alterations reverberate in human brain and behavior. We quantitatively dissected the impact of eight high-effect-size copy number variations (CNVs) on brain asymmetry in a multi-site cohort of 552 CNV carriers and 290 non-carriers. Isolated multivariate brain asymmetry patterns spotlighted regions typically thought to subserve lateralized functions, including language, hearing, as well as visual, face and word recognition. Planum temporale asymmetry emerged as especially susceptible to deletions and duplications of specific gene sets. Targeted analysis of common variants through genome-wide association study (GWAS) consolidated partly diverging genetic influences on the right versus left planum temporale structure. In conclusion, our gene-brain-behavior mapping highlights the consequences of genetically controlled brain lateralization on human-defining cognitive traits.
10. Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023; 33:3067-3079. PMID: 35858212; DOI: 10.1093/cercor/bhac261.
Abstract
Previous studies reported that auditory cortices (AC) were mostly activated by sounds coming from the contralateral hemifield. As a result, sound locations could be encoded by integrating opposite activations from both sides of AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions. It was unknown how sound locations were represented in those high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions by voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations not only existed in AC, but also spanned over the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields could be observed in left AC, right AC, and left FEF. Overall, our results demonstrate that left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent hemifield activation representation and a multivariate full-field activation pattern representation.
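The "opponent hemifield" readout can be illustrated with a toy two-channel model: each hemisphere's response ramps toward the contralateral side, and their difference orders azimuths monotonically. The sigmoid shape and its slope are illustrative assumptions, not fitted parameters:

```python
import numpy as np

azimuths = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])  # degrees; negative = left

def hemifield_channel(azimuth_deg, preferred_sign):
    # Toy sigmoid tuning favoring one hemifield (not fitted to data)
    return 1.0 / (1.0 + np.exp(-preferred_sign * azimuth_deg / 30.0))

left_ac = hemifield_channel(azimuths, +1.0)   # left AC prefers right (positive) azimuths
right_ac = hemifield_channel(azimuths, -1.0)  # right AC prefers left (negative) azimuths

# Opponent code: the difference of the two channels is monotonic in azimuth,
# so sound location can be read out from it
opponent = left_ac - right_ac
```

Because each channel alone saturates within its preferred hemifield, it is the difference signal, not either channel by itself, that distinguishes all five locations.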
Affiliation(s)
- Liwei Sun
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Chunlin Li
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Songjian Wang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qian Si
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Meng Lin
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Ningyu Wang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Jun Sun
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Ying Liang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Jing Wei
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Juan Zhang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
11. Wang Y, Lu L, Zou G, Zheng L, Qin L, Zou Q, Gao JH. Disrupted neural tracking of sound localization during non-rapid eye movement sleep. Neuroimage 2022; 260:119490. PMID: 35853543; DOI: 10.1016/j.neuroimage.2022.119490.
Abstract
Spatial hearing in humans is a high-level auditory process that is crucial to rapid sound localization in the environment. Both neurophysiological models from animal studies and neuroimaging evidence from awake human subjects suggest that the localization of auditory objects mainly relies on the posterior auditory cortex. However, whether this cognitive process is preserved during sleep remains unclear. To fill this research gap, we investigated the sleeping brain's capacity to identify sound locations by recording simultaneous electroencephalographic (EEG) and magnetoencephalographic (MEG) signals during wakefulness and non-rapid eye movement (NREM) sleep in human subjects. Using the frequency-tagging paradigm, the subjects were presented with a basic syllable sequence at 5 Hz and a location change that occurred every three syllables, resulting in a sound localization shift at 1.67 Hz. The EEG and MEG signals were used for sleep scoring and neural tracking analyses, respectively. Neural tracking responses at 5 Hz reflecting basic auditory processing were observed during both wakefulness and NREM sleep, although the responses during sleep were weaker than those during wakefulness. Cortical responses at 1.67 Hz, which correspond to the sound location change, were observed during wakefulness regardless of attention to the stimuli but vanished during NREM sleep. These results indicate for the first time that sleep preserves basic auditory processing but disrupts the higher-order brain function of sound localization.
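The two tagged frequencies follow directly from the design: syllables at 5 Hz, with a location change every third syllable, give 5/3 ≈ 1.67 Hz. A quick check on an idealized change-impulse train (the sampling rate and duration are chosen for illustration):

```python
import numpy as np

syllable_rate = 5.0            # syllables per second
syllables_per_location = 3     # location changes every third syllable
location_rate = syllable_rate / syllables_per_location  # ~1.67 Hz

# Idealized impulse train marking location changes over a 60 s recording
fs, dur = 100, 60
t = np.arange(0, dur, 1 / fs)
x = np.zeros_like(t)
change_times = np.arange(0.0, dur, 1.0 / location_rate)
x[np.rint(change_times * fs).astype(int)] = 1.0

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
# Spectral peak in the low-frequency band sits at the location-change rate
band = (freqs > 0) & (freqs < 3.0)
peak = freqs[band][np.argmax(spectrum[band])]
```

Frequency tagging exploits exactly this property: any neural process locked to the location change shows up as a spectral peak at 1.67 Hz, cleanly separated from the 5 Hz syllable response.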
Affiliation(s)
- Yan Wang
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Chinese Institute for Brain Research, Beijing 102206, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Guangyuan Zou
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Li Zheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Lang Qin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Qihong Zou
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
12
Matsubara T, Stufflebeam S, Khan S, Ahveninen J, Hämäläinen M, Goto Y, Maekawa T, Tobimatsu S, Kishida K. Weighted Blind Source Separation Can Decompose the Frequency Mismatch Response by Deviant Concatenation: An MEG Study. Front Neurol 2022; 13:762497. [PMID: 35280282] [PMCID: PMC8916481] [DOI: 10.3389/fneur.2022.762497]
Abstract
The mismatch response (MMR) is thought to be a neurophysiological measure of novel auditory detection that could serve as a translational biomarker of various neurological diseases. When recorded with electroencephalography (EEG) or magnetoencephalography (MEG), the MMR is traditionally extracted by subtracting the event-related potential/field (ERP/ERF) elicited by repetitive "standard" sounds from that elicited by "deviant" sounds occurring randomly within the standard train. However, this subtraction has several problems, including increased noise and the neural adaptation problem. According to the original theory underlying the MMR (i.e., the memory-comparison process), the MMR should be present only in deviant epochs. We therefore proposed a novel method, weighted-BSST/k, which uses only the deviant response to derive the MMR. Deviant concatenation and weight assignment are the primary procedures of weighted-BSST/k and maximize the benefits of time-delayed correlation. We hypothesized that this weighted-BSST/k method highlights responses related to the detection of the deviant stimulus and is more sensitive than independent component analysis (ICA). To test this hypothesis, and the validity and efficacy of weighted-BSST/k in comparison with ICA (infomax), we evaluated both methods in 12 healthy adults. Auditory stimuli were presented at a constant rate of 2 Hz. Frequency MMRs at the sensor level were obtained from the bilateral temporal lobes with the subtraction approach at 96-276 ms (the MMR time range), defined by spatio-temporal cluster permutation analysis. In applying weighted-BSST/k, the deviant responses were given a constant weight using a rectangular window over the MMR time range. The ERF elicited by the weighted deviant responses demonstrated one or a few dominant components representing the MMR that fitted well with the sensor-space analysis using the conventional subtraction approach. In contrast, infomax and weighted infomax revealed many minor or pseudo components as constituents of the MMR. Our single-trial, contrast-free approach may assist in using the MMR in basic and clinical research, and it opens a new and potentially useful way to analyze event-related MEG/EEG data.
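The weighting step named above (a constant rectangular window over the 96-276 ms MMR range, applied to deviant epochs before concatenation) can be sketched as follows; the epoch content and sampling rate are toy assumptions, not the authors' recordings:

```python
import numpy as np

# Sketch of the weight-assignment step: a constant (rectangular) weight over
# the 96-276 ms MMR range, zero elsewhere, applied to each deviant epoch
# before concatenation. Epoch data and sampling rate are toy assumptions.
fs = 1000                                   # Hz, assumed
epoch = np.ones(500)                        # one 500 ms deviant epoch (toy data)
t_ms = np.arange(epoch.size) * 1000.0 / fs  # time axis in milliseconds

weights = np.where((t_ms >= 96) & (t_ms < 276), 1.0, 0.0)  # rectangular window
weighted_epoch = epoch * weights

# "Deviant concatenation": stack several weighted deviant epochs end to end.
concatenated = np.concatenate([weighted_epoch] * 3)
```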
Affiliation(s)
- Teppei Matsubara
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Japan Society for the Promotion of Science, Tokyo, Japan
- International University of Health and Welfare, Fukuoka, Japan
- Steven Stufflebeam
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard Medical School, Boston, MA, United States
- Yoshinobu Goto
- Department of Physiology, School of Medicine, International University of Health and Welfare, Narita, Japan
- Shozo Tobimatsu
- Department of Orthoptics, Faculty of Medicine, Fukuoka International University of Health and Welfare, Fukuoka, Japan
- Kuniharu Kishida
- Gifu University, Gifu, Japan
- Hermitage of Magnetoencephalography, Osaka, Japan
13
Tian X, Liu Y, Guo Z, Cai J, Tang J, Chen F, Zhang H. Cerebral Representation of Sound Localization Using Functional Near-Infrared Spectroscopy. Front Neurosci 2022; 15:739706. [PMID: 34970110] [PMCID: PMC8712652] [DOI: 10.3389/fnins.2021.739706]
Abstract
Sound localization is an essential part of auditory processing. However, the cortical representation of identifying the direction of sound sources presented in the sound field, as measured with functional near-infrared spectroscopy (fNIRS), is currently unknown. In this study, we therefore used fNIRS to investigate the cerebral representation of different sound sources. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block-design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts at two intensity levels (48 dB/58 dB) were randomly applied via five loudspeakers (-90°/-30°/0°/+30°/+90°). Sound localization task performance was collected, and simultaneous signals from auditory processing cortical fields were recorded for analysis using a support vector machine (SVM). The results showed average classification accuracies of 73.6%, 75.6%, and 77.4% for -90°/0°, 0°/+90°, and -90°/+90° at the high intensity, and 70.6%, 73.6%, and 78.6% at the low intensity. An increase of oxyhemoglobin (oxy-Hb) was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxy-Hb response showed different neural activity patterns between lateral and frontal sources in the AC and dlPFC. Our results may serve as a basis for further research on the use of fNIRS in spatial auditory studies.
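The decoding idea above (classifying source location from cortical hemodynamic patterns) can be illustrated with a dependency-free sketch. Note that the study used a support vector machine; a simpler nearest-centroid classifier stands in here, and all "responses" are simulated:

```python
import numpy as np

# Toy location-decoding sketch: classify which of two sources (-90° vs +90°)
# produced an oxy-Hb channel pattern. The study used an SVM; a nearest-
# centroid classifier stands in to keep the sketch dependency-free.
rng = np.random.default_rng(1)
n_trials, n_channels = 40, 8
pattern_left = rng.normal(0.0, 1.0, n_channels)                  # assumed -90° pattern
pattern_right = pattern_left + rng.normal(0.8, 0.2, n_channels)  # assumed +90° pattern
X = np.vstack([pattern_left + 0.3 * rng.standard_normal((n_trials, n_channels)),
               pattern_right + 0.3 * rng.standard_normal((n_trials, n_channels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Split trials into train/test halves, fit class centroids, then classify
# held-out trials by the nearest centroid.
train = np.r_[0:20, 40:60]
test = np.r_[20:40, 60:80]
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == y[test]).mean()
```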
Affiliation(s)
- Xuexin Tian
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yimeng Liu
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zengzhi Guo
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Jieqing Cai
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jie Tang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Physiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Southern Medical University, Guangzhou, China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongzheng Zhang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China
14
Kim JH, Shim L, Bahng J, Lee HJ. Proficiency in Using Level Cue for Sound Localization Is Related to the Auditory Cortical Structure in Patients With Single-Sided Deafness. Front Neurosci 2021; 15:749824. [PMID: 34707477] [PMCID: PMC8542703] [DOI: 10.3389/fnins.2021.749824]
Abstract
Spatial hearing, which largely relies on binaural time/level cues, is a challenge for patients with asymmetric hearing. The degree of the deficit is highly variable, and better-than-expected sound localization performance is frequently reported. Studies of the compensatory mechanism revealed that monaural level cues and monaural spectral cues contribute to this variable behavior in patients who lack binaural spatial cues. However, changes in the use of monaural level cues have not yet been investigated separately. In this study, the use of the level cue in sound localization was measured using 1 kHz stimuli at a fixed level in patients with single-sided deafness (SSD), the most severe form of asymmetric hearing. The mean absolute error (MAE) was calculated and related to the duration of, and age at onset of, SSD. To elucidate the biological correlate of this variable behavior, sound localization ability was compared with the cortical volume of the parcellated auditory cortex. In both SSD patients (n = 26) and normal controls with one ear acutely plugged (n = 23), localization performance was best on the intact-ear side; otherwise, there was wide interindividual variability. In the SSD group, the MAE on the intact-ear side was worse than that of the acutely plugged controls and deteriorated with longer duration/younger age at SSD onset. On the impaired-ear side, the MAE improved with longer duration/younger age at SSD onset. Performance asymmetry across lateral hemifields decreased in the SSD group, and the maximum decrease was observed with the longest duration/youngest age at SSD onset. The decreased functional asymmetry in patients with right SSD was related to greater cortical volumes in the right posterior superior temporal gyrus and the left planum temporale, which are typically involved in auditory spatial processing. These results suggest that structural plasticity in the auditory cortex is related to behavioral changes in sound localization when monaural level cues are utilized in patients with SSD.
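The MAE measure used above is simply the average absolute deviation between response and target azimuths; the values below are illustrative, not the study's data:

```python
import numpy as np

# MAE sketch: mean absolute difference between response and target azimuths
# (degrees). Target and response values are illustrative only.
targets = np.array([-90.0, -30.0, 0.0, 30.0, 90.0])     # loudspeaker azimuths
responses = np.array([-75.0, -40.0, 10.0, 25.0, 60.0])  # a listener's responses

mae = np.mean(np.abs(responses - targets))  # 14.0° for these toy values
```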
Affiliation(s)
- Ja Hee Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Leeseul Shim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Junghwa Bahng
- Department of Audiology and Speech-Language Pathology, Hallym University of Graduate Studies, Seoul, South Korea
- Hyo-Jeong Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
15
Schäfer E, Vedoveli AE, Righetti G, Gamerdinger P, Knipper M, Tropitzsch A, Karnath HO, Braun C, Li Hegner Y. Activities of the Right Temporo-Parieto-Occipital Junction Reflect Spatial Hearing Ability in Cochlear Implant Users. Front Neurosci 2021; 15:613101. [PMID: 33776632] [PMCID: PMC7994335] [DOI: 10.3389/fnins.2021.613101]
Abstract
Spatial hearing is critical not only for orienting ourselves in space, but also for following a conversation with multiple speakers in a complex sound environment. The hearing of people with severe sensorineural hearing loss can be restored by cochlear implants (CIs), albeit with large outcome variability, and the causes of this variability remain incompletely understood. Despite the CI-based restoration of the peripheral auditory input, central auditory processing might still not function fully. Here we developed a multi-modal repetition suppression (MMRS) paradigm capable of capturing stimulus-property-specific processing, in order to identify the neural correlates of spatial hearing and potential central neural indexes useful for the rehabilitation of sound localization in CI users. To this end, 17 normal-hearing and 13 CI participants underwent the MMRS task while their brain activity was recorded with 256-channel electroencephalography (EEG). The participants were required to discriminate the location of a probe sound presented from a horizontal array of loudspeakers. The EEG MMRS response following the probe sound was elicited at various brain regions and at different stages of processing. Interestingly, the more similar the differential MMRS response in the right temporo-parieto-occipital (TPO) junction of a CI user was to that of the normal-hearing group, the better was that user's spatial hearing performance. Based on this finding, we suggest that the differential MMRS response at the right TPO junction could serve as a central neural index of intact or impaired sound localization abilities.
Affiliation(s)
- Marlies Knipper
- Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Anke Tropitzsch
- Comprehensive Cochlear Implant Center, ENT Clinic Tübingen, Tübingen University Hospital, Tübingen, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Christoph Braun
- MEG Center, University of Tübingen, Tübingen, Germany; CIMeC, Center for Mind/Brain Research, University of Trento, Rovereto, Italy; DiPsCo, Department of Psychology and Cognitive Science, Rovereto, Italy
- Yiwen Li Hegner
- MEG Center, University of Tübingen, Tübingen, Germany; Center of Neurology, Department of Neurology and Epileptology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
16
Sehatpour P, Avissar M, Kantrowitz JT, Corcoran CM, De Baun HM, Patel GH, Girgis RR, Brucato G, Lopez-Calderon J, Silipo G, Dias E, Martinez A, Javitt DC. Deficits in Pre-attentive Processing of Spatial Location and Negative Symptoms in Subjects at Clinical High Risk for Schizophrenia. Front Psychiatry 2021; 11:629144. [PMID: 33603682] [PMCID: PMC7884473] [DOI: 10.3389/fpsyt.2020.629144]
Abstract
Deficits in mismatch negativity (MMN) generation are among the best-established biomarkers for cognitive dysfunction in schizophrenia (Sz) and predict conversion to Sz among individuals at symptomatic clinical high risk (CHR). Impairments in MMN index dysfunction at both subcortical and cortical components of the early auditory system. To date, the large majority of studies have used deviants that differ from preceding standards in either tonal frequency (pitch) or duration. By contrast, MMN to sound location deviation has been studied only to a limited degree in Sz and has not previously been examined in CHR populations. Here, we evaluated location MMN across Sz and CHR using an optimized multi-deviant pattern that included a location deviant defined by interaural time difference (ITD) stimuli, along with pitch, duration, frequency modulation (FM), and intensity deviants, in a sample of 42 Sz, 33 CHR, and 28 healthy control (HC) subjects. In addition, we obtained resting-state functional connectivity (rsfMRI) data on CHR subjects. Sz subjects showed impaired MMN across all deviant types, along with a strong correlation between MMN deficits and impaired neurocognitive function. In this sample of largely non-converting CHR subjects, no deficits were observed in either pitch or duration MMN. By contrast, CHR subjects showed significant impairments in location MMN generation, particularly over the right hemisphere, and a significant correlation between impaired location MMN and negative symptoms, including deterioration of role function. In addition, significant correlations were observed between location MMN and rsfMRI measures involving brainstem circuits. In general, location detection using ITD stimuli depends upon precise processing within midbrain regions and provides rapid and robust reorientation of attention. The present findings reinforce the utility of MMN as a pre-attentive index of auditory cognitive dysfunction in Sz and suggest that location MMN may index brain circuits distinct from those indexed by other deviant types.
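The location deviants above were defined with ITD stimuli. As background, the ITD produced by a source at a given azimuth is commonly approximated with Woodworth's spherical-head model (a standard textbook formula, not taken from this study); the head radius and speed of sound below are typical assumed values:

```python
import math

# Woodworth spherical-head approximation of the interaural time difference
# (standard textbook model). Head radius and sound speed are typical
# assumed values, not parameters from the study.
def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """ITD in seconds for a source at the given azimuth (0 deg = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

itd_front = woodworth_itd(0.0)    # no interaural delay for a frontal source
itd_side = woodworth_itd(90.0)    # maximal delay, roughly 0.65 ms
```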
Affiliation(s)
- Pejman Sehatpour
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Michael Avissar
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Joshua T. Kantrowitz
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Heloise M. De Baun
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Gaurav H. Patel
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Ragy R. Girgis
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Gary Brucato
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Javier Lopez-Calderon
- Centro de Investigaciones Médicas, Escuela de Medicina, Universidad de Talca, Talca, Chile
- Gail Silipo
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Elisa Dias
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Antigona Martinez
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Daniel C. Javitt
- College of Physicians and Surgeons, New York State Psychiatric Institute, Columbia University, New York, NY, United States
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
17
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. Neuroimage Clin 2020; 28:102512. [PMID: 33395999] [PMCID: PMC8481911] [DOI: 10.1016/j.nicl.2020.102512]
Abstract
Highlights: We used an oddball paradigm with vocal stimuli to record hemodynamic responses. Brain processing of vocal change relies on the STG, insula, and lingual area. Activity of the change-processing network can be modulated by saliency and emotion. Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behavior. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying pathophysiology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with ASD during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in processing deviancy in vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. The findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions at adult age.
Affiliation(s)
- Agathe Saby
- Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France
18
Rima S, Kerbyson G, Jones E, Schmid MC. Advantage of detecting visual events in the right hemifield is affected by reading skill. Vision Res 2020; 169:41-48. [PMID: 32172007] [PMCID: PMC7103781] [DOI: 10.1016/j.visres.2020.03.001]
Abstract
Visual perception is often not homogeneous across the visual field and can vary depending on situational demands. The reasons behind this inhomogeneity are not clear. Here we show that directing attention from left to right, consistent with a Western reading habit, results in ~32% higher sensitivity to detect transient visual events in the right hemifield. This right-visual-field advantage was largely reduced in individuals with reading difficulties due to developmental dyslexia. Similarly, visual detection became more symmetric in skilled readers when attention was guided opposite to the reading pattern. Taken together, these findings highlight a higher sensitivity in the right visual field for detecting the onset of sudden visual events that is well accounted for by the left-hemisphere-dominated reading habit.
Affiliation(s)
- Samy Rima
- Université de Fribourg, Fribourg, Switzerland
- Grace Kerbyson
- Newcastle University, Institute of Neuroscience, Newcastle upon Tyne, United Kingdom
- Elizabeth Jones
- Newcastle University, Institute of Neuroscience, Newcastle upon Tyne, United Kingdom
- Michael C Schmid
- Newcastle University, Institute of Neuroscience, Newcastle upon Tyne, United Kingdom; Université de Fribourg, Fribourg, Switzerland
19
van der Heijden K, Formisano E, Valente G, Zhan M, Kupers R, de Gelder B. Reorganization of Sound Location Processing in the Auditory Cortex of Blind Humans. Cereb Cortex 2020; 30:1103-1116. [PMID: 31504283] [DOI: 10.1093/cercor/bhz151]
Abstract
Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general, independent of sound location, was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing differed between sighted and blind participants in the planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a two-channel, opponent-coding model of the cortical representation of sound azimuth. These results indicate that early visual deprivation results in a reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
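The two-channel opponent-coding model mentioned above can be sketched as two broadly tuned hemifield channels whose difference encodes azimuth; the sigmoidal tuning and its 30-degree slope are illustrative assumptions, not the fitted model from the study:

```python
import math

# Opponent two-channel sketch: two broadly tuned populations prefer the left
# and right hemifields; their activity difference is a monotonic code for
# azimuth. Tuning shape and slope are illustrative assumptions.
def channel(azimuth_deg, preferred_side):
    """Sigmoidal hemifield tuning; preferred_side is +1 (right) or -1 (left)."""
    return 1.0 / (1.0 + math.exp(-preferred_side * azimuth_deg / 30.0))

def opponent_code(azimuth_deg):
    """Right-minus-left channel difference."""
    return channel(azimuth_deg, +1) - channel(azimuth_deg, -1)

# The code is zero straight ahead, antisymmetric, and increases with azimuth.
codes = [opponent_code(a) for a in (-90.0, -45.0, 0.0, 45.0, 90.0)]
```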
Affiliation(s)
- Kiki van der Heijden
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Elia Formisano
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands; Maastricht Center for Systems Biology, Maastricht University, 6200 MD Maastricht, the Netherlands
- Giancarlo Valente
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Minye Zhan
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Ron Kupers
- BRAINlab and Neuropsychiatry Laboratory, Faculty of Health and Medical Sciences, Department of Neuroscience and Pharmacology, University of Copenhagen, 2200 Copenhagen, Denmark; Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar Street, New Haven, CT 06520, USA
- Beatrice de Gelder
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, UK
20
Kopco N, Doreswamy KK, Huang S, Rossi S, Ahveninen J. Cortical auditory distance representation based on direct-to-reverberant energy ratio. Neuroimage 2020; 208:116436. [PMID: 31809885] [PMCID: PMC6997045] [DOI: 10.1016/j.neuroimage.2019.116436]
Abstract
Auditory distance perception and its neuronal mechanisms are poorly understood, mainly because 1) it is difficult to separate distance processing from intensity processing, 2) multiple intensity-independent distance cues are often available, and 3) the cues are combined in a context-dependent way. A recent fMRI study identified a human auditory cortical area representing intensity-independent distance for sources presented along the interaural axis (Kopco et al., PNAS, 109, 11019-11024). For these sources, two intensity-independent cues are available: interaural level difference (ILD) and direct-to-reverberant energy ratio (DRR). Thus, the observed activations may have reflected contributions not only from distance-related neuron populations but also from direction-encoding populations sensitive to ILD. Here, the paradigm from the previous study was used to examine DRR-based distance representation for sounds originating in front of the listener, where ILD is not available. In a virtual environment, we performed behavioral and fMRI experiments, combined with computational analyses, to identify the neural representation of distance based on DRR. The stimuli varied in distance (15-100 cm) while their received intensity was varied randomly and independently of distance. Behavioral performance showed that intensity-independent distance discrimination is accurate for frontal stimuli, even though it is worse than for lateral stimuli. fMRI activations for sounds varying in frontal distance, as compared with sounds varying only in intensity, increased bilaterally in the posterior banks of Heschl's gyri, the planum temporale, and posterior superior temporal gyrus regions. Taken together, these results suggest that posterior human auditory cortex areas contain neuron populations that are sensitive to distance independent of intensity and of the binaural cues relevant for directional hearing.
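The DRR cue at the center of this study can be illustrated with a toy computation on a synthetic impulse response: the ratio of energy in a short window around the direct-path arrival to energy in the reverberant tail, in dB. The impulse response, decay constant, and 2.5 ms direct-sound window below are assumptions for illustration only:

```python
import numpy as np

# DRR sketch on a synthetic room impulse response: direct-path energy
# versus reverberant-tail energy, expressed in dB. All parameters here
# are illustrative assumptions, not values from the study.
fs = 16000
rng = np.random.default_rng(2)
h = np.zeros(fs // 4)                       # 250 ms impulse response
h[0] = 1.0                                  # direct sound
n_tail = h.size - 40
h[40:] = (0.02 * rng.standard_normal(n_tail)
          * np.exp(-np.arange(n_tail) / (0.05 * fs)))  # decaying diffuse tail

split = int(0.0025 * fs)                    # direct-sound window: first 2.5 ms
drr_db = 10.0 * np.log10(np.sum(h[:split] ** 2) / np.sum(h[split:] ** 2))
```

As a source moves farther away, direct energy falls while reverberant energy stays roughly constant, so the DRR decreases, which is what makes it an intensity-independent distance cue.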
Affiliation(s)
- Norbert Kopco
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia; Hearing Research Center, Boston University, Boston, MA, 02215, USA
- Keerthi Kumar Doreswamy
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Stephanie Rossi
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
Collapse
|
21
|
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and must focus our attention on the relevant source in order to segregate it from the competing sources, i.e., the 'cocktail party effect'. While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli that were moving within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as the unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli, and the task involved spatially localizing embedded targets. In a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli by the delta phase of the EEG.
In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to envelope-based decoders. These results suggest a possible dissociation of the delta phase and alpha power of the EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
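The backward-model approach this study describes can be sketched as a time-lagged ridge regression that maps multichannel EEG onto a source trajectory. This is an illustration under stated assumptions, not the authors' actual pipeline: the lag count, regularization strength, and simulated data below are all arbitrary.

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each channel: (T, C) -> (T, C * n_lags)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * C:(lag + 1) * C] = eeg[:T - lag]
    return X

def fit_trajectory_decoder(eeg, trajectory, n_lags=16, lam=1e2):
    """Ridge solution of a backward model from EEG (plus lags) to azimuth."""
    X = lagged_design(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ trajectory)

def decode_trajectory(eeg, weights, n_lags=16):
    """Reconstruct a trajectory from previously unseen EEG."""
    return lagged_design(eeg, n_lags) @ weights
```

Fitting on one portion of a recording and correlating decoded with true trajectories on held-out data gives the reconstruction accuracy; fitting separate models for the attended and unattended trajectories is what would permit the single-trial attention decoding mentioned above.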
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
- Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA.
|
22
|
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
|
23
|
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. [PMID: 30651333 DOI: 10.1523/jneurosci.2289-18.2018] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 12/20/2018] [Accepted: 12/21/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for efficiently interacting with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nevertheless significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location. SIGNIFICANCE STATEMENT Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study, therefore, sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes.
Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
|
24
|
Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex. J Neurosci 2018; 38:8574-8587. [PMID: 30126968 DOI: 10.1523/jneurosci.0587-18.2018] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 07/09/2018] [Accepted: 07/19/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements. SIGNIFICANCE STATEMENT According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies.
Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
|
25
|
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. [PMID: 29643021 DOI: 10.1016/j.heares.2018.03.027] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 03/21/2018] [Accepted: 03/28/2018] [Indexed: 12/01/2022]
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and change, as well as Category x Initial Position and Category x Change in Position interactions, while right lateral areas showed a main effect of category and a Category x Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream.
We argue for a third auditory stream, which has its origin in lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
|
26
|
Meuret S, Ludwig A, Predel D, Staske B, Fuchs M. Localization and Spatial Discrimination in Children and Adolescents with Moderate Sensorineural Hearing Loss Tested without Their Hearing Aids. Audiol Neurootol 2018; 22:326-342. [DOI: 10.1159/000485826] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Accepted: 11/27/2017] [Indexed: 11/19/2022] Open
Abstract
The present study investigated two measures of spatial acoustic perception in children and adolescents with sensorineural hearing loss (SNHL) tested without their hearing aids and compared them to those of age-matched controls. Auditory localization was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests to separately address spatial auditory processing based on interaural time and intensity differences. In SNHL children, localization (hit accuracy) was significantly reduced compared to normal-hearing children and intraindividual variability (dispersion) considerably increased. Given the respective impairments, the performance based on interaural time differences (low frequencies) was still better than that based on intensity differences (high frequencies). For MAA, age-matched comparisons yielded not only increased MAA values in SNHL children but also, unlike in normal-hearing children, no decrease with increasing age. Deficits in MAA were most apparent in the frontal azimuth. Thus, children with SNHL do not seem to benefit from frontal positions of the sound sources as normal-hearing children do. The results give an indication that the processing of spatial cues in SNHL children is restricted, which could also imply problems regarding speech understanding in challenging hearing situations.
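MAA measurements like these rely on adaptive psychophysics. Below is a hypothetical sketch of a 2-down/1-up staircase (which converges near the 70.7%-correct point) run against a simulated listener; the logistic psychometric function and all parameter values are illustrative assumptions, not the procedure used in the study.

```python
import numpy as np

def simulate_maa_staircase(true_maa=8.0, slope=2.0, start=20.0, step=2.0,
                           n_trials=300, seed=1):
    """2-down/1-up staircase estimate of a minimum audible angle (degrees).
    The simulated listener's probability of a correct left/right judgement
    follows a logistic function of angular separation (illustrative)."""
    rng = np.random.default_rng(seed)
    angle, run, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(angle - true_maa) / slope))
        if rng.random() < p_correct:
            run += 1
            if run == 2:                      # two correct in a row: decrease angle
                run = 0
                if last_dir == +1:
                    reversals.append(angle)
                last_dir = -1
                angle = max(angle - step, 0.5)
        else:                                 # any error: increase angle
            run = 0
            if last_dir == -1:
                reversals.append(angle)
            last_dir = +1
            angle += step
    return float(np.mean(reversals[-8:]))     # average over the last reversals
```

Averaging the last few reversal angles yields the threshold estimate; running the staircase at different azimuth positions (midline vs. lateral) is how position-dependent MAA differences like those reported above would be mapped.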
|
27
|
Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018; 98:262-268. [PMID: 29183630 PMCID: PMC5771843 DOI: 10.1016/j.cortex.2017.10.020] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 08/19/2017] [Accepted: 10/12/2017] [Indexed: 10/18/2022]
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there are, in fact, more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
Affiliation(s)
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany.
|
28
|
Tissieres I, Fornari E, Clarke S, Crottaz-Herbette S. Supramodal effect of rightward prismatic adaptation on spatial representations within the ventral attentional system. Brain Struct Funct 2017; 223:1459-1471. [PMID: 29151115 DOI: 10.1007/s00429-017-1572-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2017] [Accepted: 11/15/2017] [Indexed: 10/18/2022]
Abstract
Rightward prismatic adaptation (R-PA) was shown to alleviate not only visuo-spatial but also auditory symptoms in neglect. The neural mechanisms underlying the effect of R-PA have been previously investigated in visual tasks, demonstrating a shift of hemispheric dominance for visuo-spatial attention from the right to the left hemisphere both in normal subjects and in patients. We have investigated whether the same neural mechanisms underlie the supramodal effect of R-PA on auditory attention. Normal subjects underwent a brief session of R-PA, which was preceded and followed by an fMRI evaluation during which subjects detected targets within the left, central and right space in the auditory or visual modality. R-PA-related changes in activation patterns were found bilaterally in the inferior parietal lobule. In either modality, the representation of the left, central and right space increased in the left IPL, whereas the representation of the right space decreased in the right IPL. Thus, a brief exposure to R-PA modulated the representation of the auditory and visual space within the ventral attentional system. This shift in hemispheric dominance for auditory spatial attention offers a parsimonious explanation for the previously reported effects of R-PA on auditory symptoms in neglect.
Affiliation(s)
- Isabel Tissieres
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland
- Eleonora Fornari
- CIBM (Centre d'Imagerie Biomédicale), Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, 1011, Lausanne, Switzerland
- Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland.
|
29
|
Fitzgerald K, Provost A, Todd J. First-impression bias effects on mismatch negativity to auditory spatial deviants. Psychophysiology 2017; 55. [DOI: 10.1111/psyp.13013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Revised: 08/24/2017] [Accepted: 08/24/2017] [Indexed: 11/30/2022]
Affiliation(s)
- Kaitlin Fitzgerald
- School of Psychology; University of Newcastle; Callaghan New South Wales Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research, University of Newcastle; Callaghan New South Wales Australia
- Alexander Provost
- School of Psychology; University of Newcastle; Callaghan New South Wales Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research, University of Newcastle; Callaghan New South Wales Australia
- Juanita Todd
- School of Psychology; University of Newcastle; Callaghan New South Wales Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research, University of Newcastle; Callaghan New South Wales Australia
|
30
|
The role of auditory cortex in the spatial ventriloquism aftereffect. Neuroimage 2017; 162:257-268. [PMID: 28889003 DOI: 10.1016/j.neuroimage.2017.09.002] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Revised: 08/15/2017] [Accepted: 09/01/2017] [Indexed: 11/21/2022] Open
Abstract
Cross-modal recalibration allows the brain to maintain coherent sensory representations of the world. Using functional magnetic resonance imaging (fMRI), the present study aimed at identifying the neural mechanisms underlying recalibration in an audiovisual ventriloquism aftereffect paradigm. Participants performed a unimodal sound localization task, before and after they were exposed to adaptation blocks, in which sounds were paired with spatially disparate visual stimuli offset by 14° to the right. Behavioral results showed a significant rightward shift in sound localization following adaptation, indicating a ventriloquism aftereffect. Regarding fMRI results, left and right planum temporale (lPT/rPT) were found to respond more to contralateral sounds than to central sounds at pretest. Contrasting posttest with pretest blocks revealed significantly enhanced fMRI-signals in space-sensitive lPT after adaptation, matching the behavioral rightward shift in sound localization. Moreover, a region-of-interest analysis in lPT/rPT revealed that the lPT activity correlated positively with the localization shift for right-side sounds, whereas rPT activity correlated negatively with the localization shift for left-side and central sounds. Finally, using functional connectivity analysis, we observed enhanced coupling of the lPT with left and right inferior parietal areas as well as left motor regions following adaptation and a decoupling of lPT/rPT with contralateral auditory cortex, which scaled with participants' degree of adaptation. Together, the fMRI results suggest that cross-modal spatial recalibration is accomplished by an adjustment of unisensory representations in low-level auditory cortex. Such persistent adjustments of low-level sensory representations seem to be mediated by the interplay with higher-level spatial representations in parietal cortex.
|
31
|
Tuning to Binaural Cues in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:37-53. [PMID: 26466943 DOI: 10.1007/s10162-015-0546-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 09/25/2015] [Indexed: 10/22/2022] Open
Abstract
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
|
32
|
Ruzzoli M, Pirulli C, Mazza V, Miniussi C, Brignani D. The mismatch negativity as an index of cognitive decline for the early detection of Alzheimer's disease. Sci Rep 2016; 6:33167. [PMID: 27616726 PMCID: PMC5018736 DOI: 10.1038/srep33167] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Accepted: 08/12/2016] [Indexed: 01/02/2023] Open
Abstract
Evidence suggests that Alzheimer's disease (AD) is part of a continuum, characterized by long preclinical phases before the onset of clinical symptoms. In several cases, this continuum starts with a syndrome, defined as mild cognitive impairment (MCI), in which daily activities are preserved despite the presence of cognitive decline. The possibility of having a reliable and sensitive neurophysiological marker that can be used for early detection of AD is extremely valuable because of the incidence of this type of dementia. In this study, we aimed to investigate the reliability of the auditory mismatch negativity (aMMN) as a marker of cognitive decline from normal ageing progressing through MCI to AD. We compared the aMMN elicited at frontal and temporal locations by duration-deviant sounds at short (400 ms) and long (4000 ms) inter-trial intervals (ITI) in three groups. We found that, at the short ITI, MCI patients showed only the temporal component of the aMMN and AD patients only the frontal component, whereas healthy elderly participants presented both. At the longer ITI, the aMMN was elicited only in normally ageing subjects, at the temporal locations. Our study provides empirical evidence for the possibility of adopting the aMMN as an index for assessing cognitive decline in pathological ageing.
Affiliation(s)
- Manuela Ruzzoli
- Departament de Tecnologies de la Informació i les Comunicacions, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Cornelia Pirulli
- Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy
- Veronica Mazza
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Carlo Miniussi
- Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Debora Brignani
- Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy
|
33
|
Derey K, Valente G, de Gelder B, Formisano E. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations. Cereb Cortex 2015; 26:450-464. [PMID: 26545618 PMCID: PMC4677988 DOI: 10.1093/cercor/bhv269] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
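The opponent-channel idea tested in this study can be illustrated with a toy model: two broadly tuned channels, each preferring one hemifield, whose normalized difference yields a monotonic, level-invariant code for azimuth. The sigmoidal tuning and the normalization step are illustrative assumptions of this sketch (the study itself subtracts fMRI responses of contralaterally tuned regions in bilateral planum temporale):

```python
import numpy as np

def channel_response(azimuth_deg, preferred_side, slope=20.0):
    """Broad sigmoidal hemifield tuning; preferred_side is +1 (right) or -1 (left)."""
    return 1.0 / (1.0 + np.exp(-preferred_side * azimuth_deg / slope))

def opponent_code(azimuth_deg, gain=1.0):
    """Normalized difference of the two channels; `gain` mimics overall sound level."""
    left = gain * channel_response(azimuth_deg, -1)
    right = gain * channel_response(azimuth_deg, +1)
    return (right - left) / (right + left)
```

The differential signal grows monotonically from left to right azimuths, while the normalization cancels the common gain term, mirroring the level-robust azimuth decoding reported in the study.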
Affiliation(s)
- Kiki Derey
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
|
34
|
Fiehler K, Schütz I, Meller T, Thaler L. Neural Correlates of Human Echolocation of Path Direction During Walking. Multisens Res 2015; 28:195-226. [PMID: 26152058 DOI: 10.1163/22134808-00002491] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Echolocation can be used by blind and sighted humans to navigate their environment. The current study investigated the neural activity underlying processing of path direction during walking. Brain activity was measured with fMRI in three blind echolocation experts, and three blind and three sighted novices. During scanning, participants listened to binaural recordings that had been made prior to scanning while echolocation experts had echolocated during walking along a corridor which could continue to the left, right, or straight ahead. Participants also listened to control sounds that contained ambient sounds and clicks, but no echoes. The task was to decide if the corridor in the recording continued to the left, right, or straight ahead, or if they were listening to a control sound. All participants successfully dissociated echo from no-echo sounds; however, echolocation experts were superior at direction detection. We found brain activations associated with processing of path direction (contrast: echo vs. no echo) in the superior parietal lobule (SPL) and inferior frontal cortex (IFC) in each group. In sighted novices, additional activation occurred in the inferior parietal lobule (IPL) and middle and superior frontal areas. Within the framework of the dorso-dorsal and ventro-dorsal pathways proposed by Rizzolatti and Matelli (2003), our results suggest that blind participants may automatically assign directional meaning to the echoes, while sighted participants may apply more conscious, high-level spatial processes. The high similarity of SPL and IFC activations across all three groups, in combination with previous research, also suggests that all participants recruited a multimodal spatial processing system for action (here: locomotion).
|
35
|
Cai Y, Zheng Y, Liang M, Zhao F, Yu G, Liu Y, Chen Y, Chen G. Auditory Spatial Discrimination and the Mismatch Negativity Response in Hearing-Impaired Individuals. PLoS One 2015; 10:e0136299. [PMID: 26305694 PMCID: PMC4549058 DOI: 10.1371/journal.pone.0136299] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2015] [Accepted: 08/02/2015] [Indexed: 12/01/2022] Open
Abstract
The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory sources at the midline and lateral positions, and to explore the possible central processing mechanisms, by measuring the minimal audible angle (MAA) and the mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45°, and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss were recruited; the latter comprised 12 patients with symmetrical hearing loss (SHL) and 24 patients with asymmetrical hearing loss (AHL), of whom 12 had unilateral hearing loss on the left (UHLL) and 12 on the right (UHLR). In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR, and 20 SHL patients) and 20 NH participants. The results showed the MAA thresholds of the NH participants to be significantly lower than those of the HI participants. A significantly smaller MAA threshold was also obtained at the midline position than at the lateral position in both the NH and SHL groups. In the AHL group, however, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared with the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side, and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly under different hearing conditions. A reduced MMN amplitude and prolonged latency, together with bilaterally symmetrical cortical activations over the auditory hemispheres, indicate possible cortical compensatory changes associated with poor behavioral spatial discrimination in individuals with HI.
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Fei Zhao
- Department of Speech Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, Wales
- Department of Hearing and Speech Sciences, Xinhua College, Sun Yat-sen University, Guangzhou, China
- Guangzheng Yu
- Acoustic Lab, Physics Department, South China University of Technology, Guangzhou, 510641, China
- Yu Liu
- Acoustic Lab, Physics Department, South China University of Technology, Guangzhou, 510641, China
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Guisheng Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
|
36
|
Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. [DOI: 10.1016/j.neuropsychologia.2015.06.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2014] [Revised: 06/07/2015] [Accepted: 06/10/2015] [Indexed: 01/29/2023]
|
37
|
Freigang C, Richter N, Rübsamen R, Ludwig AA. Age-related changes in sound localisation ability. Cell Tissue Res 2015; 361:371-86. [PMID: 26077928 DOI: 10.1007/s00441-015-2230-8] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2015] [Accepted: 05/26/2015] [Indexed: 10/23/2022]
Abstract
Auditory spatial processing is an important ability in everyday life and allows the processing of omnidirectional information. In this review, we report and compare data from psychoacoustic and electrophysiological experiments on sound localisation accuracy and auditory spatial discrimination in infants, children, and young and older adults. The ability to process auditory spatial information changes over the lifetime: the perception of acoustic space develops from an initially imprecise representation in infants and young children to a concise representation of spatial positions in young adults, and the respective performance declines again in older adults. Localisation accuracy shows a strong deterioration in older adults, presumably due to declined processing of binaural temporal and monaural spectro-temporal cues. Compared with young adults, the thresholds for spatial discrimination were strongly elevated in both young children and older adults. Despite the consistency of the measured values, the underlying causes of the impaired performance might differ: (1) the effect is due to reduced cognitive processing ability and is thus task-related; (2) the effect is due to reduced information about the auditory space and is caused by declined processing in auditory brainstem circuits; or (3) the auditory space processing regime in young children is still undergoing developmental changes and the interrelation with spatial visual processing is not yet established. In conclusion, we argue that, for studying auditory space processing over the life course, it is beneficial to investigate spatial discrimination ability rather than localisation accuracy, because the former more reliably indicates changes in processing ability.
Affiliation(s)
- Claudia Freigang
- Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Talstrasse 33, 04103 Leipzig, Germany
|
38
|
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873 DOI: 10.1016/j.neuroimage.2015.06.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022] Open
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in the hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in the spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
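The interaural-time-difference cue manipulated in this study can be illustrated with the standard spherical-head (Woodworth) approximation; this is only a sketch of the cue itself, and the 0.6 ms device delay below is an illustrative value, not one taken from the study.

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a spherical head
    of radius head_radius (m), Woodworth's frontal-azimuth formula."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A constant unilateral delay, as from an earplug device, adds to the natural ITD
# at every azimuth, shifting the whole cue-to-location mapping.
device_delay = 0.6e-3  # illustrative value only
for az in (0, 30, 60, 90):
    natural = itd_woodworth(az)
    shifted = natural + device_delay
    print(f"{az:3d} deg: ITD {natural * 1e6:6.1f} us -> {shifted * 1e6:6.1f} us with delay")
```

The maximum natural ITD for an average adult head comes out near 660 µs at 90° azimuth, which is why a sub-millisecond device delay is enough to displace perceived locations substantially.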
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada
|
39
|
Badcock JC. A Neuropsychological Approach to Auditory Verbal Hallucinations and Thought Insertion - Grounded in Normal Voice Perception. ACTA ACUST UNITED AC 2015; 7:631-652. [PMID: 27617046 PMCID: PMC4995233 DOI: 10.1007/s13164-015-0270-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
A neuropsychological perspective on auditory verbal hallucinations (AVH) links key phenomenological features of the experience, such as voice location and identity, to functionally separable pathways in normal human audition. Although this auditory processing stream (APS) framework has proven valuable for integrating research on phenomenology with cognitive and neural accounts of hallucinatory experiences, it has not yet been applied to other symptoms presumed to be closely related to AVH – such as thought insertion (TI). In this paper, I propose that an APS framework offers a useful way of thinking about the experience of TI as well as AVH, providing a common conceptual framework for both. I argue that previous self-monitoring theories struggle to account for both the differences and similarities in the characteristic features of AVH and TI, which can be readily accommodated within an APS framework. Furthermore, the APS framework can be integrated with predictive processing accounts of psychotic symptoms; makes predictions about potential sites of prediction error signals; and may offer a template for understanding a range of other symptoms beyond AVH and TI.
Affiliation(s)
- Johanna C Badcock
- Centre for Clinical Research in Neuropsychiatry, School of Psychiatry and Clinical Neurosciences, University of Western Australia, Crawley, 6009 Western Australia
|
40
|
Zhang X, Zhang Q, Hu X, Zhang B. Neural representation of three-dimensional acoustic space in the human temporal lobe. Front Hum Neurosci 2015; 9:203. [PMID: 25932011 PMCID: PMC4399328 DOI: 10.3389/fnhum.2015.00203] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Accepted: 03/27/2015] [Indexed: 11/13/2022] Open
Abstract
Sound localization is an important function of the human brain, but the underlying cortical mechanisms remain unclear. In this study, we recorded auditory stimuli in three-dimensional space and then replayed the stimuli through earphones during functional magnetic resonance imaging (fMRI). By employing a machine learning algorithm, we successfully decoded sound location from the blood oxygenation level-dependent signals in the temporal lobe. Analysis of the data revealed that different cortical patterns were evoked by sounds from different locations. Specifically, discrimination of sound location along the abscissa axis evoked robust responses in the left posterior superior temporal gyrus (STG) and right mid-STG, discrimination along the elevation (EL) axis evoked robust responses in the left posterior middle temporal lobe (MTL) and right STG, and discrimination along the ordinate axis evoked robust responses in the left mid-MTL and right mid-STG. These results support a distributed representation of acoustic space in human cortex.
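The abstract above does not specify which machine learning algorithm was used, so the following is only a generic sketch of multivoxel-pattern decoding on synthetic data, with a nearest-centroid classifier standing in for whatever method the authors applied; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_train, locations = 50, 40, ("left", "front", "right")

# Synthetic data: each sound location evokes a distinct mean voxel pattern plus noise.
prototypes = {loc: rng.normal(0.0, 1.0, n_voxels) for loc in locations}

def simulate(loc, n_trials):
    """Generate n_trials noisy voxel-pattern observations for one location."""
    return prototypes[loc] + rng.normal(0.0, 0.8, (n_trials, n_voxels))

train = {loc: simulate(loc, n_train) for loc in locations}
centroids = {loc: patterns.mean(axis=0) for loc, patterns in train.items()}

def decode(pattern):
    # Classify a single trial as the location whose centroid is nearest.
    return min(centroids, key=lambda loc: np.linalg.norm(pattern - centroids[loc]))

test_set = [(loc, p) for loc in locations for p in simulate(loc, 20)]
accuracy = np.mean([decode(p) == loc for loc, p in test_set])
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / len(locations):.2f})")
```

Above-chance accuracy on held-out trials is the standard evidence that the voxel patterns carry location information, which is the logic behind the decoding result reported in the abstract.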
Affiliation(s)
- Xiaolu Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Qingtian Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
- Bo Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
|
41
|
Abstract
The auditory cortex is a network of cortical areas that receives inputs from the subcortical auditory pathways in the brainstem and thalamus. Through an elaborate network of intrinsic and extrinsic connections, the auditory cortex is thought to bring about the conscious perception of sound and provide a basis for the comprehension and production of meaningful utterances. In this chapter, the organization of the auditory cortex is described with an emphasis on its anatomic features and the flow of information within the network. These features are then used to introduce key neurophysiologic concepts that are being intensively studied in humans and animal models. The discussion is presented in the context of our working model of the primate auditory cortex and its extensions to humans. The material is organized around six underlying principles, which reflect distinct, but related, aspects of anatomic and physiologic organization: (1) the division of auditory cortex into regions; (2) the subdivision of regions into areas; (3) tonotopic organization of areas; (4) thalamocortical connections; (5) serial and parallel organization of connections; and (6) topographic relationships between auditory and auditory-related areas. Although the functional roles of the various components of this network remain poorly defined, a more complete understanding is emerging from ongoing studies that link auditory behavior to its anatomic and physiologic substrates.
Affiliation(s)
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine and Department of Psychology, Vanderbilt University, Nashville, TN, USA.
|
42
|
Audio-visual synchrony modulates the ventriloquist illusion and its neural/spatial representation in the auditory cortex. Neuroimage 2014; 98:425-34. [DOI: 10.1016/j.neuroimage.2014.04.077] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Revised: 04/25/2014] [Accepted: 04/30/2014] [Indexed: 11/20/2022] Open
|
43
|
Rinne T, Ala-Salomäki H, Stecker GC, Pätynen J, Lokki T. Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks. Front Neurosci 2014; 8:220. [PMID: 25120423 PMCID: PMC4112909 DOI: 10.3389/fnins.2014.00220] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Accepted: 07/05/2014] [Indexed: 11/13/2022] Open
Abstract
Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task.
Affiliation(s)
- Teemu Rinne
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- Heidi Ala-Salomäki
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- G Christopher Stecker
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Jukka Pätynen
- Department of Media Technology, Aalto University School of Science, Espoo, Finland
- Tapio Lokki
- Department of Media Technology, Aalto University School of Science, Espoo, Finland
|
44
|
Shrem T, Deouell LY. Frequency-dependent auditory space representation in the human planum temporale. Front Hum Neurosci 2014; 8:524. [PMID: 25100973 PMCID: PMC4106454 DOI: 10.3389/fnhum.2014.00524] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2014] [Accepted: 06/27/2014] [Indexed: 12/04/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) findings suggest that a part of the planum temporale (PT) is involved in representing the spatial properties of acoustic information. Here, we tested whether this representation of space is frequency-dependent or generalizes across spectral content, as required of high-order sensory representations. Using sounds with two different spectral contents and two spatial locations in an individually tailored virtual acoustic environment, we compared three conditions in a sparse-fMRI experiment: Single Location, in which both sounds were presented from one location; Fixed Mapping, in which there was a one-to-one mapping between the two sounds and the two locations; and Mixed Mapping, in which the two sounds were equally likely to appear at either of the two locations. We surmised that only neurons tuned to both location and frequency should be differentially adapted by the Mixed and Fixed mappings. Replicating our previous findings, we found adaptation to spatial location in the PT. Importantly, activation was higher for Mixed Mapping than for Fixed Mapping blocks, even though the two sounds and the two locations appeared equally often in both conditions. These results show that spatially tuned neurons in the human PT are not invariant to the spectral content of sounds.
Affiliation(s)
- Talia Shrem
- Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel
- Leon Y Deouell
- Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
|
45
|
Ahveninen J, Huang S, Nummenmaa A, Belliveau JW, Hung AY, Jääskeläinen IP, Rauschecker JP, Rossi S, Tiitinen H, Raij T. Evidence for distinct human auditory cortex regions for sound location versus identity processing. Nat Commun 2014; 4:2585. [PMID: 24121634 PMCID: PMC3932554 DOI: 10.1038/ncomms3585] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2013] [Accepted: 09/10/2013] [Indexed: 11/16/2022] Open
Abstract
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School-Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Building 149, 13th Street, Charlestown, Massachusetts 02129, USA
|
46
|
Preattentive processing of horizontal motion, radial motion, and intensity changes of sounds. Neuroreport 2014; 24:861-5. [PMID: 24022175 DOI: 10.1097/wnr.0000000000000006] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Localization of sound sources is critical for an appropriate behavioral response. This is true not only for localization in the horizontal plane but also for localization in depth. Depth ranging of sound sources relies on various distance cues, among them sound intensity. In this study, we measured human electroencephalography and compared mismatch negativity (MMN) amplitudes and latencies for horizontal motion, radial motion, and pure intensity changes in the free field. We observed similar MMN latencies for horizontal and radial motion, whereas MMN responses to pure intensity changes were comparably delayed. MMN amplitudes and latencies did not differ between approaching and receding sounds. Our data suggest similarly fast processing of horizontal and radial motion, whereas pure intensity changes are possibly processed with less priority.
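The intensity cue for depth ranging mentioned in the abstract follows the free-field inverse-square law: the level of a point source drops by about 6 dB per doubling of distance. A minimal sketch of that relationship, not taken from the study itself:

```python
import math

def level_change_db(d1, d2):
    """Free-field level change (dB) when a point source moves from distance d1 to d2 (m),
    by the inverse-square law: delta_L = 20 * log10(d1 / d2)."""
    return 20.0 * math.log10(d1 / d2)

print(f"receding 1 m -> 2 m:    {level_change_db(1.0, 2.0):+.1f} dB")
print(f"approaching 2 m -> 1 m: {level_change_db(2.0, 1.0):+.1f} dB")
```

This is why a pure intensity change can stand in for radial motion in such experiments: approaching and receding sources produce symmetric level increments and decrements.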
|
47
|
Pérez-González D, Malmierca MS. Adaptation in the auditory system: an overview. Front Integr Neurosci 2014; 8:19. [PMID: 24600361 PMCID: PMC3931124 DOI: 10.3389/fnint.2014.00019] [Citation(s) in RCA: 105] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Accepted: 02/05/2014] [Indexed: 11/13/2022] Open
Abstract
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels of the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
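The firing-rate adaptation described for auditory nerve fibers is commonly modeled as an exponential decay from an onset rate toward a steady-state rate. The sketch below uses that generic single-time-constant model with illustrative parameter values; it is not a model from the review itself.

```python
import math

def adapted_rate(t, r_onset=200.0, r_steady=60.0, tau=0.040):
    """Firing rate (spikes/s) at time t (s) after stimulus onset, decaying
    exponentially from r_onset to r_steady with time constant tau (s)."""
    return r_steady + (r_onset - r_steady) * math.exp(-t / tau)

# The high onset rate emphasizes the start of the stimulus, as noted in the abstract.
for t_ms in (0, 20, 40, 80, 160):
    print(f"t = {t_ms:3d} ms: {adapted_rate(t_ms / 1000.0):6.1f} spikes/s")
```

The large onset-to-steady-state ratio is what makes such a response an onset emphasizer: a sustained stimulus is reported strongly only when it begins.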
Affiliation(s)
- David Pérez-González
- Auditory Neurophysiology Laboratory (Lab 1), Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain
- Manuel S Malmierca
- Auditory Neurophysiology Laboratory (Lab 1), Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain; Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Salamanca, Spain
|
48
|
Kusmierek P, Rauschecker JP. Selectivity for space and time in early areas of the auditory dorsal stream in the rhesus monkey. J Neurophysiol 2014; 111:1671-85. [PMID: 24501260 DOI: 10.1152/jn.00436.2013] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The respective roles of ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: Neurons in a caudomedial area more precisely followed temporal stimulus structure but were less selective to spatial location. Response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound source azimuth and elevation, but responses were slower and matching to temporal sound structure was poorer. In contrast to the primary area and other regions studied previously, latencies in the caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream.
Affiliation(s)
- Pawel Kusmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
|
49
|
Thaler L, Milne JL, Arnott SR, Kish D, Goodale MA. Neural correlates of motion processing through echolocation, source hearing, and vision in blind echolocation experts and sighted echolocation novices. J Neurophysiol 2014; 111:112-27. [DOI: 10.1152/jn.00501.2013] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We have shown in previous research (Thaler L, Arnott SR, Goodale MA. PLoS One 6: e20162, 2011) that motion processing through echolocation activates temporal-occipital cortex in blind echolocation experts. Here we investigated how neural substrates of echo-motion are related to neural substrates of auditory source-motion and visual-motion. Three blind echolocation experts and twelve sighted echolocation novices underwent functional MRI scanning while they listened to binaural recordings of moving or stationary echolocation or auditory source sounds located either in left or right space. Sighted participants' brain activity was also measured while they viewed moving or stationary visual stimuli. For each of the three modalities separately (echo, source, vision), we then identified motion-sensitive areas in temporal-occipital cortex and in the planum temporale. We then used a region of interest (ROI) analysis to investigate cross-modal responses, as well as laterality effects. In both sighted novices and blind experts, we found that temporal-occipital source-motion ROIs did not respond to echo-motion, and echo-motion ROIs did not respond to source-motion. This double-dissociation was absent in planum temporale ROIs. Furthermore, temporal-occipital echo-motion ROIs in blind, but not sighted, participants showed evidence for contralateral motion preference. Temporal-occipital source-motion ROIs did not show evidence for contralateral preference in either blind or sighted participants. Our data suggest a functional segregation of processing of auditory source-motion and echo-motion in human temporal-occipital cortex. Furthermore, the data suggest that the echo-motion response in blind experts may represent a reorganization rather than exaggeration of response observed in sighted novices. There is the possibility that this reorganization involves the recruitment of “visual” cortical areas.
Affiliation(s)
- L. Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
- J. L. Milne
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada
- S. R. Arnott
- The Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- D. Kish
- World Access for the Blind, Encino, California
- M. A. Goodale
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada
|
50
|
Takanen M, Raitio T, Santala O, Alku P, Pulkki V. Fusion of spatially separated vowel formant cues. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:4508. [PMID: 25669261 DOI: 10.1121/1.4826181] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Previous studies on fusion in speech perception have demonstrated the ability of the human auditory system to group separate components of speech-like sounds together and consequently to enable the identification of speech despite the spatial separation between the components. Typically, the spatial separation has been implemented using headphone reproduction where the different components evoke auditory images at different lateral positions. In the present study, a multichannel loudspeaker system was used to investigate whether the correct vowel is identified and whether two auditory events are perceived when a noise-excited vowel is divided into two components that are spatially separated. The two components consisted of the even and odd formants. Both the amount of spatial separation between the components and the directions of the components were varied. Neither the spatial separation nor the directions of the components affected the vowel identification. Interestingly, an additional auditory event not associated with any vowel was perceived at the same time when the components were presented symmetrically in front of the listener. In such scenarios, the vowel was perceived from the direction of the odd formant components.
Affiliation(s)
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Tuomo Raitio
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Paavo Alku
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
|