1. Liu H, Bai Y, Zheng Q, Liu J, Zhu J, Ni G. Electrophysiological correlation of auditory selective spatial attention in the "cocktail party" situation. Hum Brain Mapp 2024;45:e26793. [PMID: 39037186; PMCID: PMC11261592; DOI: 10.1002/hbm.26793]
Abstract
The auditory system can selectively attend to a target source in complex environments, a phenomenon known as the "cocktail party" effect. However, the spatiotemporal dynamics of electrophysiological activity associated with auditory selective spatial attention (ASSA) remain largely unexplored. In this study, single-source and multiple-source paradigms were designed to simulate different auditory environments, and microstate analysis was introduced to reveal the electrophysiological correlates of ASSA. Furthermore, cortical source analysis was employed to reveal the neural activity regions of these microstates. The results showed that five microstates, MS1 to MS5, could explain the spatiotemporal dynamics of ASSA. Notably, MS2 and MS3 showed significantly lower partial properties in multiple-source situations than in single-source situations, whereas MS4 had shorter durations and MS5 longer durations in multiple-source situations than in single-source situations. MS1 showed no significant differences between the two situations. Cortical source analysis showed that the activation regions of these microstates transferred initially from the right temporal cortex to the temporal-parietal cortex, and subsequently to the dorsofrontal cortex. Moreover, neural activity in the single-source situations was greater than in the multiple-source situations for MS2 and MS3, correlating with the N1 and P2 components, with the greatest differences observed in the superior temporal gyrus and inferior parietal lobule. These findings suggest that these specific microstates and their associated activation regions may serve as promising substrates for decoding ASSA in complex environments.
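Illustrative sketch (not the study's pipeline): the microstate analysis referred to above reduces to clustering scalp topographies at global field power (GFP) peaks into a few template maps. Below is a minimal, polarity-invariant k-means in Python; the array layout, the five-state choice, and the iteration count are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_microstates(eeg, n_states=5, n_iter=50, seed=0):
    """eeg: (n_channels, n_samples) average-referenced EEG.
    Returns template maps (n_states, n_channels) and labels of GFP peaks."""
    rng = np.random.default_rng(seed)
    gfp = eeg.std(axis=0)                        # global field power
    peaks, _ = find_peaks(gfp)                   # cluster only GFP peaks
    maps = eeg[:, peaks].T                       # (n_peaks, n_channels)
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    templates = maps[rng.choice(len(maps), n_states, replace=False)]
    for _ in range(n_iter):
        # Polarity-invariant assignment via absolute spatial correlation.
        labels = np.abs(maps @ templates.T).argmax(axis=1)
        for k in range(n_states):
            members = maps[labels == k]
            if len(members):
                # First right singular vector acts as a polarity-free mean map.
                templates[k] = np.linalg.svd(members, full_matrices=False)[2][0]
    return templates, labels
```

Back-fitting the resulting templates to every time point would then yield the per-state properties (duration, coverage, occurrence) that such a study compares between single-source and multiple-source conditions.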
2. Clarke S, Da Costa S, Crottaz-Herbette S. Dual Representation of the Auditory Space. Brain Sci 2024;14:535. [PMID: 38928534; PMCID: PMC11201621; DOI: 10.3390/brainsci14060535]
Abstract
Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, and the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented a dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here evidence pertaining to the brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that either changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence also benefits from location-linked encoding.
3. Kausel L, Michon M, Soto-Icaza P, Aboitiz F. A multimodal interface for speech perception: the role of the left superior temporal sulcus in social cognition and autism. Cereb Cortex 2024;34:84-93. [PMID: 38696598; DOI: 10.1093/cercor/bhae066]
Abstract
Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently, a third visual pathway specialized in social perception was proposed, in which the right superior temporal sulcus (STS) plays a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to the audiovisual integration of speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs to those in the left, language-dominant hemisphere, sustaining the multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes such as multimodal attention to socially relevant stimuli, we underscore its potential relevance for understanding neurodevelopmental conditions characterized by challenges in social communication, such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life of individuals with atypical neurodevelopment.
4. Wu J, Nie S, Li C, Wang X, Peng Y, Shang J, Diao L, Ding H, Si Q, Wang S, Tong R, Li Y, Sun L, Zhang J. Sound-localization-related activation and functional connectivity of dorsal auditory pathway in relation to demographic, cognitive, and behavioral characteristics in age-related hearing loss. Front Neurosci 2024;18:1353413. [PMID: 38562303; PMCID: PMC10982313; DOI: 10.3389/fnins.2024.1353413]
Abstract
Background: Patients with age-related hearing loss (ARHL) often struggle with tracking and locating sound sources, but the neural signature associated with these impairments remains unclear.

Materials and methods: Using a passive listening task with stimuli from five different horizontal directions in functional magnetic resonance imaging, we defined functional regions of interest (ROIs) of the auditory "where" pathway based on data from previous literature and young normal-hearing listeners (n = 20). We then investigated associations of the demographic, cognitive, and behavioral features of sound localization with task-based activation and connectivity of the ROIs in ARHL patients (n = 22).

Results: We found that increased activation of high-level regions, such as the premotor cortex and inferior parietal lobule, was associated with increased localization accuracy and cognitive function. Moreover, increased connectivity between the left planum temporale and left superior frontal gyrus was associated with increased localization accuracy in ARHL. Increased connectivity between the right primary auditory cortex and right middle temporal gyrus, the right premotor cortex and left anterior cingulate cortex, and the right planum temporale and left lingual gyrus in ARHL was associated with decreased localization accuracy. Among the ARHL patients, task-dependent brain activation and connectivity of certain ROIs were associated with education, hearing loss duration, and cognitive function.

Conclusion: Consistent with the sensory deprivation hypothesis, in ARHL, sound source identification, which requires advanced processing in the high-level cortex, is impaired, whereas right-left discrimination, which relies on the primary sensory cortex, is compensated, with a tendency to recruit additional cognitive and attentional resources toward the auditory sensory cortex. Overall, this study expands our understanding of the neural mechanisms contributing to sound localization deficits in ARHL and may provide a potential imaging biomarker for investigating and predicting anomalous sound localization.
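Schematic of the brain-behavior analysis described here (not the authors' code): correlate per-patient ROI activation or connectivity estimates with localization accuracy, controlling the false discovery rate across ROIs. All names and the FDR level are hypothetical.

```python
import numpy as np
from scipy import stats

def roi_behavior_correlations(roi_values, accuracy, alpha=0.05):
    """roi_values: (n_subjects, n_rois) activation or connectivity estimates;
    accuracy: (n_subjects,) localization accuracy per subject."""
    rs, ps = [], []
    for i in range(roi_values.shape[1]):
        r, p = stats.pearsonr(roi_values[:, i], accuracy)
        rs.append(r)
        ps.append(p)
    r, p = np.asarray(rs), np.asarray(ps)
    # Benjamini-Hochberg step-up procedure across ROIs.
    order = np.argsort(p)
    passed = p[order] <= alpha * (np.arange(1, len(p) + 1) / len(p))
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    significant = np.zeros(len(p), dtype=bool)
    significant[order[:k]] = True
    return r, p, significant
```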
5. Rauschecker JP, Afsahi RK. Anatomy of the auditory cortex then and now. J Comp Neurol 2023;531:1883-1892. [PMID: 38010215; PMCID: PMC10872810; DOI: 10.1002/cne.25560]
Abstract
Using neuroanatomical investigations in the macaque, Deepak Pandya and his colleagues have established the framework for auditory cortex organization, with subdivisions into core and belt areas. This has aided subsequent neurophysiological and imaging studies in monkeys and humans, and a nomenclature building on Pandya's work has also been adopted by the Human Connectome Project. The foundational work by Pandya and his colleagues is highlighted here in the context of subsequent and ongoing studies on the functional anatomy and physiology of auditory cortex in primates, including humans, and their relevance for understanding cognitive aspects of speech and language.
6. Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023;33:6257-6272. [PMID: 36562994; PMCID: PMC10183742; DOI: 10.1093/cercor/bhac501]
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from behavioral musicality ratings as well as with models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with the low-level ASA models, whereas the right IPS correlated with musicality ratings. The shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
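The RSA step lends itself to a compact sketch: build a neural representational dissimilarity matrix (RDM) from an ROI's stimulus-wise response patterns and rank-correlate it with a model RDM derived, for example, from behavioral musicality ratings. A minimal version under these assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(patterns, model_rdm):
    """patterns: (n_stimuli, n_voxels) ROI responses per stimulus;
    model_rdm: condensed (upper-triangle) model dissimilarity vector."""
    neural_rdm = pdist(patterns, metric="correlation")   # condensed vector
    rho, p = spearmanr(neural_rdm, model_rdm)
    return rho, p

# A behavioral model RDM from per-stimulus ratings (illustrative):
# model_rdm = pdist(ratings.reshape(-1, 1), metric="euclidean")
```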
7. Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023;33:3067-3079. [PMID: 35858212; DOI: 10.1093/cercor/bhac261]
Abstract
Previous studies reported that auditory cortices (AC) were mostly activated by sounds coming from the contralateral hemifield. As a result, sound locations could be encoded by integrating opposing activations from the two sides of the AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions, and it remained unknown how sound locations are represented in those high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions by voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations existed not only in the AC but also spanned the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields could be observed in the left AC, right AC, and left FEF. Overall, our results demonstrate that the left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent-hemifield activation representation and a multivariate full-field activation pattern representation.
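Both analyses named in this abstract have simple skeletons, sketched below under stated assumptions: leave-one-run-out decoding of sound azimuth from ROI voxel patterns (the multivariate part), and an opponent-hemifield index computed as the slope of the right-minus-left activation difference across azimuths (the univariate part). Variable names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

def decode_azimuth(patterns, labels, runs):
    """patterns: (n_trials, n_voxels) ROI data; labels: azimuth class per
    trial; runs: run index per trial, used for leave-one-run-out CV."""
    cv = GroupKFold(n_splits=len(np.unique(runs)))
    scores = cross_val_score(SVC(kernel="linear"), patterns, labels,
                             cv=cv, groups=runs)
    return scores.mean()

def opponent_index(act_left, act_right, azimuth):
    # Positive slope: the right-minus-left difference grows systematically
    # across azimuth, the signature of opponent hemifield coding.
    return np.polyfit(azimuth, act_right - act_left, 1)[0]
```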
8. Ohgami Y, Kotani Y, Yoshida N, Akai H, Kunimatsu A, Kiryu S, Inoue Y. The contralateral effects of anticipated stimuli on brain activity measured by ERP and fMRI. Psychophysiology 2023;60:e14189. [PMID: 36166644; PMCID: PMC10077996; DOI: 10.1111/psyp.14189]
Abstract
The present study examined the effects of unilateral stimulus presentation on the right-hemisphere preponderance of the stimulus-preceding negativity (SPN) in an event-related potential (ERP) experiment, and aimed to elucidate whether unilateral stimulus presentation affected activations in the bilateral anterior insula in a functional magnetic resonance imaging (fMRI) experiment. Separate fMRI and ERP experiments were conducted using visual and auditory stimuli, manipulating the position of stimulus presentation (left or right side) in a time estimation task. The ERP experiment revealed a significant right-hemisphere preponderance during left stimulation and no laterality during right stimulation. The fMRI experiment revealed that the left anterior insula was activated only by right-side presentation of auditory and visual stimuli, whereas the right anterior insula was activated by both left and right stimulation. The visual condition retained a contralateral dominance, but the auditory condition showed a right-hemisphere dominance in a localized area. The results of this study indicate that the SPN reflects perceptual anticipation and that the anterior insula is involved in its occurrence.
9. Zhang H, Xie J, Xiao Y, Cui G, Xu G, Tao Q, Gebrekidan YY, Yang Y, Ren Z, Li M. Steady-state auditory motion based potentials evoked by intermittent periodic virtual sound source and the effect of auditory noise on EEG enhancement. Hear Res 2023;428:108670. [PMID: 36563411; DOI: 10.1016/j.heares.2022.108670]
Abstract
Hearing is one of the most important forms of human perception, and humans can track the movement of sounds in complex environments. Building on this ability, this study explored the possibility of eliciting a steady-state brain response with an intermittently, periodically moving sound source. A novel stimulation paradigm with discrete, continuous, and orderly changes of sound source position was designed using virtual sound based on head-related transfer functions (HRTFs). Auditory motion stimulation paradigms with different noise levels were then created by varying the signal-to-noise ratio (SNR). The characteristics of the brain response and the effects of different noise levels were studied by analyzing the electroencephalogram (EEG) signals evoked by the proposed stimulation. Experimental results showed that the proposed paradigm could elicit a novel steady-state auditory evoked potential (AEP), the steady-state motion auditory evoked potential (SSMAEP), and that moderate noise could enhance SSMAEP amplitude and the corresponding brain connectivity. This study enriches the known types of AEPs and provides insights into how the brain processes moving sound sources and how noise affects that processing.
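The stimulus construction can be sketched as follows: step a virtual source through discrete azimuths by convolving a tone with the head-related impulse response (HRIR) pair for each position, then add noise scaled to a target SNR. The hrir_bank lookup and all parameters are illustrative assumptions, not the study's exact stimuli.

```python
import numpy as np

def virtual_moving_source(tone, azimuths, hrir_bank, snr_db, seed=0):
    """tone: mono samples; azimuths: ordered positions (degrees);
    hrir_bank: dict azimuth -> (left_hrir, right_hrir), equal lengths.
    Returns a (2, n_samples) binaural signal with additive noise."""
    rng = np.random.default_rng(seed)
    segments = []
    for az in azimuths:                  # one tone burst per discrete position
        h_left, h_right = hrir_bank[az]
        segments.append(np.stack([np.convolve(tone, h_left),
                                  np.convolve(tone, h_right)]))
    signal = np.concatenate(segments, axis=1)
    noise = rng.standard_normal(signal.shape)
    # Scale noise so that 10*log10(P_signal / P_noise) equals snr_db.
    scale = np.sqrt(np.mean(signal ** 2) /
                    (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return signal + scale * noise
```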
10. Kaufmann B, Cazzoli D, Bartolomeo P, Frey J, Pflugshaupt T, Knobel S, Nef T, Müri R, Nyffeler T. Auditory spatial cueing reduces neglect after right-hemispheric stroke: a proof of concept study. Cortex 2022;148:152-167. [DOI: 10.1016/j.cortex.2021.12.009]
11. Sugimoto F, Kimura M, Takeda Y. Attenuation of auditory N2 for self-modulated tones during continuous actions. Biol Psychol 2021;166:108201. [PMID: 34653547; DOI: 10.1016/j.biopsycho.2021.108201]
Abstract
Event-related potentials (ERPs) elicited by tones generated by one's own discrete actions (e.g., button presses) are attenuated compared to those elicited by externally generated tones. The present study investigated whether ERP attenuation also occurs when the timing or pitch of tones is modulated by continuous actions, for which only a weak association between actions and their auditory consequences is assumed. In a modulation condition, participants modulated the time interval between tones (Experiment 1) or the pitch of tones (Experiment 2) by turning a steering wheel. In a listening condition, participants listened to the same tones as in the modulation condition without any action. The results revealed that the amplitude of the tone-elicited N2 decreased in the modulation condition compared to the listening condition, consistently across the two experiments, suggesting that relatively higher-order auditory processing is mainly influenced by the prediction of action consequences when continuous actions modulate features of auditory stimuli.
12. Simal A, Jolicoeur P. Cortical activation by a salient sound modulates visual temporal order judgments: An electrophysiological study of multisensory attentional processes. Psychophysiology 2021;59:e13943. [PMID: 34536021; DOI: 10.1111/psyp.13943]
Abstract
Previous event-related potential (ERP) studies show that a salient lateral sound activates the visual cortex more strongly contralateral to the sound, observed as an auditory-evoked contralateral occipital positivity (ACOP). Studies have shown that this activation enhances the early cortical processing of co-localized visual stimuli presented afterward, reflected in better detection rates, better discrimination, and sharper perceived contrast. We replicated the ACOP using earphones and tested whether auditory cueing can influence temporal order judgments (TOJ) for two visual stimuli (in a horizontal arrangement), and whether the ACOP predicts the amplitude of this influence. A lateral salient sound was followed, after 150 or 630 ms, by the visual presentation of a pair of disks, one in the left and one in the right hemifield, with variable SOA. The TOJ task was to indicate which disk appeared first or which disk appeared second (controlling for response bias). We observed an ACOP at posterior electrode sites and confirmed our hypothesis that the lateral sound influenced TOJ by accelerating the perception of the disk presented on the cued side, even though the sound was irrelevant to the task. Furthermore, the ACOP amplitude was correlated with this visual perceptual change, indicating that a larger change in brain activity was associated with faster processing of co-localized visual stimuli.
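A cueing effect on TOJ of this kind is commonly quantified as a shift of the point of subjective simultaneity (PSS): fit a logistic psychometric function to the proportion of "right disk first" responses across SOAs, separately for cue-left and cue-right trials, and compare the fitted PSS values. A minimal sketch, with starting values as assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    # Probability of judging the right disk first; SOA > 0 means the
    # right disk physically led.
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

def fit_pss(soas, p_right_first):
    (pss, _), _ = curve_fit(logistic, soas, p_right_first, p0=(0.0, 20.0))
    return pss

# pss_shift = fit_pss(soas, p_cue_left) - fit_pss(soas, p_cue_right)
```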
13. Hanenberg C, Schlüter MC, Getzmann S, Lewald J. Short-Term Audiovisual Spatial Training Enhances Electrophysiological Correlates of Auditory Selective Spatial Attention. Front Neurosci 2021;15:645702. [PMID: 34276281; PMCID: PMC8280319; DOI: 10.3389/fnins.2021.645702]
Abstract
Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple speaker ("cocktail-party") scenario. Forty-five healthy participants were tested, including younger (19-29 years; n = 21) and older (66-76 years; n = 24) age groups. Three conditions of short-term training (duration 15 min) were compared, requiring localization of non-speech targets under "cocktail-party" conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training) or (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, participants were tested in an auditory spatial attention task (15 min), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, participants. Also, at the time of the N2, distributed source analysis revealed an enhancement of neural activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. These findings suggest that cross-modal processes induced by audiovisual-congruency training under "cocktail-party" conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.
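Scoring an N2 peak from subject-average waveforms, as done before and after training here, reduces to finding the most negative point in a time window; the electrode index and the 200-300 ms window below are placeholders, not the study's exact parameters.

```python
import numpy as np

def n2_peak(erp, times, channel, tmin=0.2, tmax=0.3):
    """erp: (n_channels, n_times) average waveform; times: seconds.
    Returns amplitude and latency of the negative-going N2 peak."""
    win = (times >= tmin) & (times <= tmax)
    segment = erp[channel, win]
    i = np.argmin(segment)               # most negative point in the window
    return segment[i], times[win][i]
```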
14. Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021;226:2019-2039. [PMID: 34100151; DOI: 10.1007/s00429-021-02313-9]
Abstract
Many aging adults experience some form of hearing problem that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity and the auditory perceptual difficulties that may result from it, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
15. Czoschke S, Fischer C, Bahador T, Bledowski C, Kaiser J. Decoding Concurrent Representations of Pitch and Location in Auditory Working Memory. J Neurosci 2021;41:4658-4666. [PMID: 33846233; PMCID: PMC8260242; DOI: 10.1523/jneurosci.2999-20.2021]
Abstract
Multivariate analyses of hemodynamic signals serve to identify the storage of specific stimulus contents in working memory (WM). Representations of visual stimuli have been demonstrated both in sensory regions and in higher cortical areas. While previous research has typically focused on the WM maintenance of a single content feature, it remains unclear whether two separate features of a single object can be decoded concurrently. Also, much less evidence exists for representations of auditory compared with visual stimulus features. To address these issues, human participants had to memorize both the pitch and the perceived location of one of two sample sounds. After a delay phase, they were asked to reproduce either pitch or location. At recall, both features showed comparable levels of discriminability. Region of interest (ROI)-based decoding of functional magnetic resonance imaging (fMRI) data during the delay phase revealed feature-selective activity for both pitch and location of a memorized sound in auditory cortex and superior parietal lobule. The latter region showed higher decoding accuracy for location than pitch. In addition, location could be decoded from angular and supramarginal gyrus and both superior and inferior frontal gyrus. The latter region also showed a trend for decoding of pitch. We found no region exclusively coding pitch memory information. In summary, the present study yielded evidence for concurrent representations of pitch and location of a single object both in sensory cortex and in hierarchically higher regions, pointing toward representation formats that enable feature integration within the same anatomic brain regions.

Significance Statement: Decoding of hemodynamic signals serves to identify brain regions involved in the storage of stimulus-specific information in working memory (WM). While to-be-remembered information typically consists of several features, most previous investigations have focused on the maintenance of one memorized feature belonging to one visual object. The present study assessed the concurrent storage of two features of the same object in auditory WM. We found that both pitch and location of memorized sounds were decodable in early sensory areas, in higher-level superior parietal cortex, and, to a lesser extent, in inferior frontal cortex. While auditory cortex is known to process different features in parallel, their concurrent representation in parietal regions may support the integration of object features in WM.
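Conceptually, the concurrent-representation test amounts to decoding two label sets, pitch and location, from the same delay-phase patterns of an ROI; above-chance accuracy for both is the signature of concurrent storage. A hedged sketch with hypothetical variables:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode_both_features(delay_patterns, pitch_labels, location_labels, cv=5):
    """delay_patterns: (n_trials, n_voxels) delay-phase ROI activity.
    Returns cross-validated decoding accuracy for each memorized feature."""
    accuracy = {}
    for name, y in (("pitch", pitch_labels), ("location", location_labels)):
        accuracy[name] = cross_val_score(SVC(kernel="linear"),
                                         delay_patterns, y, cv=cv).mean()
    return accuracy
```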
16. Stephane M, Dzemidzic M, Yoon G. Keeping the inner voice inside the head, a pilot fMRI study. Brain Behav 2021;11:e02042. [PMID: 33484101; PMCID: PMC8035434; DOI: 10.1002/brb3.2042]
Abstract
Introduction: The inner voice is experienced during thinking in words (inner speech) and silent reading and evokes brain activity that is highly similar to that associated with external voices. Yet while the inner voice is experienced in internal space (inside the head), external voices (one's own and those of others) are experienced in external space. In this paper, we investigate the neural basis of this differential spatial localization.

Methods: We used fMRI to examine the difference in brain activity between reading silently and reading aloud. As the task involved reading aloud, the data were first denoised by removing independent components related to head movement. They were subsequently processed using a finite impulse response basis function to address variations in the hemodynamic response. Final analyses were carried out using permutation-based statistics, which are appropriate for small samples. These analyses produce spatiotemporal maps of brain activity.

Results: Reading silently relative to reading aloud was associated with activity of the "where" auditory pathway (inferior parietal lobule and middle temporal gyrus) and delayed activity of the primary auditory cortex.

Conclusions: These pilot data suggest that internal-space localization of the inner voice depends on the same neural resources as external-space localization of external voices, namely the "where" auditory pathway. We discuss the implications of these findings for the possible mechanisms of abnormal experiences of the inner voice, as in verbal hallucinations.
17. Schäfer E, Vedoveli AE, Righetti G, Gamerdinger P, Knipper M, Tropitzsch A, Karnath HO, Braun C, Li Hegner Y. Activities of the Right Temporo-Parieto-Occipital Junction Reflect Spatial Hearing Ability in Cochlear Implant Users. Front Neurosci 2021;15:613101. [PMID: 33776632; PMCID: PMC7994335; DOI: 10.3389/fnins.2021.613101]
Abstract
Spatial hearing is critical not only for orienting ourselves in space but also for following a conversation with multiple speakers in a complex sound environment. The hearing of people with severe sensorineural hearing loss can be restored by cochlear implants (CIs), although with large outcome variability, and the causes of this variability remain incompletely understood. Despite the CI-based restoration of the peripheral auditory input, central auditory processing might still not function fully. Here we developed a multi-modal repetition suppression (MMRS) paradigm capable of capturing stimulus-property-specific processing, in order to identify the neural correlates of spatial hearing and potential central neural indices useful for the rehabilitation of sound localization in CI users. To this end, 17 normal-hearing and 13 CI participants underwent the MMRS task while their brain activity was recorded with 256-channel electroencephalography (EEG). The participants were required to discriminate the location of probe sounds presented from a horizontal array of loudspeakers. The EEG MMRS response following the probe sound was elicited at various brain regions and at different stages of processing. Interestingly, the more similar the differential MMRS response in the right temporo-parieto-occipital (TPO) junction of a CI user was to that of the normal-hearing group, the better was that user's spatial hearing performance. Based on this finding, we suggest that the differential MMRS response at the right TPO junction could serve as a central neural index of intact or impaired sound localization abilities.
18. Erhart M, Czoschke S, Fischer C, Bledowski C, Kaiser J. Decoding Spatial Versus Non-spatial Processing in Auditory Working Memory. Front Neurosci 2021;15:637877. [PMID: 33679316; PMCID: PMC7933450; DOI: 10.3389/fnins.2021.637877]
Abstract
Objective: Research on visual working memory has shown that individual stimulus features are processed in both specialized sensory regions and higher cortical areas. Much less evidence exists for auditory working memory, where a main distinction has been proposed between the processing of spatial and non-spatial sound features. Our aim was to examine feature-specific activation patterns in auditory working memory.

Methods: We collected fMRI data while 28 healthy adults performed an auditory delayed match-to-sample task. Stimuli were abstract sounds characterized by both spatial and non-spatial information, i.e., interaural time delay and central frequency, respectively. In separate recording blocks, subjects had to memorize either the spatial or the non-spatial feature, which had to be compared with a probe sound presented after a short delay. We performed both univariate and multivariate comparisons between spatial and non-spatial task blocks.

Results: Processing of spatial sound features elicited higher activity in a small cluster in the superior parietal lobe than did sound pattern processing, whereas there was no significant activation difference for the opposite contrast. The multivariate analysis used a whole-brain searchlight approach to identify feature-selective processing. The task-relevant auditory feature could be decoded from multiple brain regions, including the auditory cortex, posterior temporal cortex, middle occipital gyrus, and extended parietal and frontal regions.

Conclusion: The lack of large univariate activation differences between spatial and non-spatial processing could be attributable to the identical stimulation in both tasks. In contrast, the whole-brain multivariate analysis identified feature-specific activation patterns in widespread cortical regions, suggesting that areas beyond the auditory dorsal and ventral streams contribute to working memory processing of auditory stimulus features.
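A brute-force rendering of the whole-brain searchlight logic: for every voxel inside a brain mask, decode the task-relevant feature from the sphere of voxels around it and store the cross-validated accuracy. Production pipelines use optimized implementations; this sketch only illustrates the idea, and all names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def searchlight_map(data, labels, mask, radius=2, cv=5):
    """data: (n_trials, nx, ny, nz) pattern images; mask: boolean (nx, ny, nz).
    Returns an (nx, ny, nz) map of decoding accuracies."""
    offsets = np.argwhere(np.ones((2 * radius + 1,) * 3)) - radius
    offsets = offsets[(offsets ** 2).sum(axis=1) <= radius ** 2]  # sphere
    scores = np.zeros(mask.shape)
    for center in np.argwhere(mask):
        nb = center + offsets
        inside = ((nb >= 0) & (nb < mask.shape)).all(axis=1)
        nb = nb[inside]
        nb = nb[mask[nb[:, 0], nb[:, 1], nb[:, 2]]]   # stay within the mask
        X = data[:, nb[:, 0], nb[:, 1], nb[:, 2]]
        scores[tuple(center)] = cross_val_score(
            SVC(kernel="linear"), X, labels, cv=cv).mean()
    return scores
```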
19. Dalal TC, Muller AM, Stevenson RA. The Relationship Between Multisensory Temporal Processing and Schizotypal Traits. Multisens Res 2021;34:1-19. [PMID: 33706260; DOI: 10.1163/22134808-bja10044]
Abstract
Recent literature has suggested that deficits in sensory processing are associated with schizophrenia (SCZ), and more specifically hallucination severity. The DSM-5's shift towards a dimensional approach to diagnostic criteria has led to SCZ and schizotypal personality disorder (SPD) being classified as schizophrenia spectrum disorders. With SCZ and SPD overlapping in aetiology and symptomatology, such as sensory abnormalities, it is important to investigate whether these deficits commonly reported in SCZ extend to non-clinical expressions of SPD. In this study, we investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings. First, less precise multisensory temporal processing was related to higher overall levels of SPD symptomatology. Second, this relationship was specific to the cognitive-perceptual domain of SPD symptomatology, and more specifically, the Unusual Perceptual Experiences and Odd Beliefs or Magical Thinking symptomatology. The current study provides an initial look at the relationship between multisensory temporal processing and schizotypal traits. Additionally, it builds on the previous literature by suggesting that less precise multisensory temporal processing is not exclusive to SCZ but may also be related to non-clinical expressions of schizotypal traits in the general population.
20. Mohagheghian F, Khajehpour H, Samadzadehaghdam N, Eqlimi E, Jalilvand H, Makkiabadi B, Deevband MR. Altered effective brain network topology in tinnitus: An EEG source connectivity analysis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102331]
21. Latini F, Trevisi G, Fahlström M, Jemstedt M, Alberius Munkhammar Å, Zetterling M, Hesselager G, Ryttlefors M. New Insights Into the Anatomy, Connectivity and Clinical Implications of the Middle Longitudinal Fasciculus. Front Neuroanat 2021;14:610324. [PMID: 33584207; PMCID: PMC7878690; DOI: 10.3389/fnana.2020.610324]
Abstract
The middle longitudinal fascicle (MdLF) is a long associative white matter tract connecting the superior temporal gyrus (STG) with the parietal and occipital lobes. Previous studies show different cortical terminations and a possible segmentation pattern of the tract. In this study, we performed a post-mortem white matter dissection of 12 human hemispheres and in vivo deterministic fiber tracking of 24 subjects from the Human Connectome Project to establish whether a constant organization of fibers exists among the MdLF subcomponents and to acquire anatomical information on each subcomponent. Moreover, two clinical cases of brain tumors impinging on MdLF territories are reported to further discuss the anatomical results in light of previously published data on the functional involvement of this bundle. The main finding is that the MdLF is consistently organized into two layers: an antero-ventral segment (aMdLF) connecting the anterior STG (including the temporal pole and planum polare) and the extrastriate lateral occipital cortex, and a posterior-dorsal segment (pMdLF) connecting the posterior STG, anterior transverse temporal gyrus, and planum temporale with the superior parietal lobule and lateral occipital cortex. The anatomical connectivity pattern and quantitative differences between the MdLF subcomponents, along with the clinical cases reported in this paper, support a role of the MdLF in high-order functions related to acoustic information. We suggest that the pMdLF may contribute to the learning process associated with verbal-auditory stimuli, especially on the left side, while the aMdLF may play a role in processing and retrieving auditory information already consolidated within the temporal lobe.
Collapse
Affiliation(s)
- Francesco Latini
- Neurosurgical Unit, Department of Surgery, Ospedale Santo Spirito, Pescara, Italy
| | - Gianluca Trevisi
- Neurosurgical Unit, Department of Surgery, Ospedale Santo Spirito, Pescara, Italy
| | - Markus Fahlström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
| | - Malin Jemstedt
- Section of Speech-Language Pathology, Department of Neuroscience, Uppsala University, Uppsala, Sweden
| | | | - Maria Zetterling
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
| | - Göran Hesselager
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
| | - Mats Ryttlefors
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
| |
Collapse
|
22
|
Josef-Golubić S. Triple model of auditory sensory processing: a novel gating stream directly links primary auditory areas to executive prefrontal cortex. Acta Clin Croat 2020; 59:721-728. [PMID: 34285443 PMCID: PMC8253058 DOI: 10.20471/acc.2020.59.04.19] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Accepted: 10/09/2018] [Indexed: 11/24/2022] Open
Abstract
The generally accepted model of sensory processing of visual and auditory stimuli assumes two major parallel processing streams, ventral and dorsal, which comprise functionally and anatomically distinct but interacting processes: the ventral stream supports stimulus identification, while the dorsal stream subserves stimulus localization and sensori-motor integration. However, recent studies suggest the existence of a third, very fast sensory processing pathway, a gating stream that directly links the primary auditory cortices to the executive prefrontal cortex within the first 50 milliseconds after stimulus presentation, bypassing the hierarchical structure of the ventral and dorsal pathways. The gating stream propagates the sensory gating phenomenon, a basic protective mechanism that prevents irrelevant, repeated information from recurrent sensory processing. The goal of the present paper is to introduce a novel 'three-stream' model of auditory processing that adds this fast gating stream to the well-established dorsal and ventral sensory processing pathways. Impaired sensory processing along the gating stream has been strongly implicated in the pathophysiology of Alzheimer's disease and could underlie numerous neuropsychiatric disorders linked to pathological sensory gating inhibition, such as schizophrenia, post-traumatic stress disorder, bipolar disorder, and attention deficit hyperactivity disorder.
Collapse
Affiliation(s)
- Sanja Josef-Golubić
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
| |
Collapse
|
23
|
Vannson N, Strelnikov K, James CJ, Deguine O, Barone P, Marx M. Evidence of a functional reorganization in the auditory dorsal stream following unilateral hearing loss. Neuropsychologia 2020; 149:107683. [PMID: 33212140 DOI: 10.1016/j.neuropsychologia.2020.107683] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/16/2020] [Accepted: 11/08/2020] [Indexed: 12/11/2022]
Abstract
Unilateral hearing loss (UHL) disrupts binaural hearing mechanisms, impairing sound localization and speech understanding in noisy environments. We conducted a study combining fMRI and psychoacoustic assessments to investigate the relationships between the extent of cortical reorganization across the auditory areas in UHL patients, the severity of the hearing loss, and the deficit in binaural abilities. Twenty-eight volunteers (14 UHL patients; 22 females, 6 males) were recruited. The brain imaging analysis demonstrated that UHL induces a shift in aural dominance favoring the better ear, with cortical reorganization located in non-primary auditory areas ipsilateral (same side) to the better ear. This reorganization correlated not only with hearing loss severity but also with spatial localization abilities. A regression analysis between brain activity and patients' performance clearly showed that the spatial hearing deficit was linked to a functional alteration of the posterior auditory areas known to process spatial hearing. Altogether, our study reveals that UHL alters the dorsal auditory stream, which is deleterious to spatial hearing.
Collapse
Affiliation(s)
- Nicolas Vannson
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Cochlear France SAS, Toulouse, France.
| | | | | | - Olivier Deguine
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Service d'Otologie, Otoneurologie et ORL pédiatrique, Hôpital Pierre-Paul Riquet, CHU Toulouse Purpan, France
| | - Pascal Barone
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France
| | - Mathieu Marx
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Service d'Otologie, Otoneurologie et ORL pédiatrique, Hôpital Pierre-Paul Riquet, CHU Toulouse Purpan, France
| |
Collapse
|
24
|
Causal Inference in Audiovisual Perception. J Neurosci 2020; 40:6600-6612. [PMID: 32669354 DOI: 10.1523/jneurosci.0051-20.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 06/26/2020] [Accepted: 07/01/2020] [Indexed: 11/21/2022] Open
Abstract
In our natural environment the senses are continuously flooded with a myriad of signals. To form a coherent representation of the world, the brain needs to integrate sensory signals arising from a common cause and segregate signals coming from separate causes. An unresolved question is how the brain solves this binding or causal inference problem and determines the causal structure of the sensory signals. In this functional magnetic resonance imaging (fMRI) study, human observers (female and male) were presented with synchronous auditory and visual signals at the same location (i.e., common cause) or at different locations (i.e., separate causes). On each trial, observers decided whether the signals came from common or separate sources (i.e., "causal decisions"). To dissociate participants' causal inference from the spatial correspondence cues, we adjusted the audiovisual disparity of the signals individually for each participant to threshold accuracy. Multivariate fMRI pattern analysis revealed the lateral prefrontal cortex as the only region that predominantly encodes the outcome of observers' causal inference (i.e., common vs. separate causes). By contrast, the frontal eye field (FEF) and the intraparietal sulcus (IPS0-4) form a circuitry that concurrently encodes spatial (auditory and visual stimulus locations), decisional (causal inference), and motor response dimensions. These results suggest that the lateral prefrontal cortex plays a key role in inferring and making explicit decisions about the causal structure that generates sensory signals in our environment. By contrast, informed by observers' inferred causal structure, the FEF-IPS circuitry integrates auditory and visual spatial signals into representations that guide motor responses.

SIGNIFICANCE STATEMENT: In our natural environment, our senses are continuously flooded with a myriad of signals. Transforming this barrage of sensory signals into a coherent percept of the world relies inherently on solving the causal inference problem: deciding whether sensory signals arise from a common cause and should hence be integrated, or else be segregated. This functional magnetic resonance imaging study shows that the lateral prefrontal cortex plays a key role in inferring the causal structure of the environment. Crucially, informed by the spatial correspondence cues and the inferred causal structure, the frontal eye field and the intraparietal sulcus form a circuitry that integrates auditory and visual spatial signals into representations that guide motor responses.
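As a concrete illustration of the causal inference problem described above, the sketch below computes the posterior probability that an auditory and a visual cue share a common cause under the standard Gaussian observer model. All parameters (sensory noise widths, spatial prior, integration grid) are illustrative assumptions, not values from this study.

```python
import numpy as np

def p_common(x_a, x_v, sigma_a=2.0, sigma_v=1.0, sigma_p=15.0, prior_c1=0.5):
    """Posterior probability that auditory and visual cues share one cause."""
    s = np.linspace(-60, 60, 2001)                 # candidate source azimuths (deg)
    ds = s[1] - s[0]
    prior_s = np.exp(-0.5 * (s / sigma_p) ** 2) / (sigma_p * np.sqrt(2 * np.pi))
    lik_a = np.exp(-0.5 * ((x_a - s) / sigma_a) ** 2) / (sigma_a * np.sqrt(2 * np.pi))
    lik_v = np.exp(-0.5 * ((x_v - s) / sigma_v) ** 2) / (sigma_v * np.sqrt(2 * np.pi))
    # C = 1: both cues generated by the same source s, marginalized over s
    like_c1 = np.sum(lik_a * lik_v * prior_s) * ds
    # C = 2: independent sources, each cue marginalized separately
    like_c2 = np.sum(lik_a * prior_s) * ds * np.sum(lik_v * prior_s) * ds
    return like_c1 * prior_c1 / (like_c1 * prior_c1 + like_c2 * (1 - prior_c1))

print(p_common(5.0, 4.0))     # nearby cues: high probability of a common cause
print(p_common(-20.0, 20.0))  # discrepant cues: low probability
```

Adjusting the disparity between x_a and x_v until p_common hovers near 0.5 mirrors, in spirit, the per-participant thresholding procedure described in the abstract.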
Collapse
|
25
|
Rau PLP, Zheng J, Wang L, Zhao J, Wang D. Haptic and Auditory-Haptic Attentional Blink in Spatial and Object-Based Tasks. Multisens Res 2020; 33:295-312. [PMID: 31883506 DOI: 10.1163/22134808-20191483] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2019] [Accepted: 10/14/2019] [Indexed: 11/19/2022]
Abstract
Dual-task performance depends on the modalities involved (e.g., vision, audition, haptics), the task types (spatial or object-based), and the order in which different task types are organized. Previous studies on haptic, and especially auditory-haptic, attentional blink (AB) are scarce, and the effects of task types and their order have not been fully explored. In this study, 96 participants, divided into four groups of task-type combinations, identified an auditory or haptic Target 1 (T1) and a haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in the spatial, object-based, and object-spatial tasks, but not in the spatial-object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, whereas a similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if it exists). Our findings fill a gap in the literature regarding the auditory-haptic AB and substantiate the importance of modalities, task types and their order, and the interactions between them. These findings are explained by how the cerebral cortex is organized for processing spatial and object-based information in different modalities.
Collapse
Affiliation(s)
| | - Jian Zheng
- Department of Industrial Engineering, Tsinghua University, Beijing, China
| | - Lijun Wang
- State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China
| | - Jingyu Zhao
- Department of Industrial Engineering, Tsinghua University, Beijing, China
| | - Dangxiao Wang
- State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China; Peng Cheng Laboratory (PCL), Shenzhen, Guangdong Province, China
| |
Collapse
|
26
|
Stewart HJ, Shen D, Sham N, Alain C. Involuntary Orienting and Conflict Resolution during Auditory Attention: The Role of Ventral and Dorsal Streams. J Cogn Neurosci 2020; 32:1851-1863. [PMID: 32573378 DOI: 10.1162/jocn_a_01594] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Selective attention to sound object features such as pitch and location is associated with enhanced brain activity in ventral and dorsal streams, respectively. We examined the role of these pathways in involuntary orienting and conflict resolution using fMRI. Participants were presented with two tones that may, or may not, share the same nonspatial (frequency) or spatial (location) auditory features. In separate blocks of trials, participants were asked to attend to sound frequency or sound location and ignore the change in the task-irrelevant feature. In both attend-frequency and attend-location tasks, RTs were slower when the task-irrelevant feature changed than when it stayed the same (involuntary orienting). This behavioral cost coincided with enhanced activity in the pFC and superior temporal gyrus. Conflict resolution was examined by comparing situations where the change in stimulus features was congruent (both features changed) and incongruent (only one feature changed). Participants were slower and less accurate for incongruent than congruent sound features. This congruency effect was associated with enhanced activity in the pFC and was greater in the right superior temporal gyrus and medial frontal cortex during the attend-location task than during the attend-frequency task. Together, these findings do not support a strict division of "labor" into ventral and dorsal streams but rather suggest interactions between these pathways in situations involving changes in task-irrelevant sound feature and conflict resolution. These findings also validate the Test of Attention in Listening task by revealing distinct neural correlates for involuntary orienting and conflict resolution.
Collapse
Affiliation(s)
- Hannah J Stewart
- Baycrest Centre, Toronto, Ontario, Canada; University College London; Cincinnati Children's Hospital Medical Center
| | - Dawei Shen
- Baycrest Centre, Toronto, Ontario, Canada
| | - Nasim Sham
- Baycrest Centre, Toronto, Ontario, Canada
| | - Claude Alain
- Baycrest Centre, Toronto, Ontario, Canada; University of Toronto
| |
Collapse
|
27
|
What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020; 413:116805. [PMID: 32259708 DOI: 10.1016/j.jns.2020.116805] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Revised: 03/14/2020] [Accepted: 03/24/2020] [Indexed: 11/24/2022]
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of this dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions lying within either the ventral or dorsal auditory streams of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both the sighted and EB groups, where the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. These findings indicate that visual experience may not influence this functional dissociation and that the human cortex may be organized according to task-specific, modality-independent principles. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating the existence of interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings delineate the distinct cortical networks that abstractly encode sound identification and localization, and confirm the existence of interaction between them from a multivariate perspective. Furthermore, the results suggest that visual experience might not impact the functional specialization of auditory regions.
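For readers unfamiliar with the method, the following is a minimal sketch of the RSA logic used in this study: build a neural representational dissimilarity matrix (RDM) from condition-wise response patterns of a region and rank-correlate it with a model RDM. The pattern matrix, condition labels, and two-cluster model below are placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.random((8, 500))              # conditions x voxels (stand-in betas)

rdm = pdist(patterns, metric="correlation")  # neural RDM: 1 - r per condition pair

# model RDM: conditions 0-3 (e.g., identity trials) vs 4-7 (e.g., location trials)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_rdm = pdist(labels[:, None], metric="cityblock")

rho, p = spearmanr(rdm, model_rdm)           # rank-correlate neural and model RDMs
print(f"model fit: rho={rho:.2f}, p={p:.3f}")
```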
Collapse
|
28
|
Notter MP, Hanke M, Murray MM, Geiser E. Encoding of Auditory Temporal Gestalt in the Human Brain. Cereb Cortex 2020; 29:475-484. [PMID: 29365070 DOI: 10.1093/cercor/bhx328] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Indexed: 12/16/2022] Open
Abstract
The perception of an acoustic rhythm is invariant to the absolute temporal intervals constituting a sound sequence. It is unknown where in the brain temporal Gestalt, the percept emerging from the relative temporal proximity between acoustic events, is encoded. Two different relative temporal patterns, each induced by three experimental conditions with different absolute temporal patterns as sensory basis, were presented to participants. A linear support vector machine classifier was trained to differentiate activation patterns in functional magnetic resonance imaging data to the two different percepts. Across the sensory constituents the classifier decoded which percept was perceived. A searchlight analysis localized activation patterns specific to the temporal Gestalt bilaterally to the temporoparietal junction, including the planum temporale and supramarginal gyrus, and unilaterally to the right inferior frontal gyrus (pars opercularis). We show that auditory areas not only process absolute temporal intervals, but also integrate them into percepts of Gestalt and that encoding of these percepts persists in high-level associative areas. The findings complement existing knowledge regarding the processing of absolute temporal patterns to the processing of relative temporal patterns relevant to the sequential binding of perceptual elements into Gestalt.
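A minimal sketch of the decoding strategy described above, under the assumption of trial-wise fMRI patterns, percept labels, and a grouping variable coding which absolute temporal variant produced each trial. Holding out one variant at test time is what licenses the claim that the classifier tracks the percept rather than its sensory constituents; the data below are random stand-ins.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 300))   # trials x voxels (stand-in patterns)
y = np.tile([0, 1], 60)               # percept label (temporal Gestalt A or B)
groups = np.repeat([0, 1, 2], 40)     # absolute temporal variant of each trial

# train on two sensory variants, test on the left-out one
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())                  # ~0.5 here; above chance with real data
```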
Collapse
Affiliation(s)
- Michael P Notter
- Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Michael Hanke
- Institute of Psychology, Otto-von-Guericke-University; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Micah M Murray
- Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Ophthalmology Department, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
| | - Eveline Geiser
- Department of Radiology; Neuropsychology and Neurorehabilitation Service; McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
29
|
Deng Y, Choi I, Shinn-Cunningham B. Topographic specificity of alpha power during auditory spatial attention. Neuroimage 2020; 207:116360. [PMID: 31760150 PMCID: PMC9883080 DOI: 10.1016/j.neuroimage.2019.116360] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 10/06/2019] [Accepted: 11/13/2019] [Indexed: 01/31/2023] Open
Abstract
Visual and somatosensory spatial attention both induce parietal alpha (8-14 Hz) oscillations whose topographical distribution depends on the direction of spatial attentional focus. In the auditory domain, contrasts of parietal alpha power for leftward versus rightward attention reveal qualitatively similar lateralization; however, it is not clear whether alpha lateralization changes monotonically with the direction of auditory attention as it does for visual spatial attention. In addition, most previous studies of alpha oscillations did not consider individual differences in alpha frequency, but simply analyzed power in a fixed spectral band. Here, we recorded electroencephalography in human subjects while they directed attention to one of five azimuthal locations. After a cue indicating the direction of an upcoming target sequence of spoken syllables (yet before the target began), alpha power changed in a task-specific manner. Individual peak alpha frequencies differed consistently between central electrodes and parieto-occipital electrodes, suggesting multiple neural generators of task-related alpha. Parieto-occipital alpha increased over the hemisphere ipsilateral to the attentional focus relative to the contralateral hemisphere, and changed systematically as the direction of attention shifted from far left to far right. These results, showing that parietal alpha lateralization changes smoothly with the direction of auditory attention as in visual spatial attention, provide further support for the growing evidence that the frontoparietal attention network is supramodal.
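To make the lateralization measure concrete, here is a toy computation of parieto-occipital alpha power per hemisphere and a lateralization index, using a fixed 8-14 Hz band (note the study argues for individual peak alpha frequencies, which this sketch ignores); the channels, sampling rate, and data are placeholders.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2, fs * 10))   # [left PO channel, right PO channel]

f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (f >= 8) & (f <= 14)
left_pow, right_pow = pxx[:, alpha].mean(axis=1)

# index > 0 when alpha is higher over the right hemisphere, i.e., ipsilateral
# to rightward attention under the pattern reported above
ali = (right_pow - left_pow) / (right_pow + left_pow)
print(ali)
```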
Collapse
Affiliation(s)
- Yuqi Deng
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
| | - Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
| | - Barbara Shinn-Cunningham
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA. Corresponding author: Baker Hall 254G, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA (B. Shinn-Cunningham)
| |
Collapse
|
30
|
Joint Representation of Spatial and Phonetic Features in the Human Core Auditory Cortex. Cell Rep 2020; 24:2051-2062.e2. [PMID: 30134167 DOI: 10.1016/j.celrep.2018.07.076] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Revised: 04/09/2018] [Accepted: 07/22/2018] [Indexed: 12/12/2022] Open
Abstract
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and the spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found local and joint selectivity to spatial and spectrotemporal speech features, where the spatial and spectrotemporal features are organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.
Collapse
|
31
|
Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. [PMID: 32038212 PMCID: PMC6987265 DOI: 10.3389/fncom.2019.00095] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Accepted: 12/23/2019] [Indexed: 12/14/2022] Open
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in their spectral and temporal response properties. First, we simulated the responses to amplitude-modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) to a rate code when moving from low to high modulation rates. Simulated neural responses in an amplitude modulation detection task suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to depend on the carrier frequency. Second, we simulated the responses to complex tones with missing-fundamental stimuli and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded with high spectral precision those aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
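For orientation, this is a bare-bones two-population Wilson and Cowan firing-rate simulation driven by an amplitude-modulated input, of the general kind the model above builds on; the coupling weights, time constants, and sigmoid parameters are generic textbook choices, not the fitted values of the four-area model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sigmoid(x, a=1.0, theta=4.0):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(t, y, drive):
    e, i = y  # excitatory and inhibitory population activities
    de = (-e + sigmoid(16 * e - 12 * i + drive(t))) / 0.010   # tau_e = 10 ms
    di = (-i + sigmoid(15 * e - 3 * i)) / 0.020               # tau_i = 20 ms
    return [de, di]

am_drive = lambda t: 2.0 * (1 + np.sin(2 * np.pi * 8 * t))    # 8 Hz AM input
sol = solve_ivp(wilson_cowan, (0.0, 1.0), [0.1, 0.1],
                args=(am_drive,), max_step=1e-3)
print(sol.y[0].max())  # peak excitatory response; synchronization to the AM
                       # rate can be read off the time course sol.y[0]
```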
Collapse
Affiliation(s)
- Isma Zulfiqar
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
| | - Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
| | - Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
| |
Collapse
|
32
|
Neural correlates of perceptual switching while listening to bistable auditory streaming stimuli. Neuroimage 2020; 204:116220. [DOI: 10.1016/j.neuroimage.2019.116220] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 08/19/2019] [Accepted: 09/19/2019] [Indexed: 11/15/2022] Open
|
33
|
Phenomenology of Voice-Hearing in Psychosis Spectrum Disorders: a Review of Neural Mechanisms. Curr Behav Neurosci Rep 2019. [DOI: 10.1007/s40473-019-00196-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
34
|
Baker CM, Burks JD, Briggs RG, Conner AK, Glenn CA, Morgan JP, Stafford J, Sali G, McCoy TM, Battiste JD, O'Donoghue DL, Sughrue ME. A Connectomic Atlas of the Human Cerebrum-Chapter 2: The Lateral Frontal Lobe. Oper Neurosurg (Hagerstown) 2019; 15:S10-S74. [PMID: 30260426 DOI: 10.1093/ons/opy254] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 09/18/2018] [Indexed: 11/14/2022] Open
Abstract
In this supplement, we show a comprehensive anatomic atlas of the human cerebrum demonstrating all 180 distinct regions comprising the cerebral cortex. The location, functional connectivity, and structural connectivity of these regions are outlined, and where possible a discussion is included of the functional significance of these areas. In part 2, we specifically address regions relevant to the lateral frontal lobe.
Collapse
Affiliation(s)
- Cordell M Baker
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Joshua D Burks
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Robert G Briggs
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Andrew K Conner
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Chad A Glenn
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Jake P Morgan
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Jordan Stafford
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Goksel Sali
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Tressie M McCoy
- Department of Physical Therapy, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - James D Battiste
- Department of Neurology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Daniel L O'Donoghue
- Department of Cell Biology, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
| | - Michael E Sughrue
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma; Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, Australia
| |
Collapse
|
35
|
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings we are often presented with a mixture of multiple competing sounds, and we must focus our attention on the relevant source in order to segregate it from the competing sources, i.e., the 'cocktail party effect'. While many studies have examined this phenomenon in the context of cortical tracking of the sound envelope, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli moving within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as the unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. In a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli in the delta phase of the EEG. In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to envelope-based decoders. These results suggest a possible dissociation of delta phase and alpha power of the EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
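A bare-bones sketch of the backward (stimulus-reconstruction) mapping described above: time-lagged EEG channels are regressed onto the source trajectory with ridge regression and evaluated on held-out data. The sampling rate, lag span, regularization strength, and signals are illustrative assumptions, not the authors' settings (their analysis also separates delta phase and alpha power, which this sketch does not).

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged(eeg, max_lag):
    """Design matrix of post-stimulus EEG lags (the response follows the source)."""
    n_t, n_ch = eeg.shape
    cols = [np.roll(eeg[:, c], -lag) for lag in range(max_lag) for c in range(n_ch)]
    X = np.stack(cols, axis=1)
    X[-max_lag:] = 0                          # zero out wrapped-around samples
    return X

fs = 64
rng = np.random.default_rng(0)
eeg = rng.standard_normal((fs * 60, 32))      # one minute of 32-channel EEG
azimuth = rng.standard_normal(fs * 60)        # attended trajectory (stand-in)

X = lagged(eeg, max_lag=fs // 2)              # lags spanning 0-500 ms
model = Ridge(alpha=1e3).fit(X[:fs * 40], azimuth[:fs * 40])
recon = model.predict(X[fs * 40:])            # reconstruct the unseen segment
print(np.corrcoef(recon, azimuth[fs * 40:])[0, 1])
```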
Collapse
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
| | - Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA.
| |
Collapse
|
36
|
Salvari V, Paraskevopoulos E, Chalas N, Müller K, Wollbrink A, Dobel C, Korth D, Pantev C. Auditory Categorization of Man-Made Sounds Versus Natural Sounds by Means of MEG Functional Brain Connectivity. Front Neurosci 2019; 13:1052. [PMID: 31636532 PMCID: PMC6787283 DOI: 10.3389/fnins.2019.01052] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Accepted: 09/19/2019] [Indexed: 01/27/2023] Open
Abstract
Previous neuroimaging studies have shown that sounds can be discriminated according to living-related or man-made-related characteristics and that this discrimination involves different brain regions. However, these studies have mainly provided source space analyses, which offer simple maps of activated brain regions but do not explain how regions of a distributed system are functionally organized under a specific task. In the present study, we aimed to further examine the functional connectivity of the auditory processing pathway across different categories of non-speech sounds in healthy adults by means of MEG. Our analyses demonstrated significant activation and interconnection differences between living and man-made object sounds in the prefrontal areas, anterior superior temporal gyrus (aSTG), posterior cingulate cortex (PCC), and supramarginal gyrus (SMG), occurring within the 80–120 ms post-stimulus interval. The current findings replicate previous ones in showing that regions beyond the auditory cortex are involved in auditory processing. According to the functional connectivity analysis, differential brain networks exist across the categories, suggesting that sound-category discrimination relies on distinct cortical networks, a notion that has also been strongly argued in the literature in relation to the visual system.
Collapse
Affiliation(s)
- Vasiliki Salvari
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| | - Evangelos Paraskevopoulos
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany; School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Nikolas Chalas
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Kilian Müller
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| | - Andreas Wollbrink
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| | - Christian Dobel
- Department of Otorhinolaryngology, Friedrich-Schiller University of Jena, Jena, Germany
| | - Daniela Korth
- Department of Otorhinolaryngology, Friedrich-Schiller University of Jena, Jena, Germany
| | - Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| |
Collapse
|
37
|
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
Collapse
|
38
|
Hanenberg C, Getzmann S, Lewald J. Transcranial direct current stimulation of posterior temporal cortex modulates electrophysiological correlates of auditory selective spatial attention in posterior parietal cortex. Neuropsychologia 2019; 131:160-170. [PMID: 31145907 DOI: 10.1016/j.neuropsychologia.2019.05.023] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2018] [Revised: 05/03/2019] [Accepted: 05/25/2019] [Indexed: 01/12/2023]
Abstract
Speech perception in "cocktail-party" situations, in which a sound source of interest has to be extracted from multiple irrelevant sounds, poses a remarkable challenge to the human auditory system. Studies on structural and electrophysiological correlates of auditory selective spatial attention have revealed critical roles of the posterior temporal cortex and of the N2 event-related potential (ERP) component in the underlying processes. Here, we explored the effects of transcranial direct current stimulation (tDCS) over posterior temporal cortex on neurophysiological correlates of auditory selective spatial attention, with a specific focus on the N2. In a single-blind, sham-controlled crossover design with baseline and follow-up measurements, monopolar anodal and cathodal tDCS was applied for 16 min to the right posterior superior temporal cortex. Two age groups of human subjects, a younger (n = 20; age 18-30 yrs) and an older group (n = 19; age 66-77 yrs), completed an auditory free-field multiple-speakers localization task while ERPs were recorded. The ERP data showed an offline effect of anodal, but not cathodal, tDCS immediately after DC offset for targets contralateral, but not ipsilateral, to the hemisphere of tDCS, without differences between groups. This effect mainly consisted of a substantial increase of the N2 amplitude by 0.9 μV (SE 0.4 μV; d = 0.40) compared with sham tDCS. At the same point in time, cortical source localization revealed a reduction of activity in the ipsilateral (right) posterior parietal cortex. Also, localization error was improved after anodal, but not cathodal, tDCS. Given that both the N2 and the posterior parietal cortex are involved in processes of auditory selective spatial attention, these results suggest that anodal tDCS specifically enhanced inhibitory attentional brain processes underlying the focusing on a target sound source, possibly through improved suppression of irrelevant distracters.
Collapse
Affiliation(s)
- Christina Hanenberg
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
| | - Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
| | - Jörg Lewald
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany.
| |
Collapse
|
39
|
Lemaitre G, Pyles JA, Halpern AR, Navolio N, Lehet M, Heller LM. Who's that Knocking at My Door? Neural Bases of Sound Source Identification. Cereb Cortex 2019; 28:805-818. [PMID: 28052922 DOI: 10.1093/cercor/bhw397] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Accepted: 12/14/2016] [Indexed: 11/13/2022] Open
Abstract
When hearing knocking on a door, a listener typically identifies both the action (forceful and repeated impacts) and the object (a thick wooden board) causing the sound. The current work studied the neural bases of sound source identification by switching listeners' attention toward these different aspects of a set of simple sounds during functional magnetic resonance imaging scanning: participants either discriminated the action or the material that caused the sounds, or they simply discriminated meaningless scrambled versions of them. Overall, discriminating action and material elicited neural activity in a left-lateralized frontoparietal network found in other studies of sound identification, wherein the inferior frontal sulcus and the ventral premotor cortex were under the control of selective attention and sensitive to task demand. More strikingly, discriminating materials elicited increased activity in cortical regions connecting auditory inputs to semantic, motor, and even visual representations, whereas discriminating actions did not increase activity in any region. These results indicate that discriminating and identifying material requires deeper processing of the stimuli than discriminating actions, and they are consistent with previous studies suggesting that auditory perception is better suited to comprehending the actions than the objects that produce sounds in the listener's environment.
Collapse
Affiliation(s)
- Guillaume Lemaitre
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - John A Pyles
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Andrea R Halpern
- Bucknell University, Department of Psychology, Lewisburg 17837, PA, USA
| | - Nicole Navolio
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Matthew Lehet
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| | - Laurie M Heller
- Carnegie Mellon University, Department of Psychology and Center for Neural Basis of Cognition, Pittsburgh, PA 15213, USA
| |
Collapse
|
40
|
Bihemispheric anodal transcranial direct-current stimulation over temporal cortex enhances auditory selective spatial attention. Exp Brain Res 2019; 237:1539-1549. [PMID: 30927041 DOI: 10.1007/s00221-019-05525-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2019] [Accepted: 03/20/2019] [Indexed: 10/27/2022]
Abstract
The capacity to selectively focus on a particular speaker of interest in a complex acoustic environment with multiple persons speaking simultaneously (a so-called "cocktail-party" situation) is of decisive importance for human verbal communication. Here, the efficacy of single-dose transcranial direct-current stimulation (tDCS) in improving this ability was tested in young healthy adults (n = 24), using a spatial task that required the localization of a target word in a simulated "cocktail-party" situation. In a sham-controlled crossover design, offline bihemispheric double-monopolar anodal tDCS was applied for 30 min at 1 mA over auditory regions of the temporal lobe, and participants' performance was assessed prior to tDCS, immediately after tDCS, and 1 h after tDCS. A significant increase in the proportion of correct localizations, by on average 3.7 percentage points (d = 1.04), was found after active relative to sham tDCS, with only a nonsignificant reduction of the effect within 1 h after tDCS offset. Thus, bihemispheric tDCS could be a promising tool for enhancing human auditory attentional functions that are relevant for spatial orientation and communication in everyday life.
Collapse
|
41
|
Tissieres I, Crottaz-Herbette S, Clarke S. Implicit representation of the auditory space: contribution of the left and right hemispheres. Brain Struct Funct 2019; 224:1569-1582. [PMID: 30848352 DOI: 10.1007/s00429-019-01853-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2018] [Accepted: 02/25/2019] [Indexed: 11/24/2022]
Abstract
Spatial cues contribute to the ability to segregate sound sources and thus facilitate their detection and recognition. This implicit use of spatial cues can be preserved in cases of cortical spatial deafness, suggesting that partially distinct neural networks underlie explicit sound localization and the implicit use of spatial cues. We addressed this issue by assessing 40 patients, 20 with left and 20 with right hemispheric damage, for their ability to use auditory spatial cues implicitly in a paradigm of spatial release from masking (SRM) and explicitly in sound localization. The anatomical correlates of their performance were determined with voxel-based lesion-symptom mapping (VLSM). During the SRM task, the target was always presented at the centre, whereas the masker was presented at the centre or at one of two lateral positions on the right or left side. The SRM effect was absent in some but not all patients; the inability to perceive the target when the masker was at one of the lateral positions correlated with lesions of the left temporo-parieto-frontal cortex or of the right inferior parietal lobule and the underlying white matter. As previously reported, sound localization depended critically on the right parietal and opercular cortex. Thus, explicit and implicit use of spatial cues depends on at least partially distinct neural networks. Our results suggest that the implicit use may rely on the left-dominant position-linked representation of sound objects, which has been demonstrated in previous EEG and fMRI studies.
Collapse
Affiliation(s)
- Isabel Tissieres
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
| | - Sonia Crottaz-Herbette
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
| | - Stephanie Clarke
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland.
| |
Collapse
|
42
|
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. [PMID: 30651333 DOI: 10.1523/jneurosci.2289-18.2018] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 12/20/2018] [Accepted: 12/21/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill to efficiently interact with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis of motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were, however, significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

SIGNIFICANCE STATEMENT: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study, therefore, sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
Collapse
|
43
|
Modular reconfiguration of an auditory control brain network supports adaptive listening behavior. Proc Natl Acad Sci U S A 2018; 116:660-669. [PMID: 30587584 PMCID: PMC6329957 DOI: 10.1073/pnas.1815321116] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
How do brain networks shape our listening behavior? We here develop and test the hypothesis that, during challenging listening situations, intrinsic brain networks are reconfigured to adapt to the listening demands and thus, to enable successful listening. We find that, relative to a task-free resting state, networks of the listening brain show higher segregation of temporal auditory, ventral attention, and frontal control regions known to be involved in speech processing, sound localization, and effortful listening. Importantly, the relative change in modularity of this auditory control network predicts individuals’ listening success. Our findings shed light on how cortical communication dynamics tune selection and comprehension of speech in challenging listening situations and suggest modularity as the network principle of auditory attention.

Speech comprehension in noisy, multitalker situations poses a challenge. Successful behavioral adaptation to a listening challenge often requires stronger engagement of auditory spatial attention and context-dependent semantic predictions. Human listeners differ substantially in the degree to which they adapt behaviorally and can listen successfully under such circumstances. How cortical networks embody this adaptation, particularly at the individual level, is currently unknown. We here explain this adaptation from reconfiguration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner paradigm with concurrent speech) in an age-varying sample of n = 49 healthy adults undergoing resting-state and task fMRI. We here provide evidence for the hypothesis that more successful listeners exhibit stronger task-specific reconfiguration (hence, better adaptation) of brain networks. From rest to task, brain networks become reconfigured toward more localized cortical processing characterized by higher topological segregation. This reconfiguration is dominated by the functional division of an auditory and a cingulo-opercular module and the emergence of a conjoined auditory and ventral attention module along bilateral middle and posterior temporal cortices. Supporting our hypothesis, the degree to which modularity of this frontotemporal auditory control network is increased relative to resting state predicts individuals’ listening success in states of divided and selective attention. Our findings elucidate how fine-tuned cortical communication dynamics shape selection and comprehension of speech. Our results highlight modularity of the auditory control network as a key organizational principle in cortical implementation of auditory spatial attention in challenging listening situations.
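To illustrate the network measure this account rests on, the sketch below partitions a stand-in weighted ROI connectivity matrix into modules and computes the modularity Q with networkx; in the study, the analogous quantity is estimated per person and compared between resting state and task.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(1)
conn = np.abs(rng.standard_normal((20, 20)))  # stand-in ROI connectivity weights
conn = (conn + conn.T) / 2                    # symmetrize
np.fill_diagonal(conn, 0)

G = nx.from_numpy_array(conn)                 # weighted graph over ROIs
modules = community.greedy_modularity_communities(G, weight="weight")
q = community.modularity(G, modules, weight="weight")
print(f"modularity Q = {q:.3f} over {len(modules)} modules")
```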
Collapse
|
44
|
Zhang J, Zhang G, Li X, Wang P, Wang B, Liu B. Decoding sound categories based on whole-brain functional connectivity patterns. Brain Imaging Behav 2018; 14:100-109. [PMID: 30361945 DOI: 10.1007/s11682-018-9976-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Sound decoding is important for patients with sensory loss, such as the blind. Previous studies on sound categorization estimated brain activity using univariate analysis or voxel-wise multivariate decoding methods and suggested that some regions are sensitive to auditory categories. It has been proposed that feedback connections between brain areas may facilitate auditory object selection. It is therefore important to explore whether functional connectivity among regions can be used to decode sound category. In this study, we constructed whole-brain functional connectivity patterns while subjects perceived four different sound categories and combined them with multivariate pattern classification analysis for sound decoding. The category-discriminative networks and regions were determined based on the weight maps. Results showed that high accuracy in multi-category classification was obtained based on the whole-brain functional connectivity patterns, and the results were verified across different preprocessing parameters. Inspection of the category-discriminative functional networks showed that contributive connections spanned the left and right hemispheres and ranged from primary regions to high-level cognitive regions, providing new evidence for the distributed representation of auditory objects. Further analysis of brain regions in the discriminative networks showed that the superior temporal gyrus and Heschl's gyrus contributed significantly to discriminating sound categories. Together, the findings reveal that the functional connectivity-based multivariate classification method provides rich information for auditory category decoding. The successful decoding results implicate the interactive properties of the distributed brain areas in auditory sound representation.
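A compact sketch of the feature construction implied above: each trial's whole-brain functional connectivity pattern, the vectorized upper triangle of an ROI-by-ROI correlation matrix, is fed to a linear classifier. Trial counts, ROI count, and data are placeholders; with real data one would also inspect the classifier weights to recover the discriminative connections.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def connectivity_features(bold):
    """Vectorize the upper triangle of an ROI-ROI correlation matrix."""
    corr = np.corrcoef(bold.T)                # bold: time points x ROIs
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(2)
n_trials, n_time, n_roi = 80, 100, 30
X = np.array([connectivity_features(rng.standard_normal((n_time, n_roi)))
              for _ in range(n_trials)])
y = np.repeat([0, 1, 2, 3], 20)               # four sound categories

print(cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean())
```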
Collapse
Affiliation(s)
- Jinliang Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
| | - Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
| | - Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
| | - Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
| | - Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
| | - Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, People's Republic of China
| |
Collapse
|
45
|
Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex. J Neurosci 2018; 38:8574-8587. [PMID: 30126968 DOI: 10.1523/jneurosci.0587-18.2018] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 07/09/2018] [Accepted: 07/19/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.

SIGNIFICANCE STATEMENT According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
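The key comparison, whether azimuth can be read out more accurately from voxel patterns during the "where" task than the "what" task, can be sketched as below. The classifier, azimuth bins, and data shapes are illustrative assumptions, not the paper's exact population pattern decoder:

```python
# A minimal sketch of the decoding logic: estimate sound azimuth from voxel
# activity patterns and compare cross-validated error between tasks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def azimuth_decoding_error(patterns, azimuths):
    """Mean absolute error (degrees) of cross-validated azimuth predictions."""
    clf = LogisticRegression(max_iter=2000)
    pred = cross_val_predict(clf, patterns, azimuths, cv=5)
    return np.abs(pred - azimuths).mean()

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 200
azimuths = rng.choice([-60, -20, 20, 60], size=n_trials)  # assumed azimuth bins

where_patterns = rng.standard_normal((n_trials, n_voxels))  # localization task
what_patterns = rng.standard_normal((n_trials, n_voxels))   # identification task

err_where = azimuth_decoding_error(where_patterns, azimuths)
err_what = azimuth_decoding_error(what_patterns, azimuths)
print(f"decoding error: where-task {err_where:.1f} deg, what-task {err_what:.1f} deg")
# Sharper spatial tuning during active localization should show up here as a
# lower decoding error for where-task patterns in primary auditory cortex.
```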
Collapse
|
46
|
Theta-burst stimulation causally affects side perception in the Deutsch's octave illusion. Sci Rep 2018; 8:12844. [PMID: 30150659 PMCID: PMC6110737 DOI: 10.1038/s41598-018-31248-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Accepted: 08/07/2018] [Indexed: 11/16/2022] Open
Abstract
Deutsch’s octave illusion is produced by a sequence of two specular dichotic stimuli presented in alternation to the left and right ear, causing an illusory segregation of pitch (frequency) and side (ear of origin). Previous studies have indicated that illusory perception of pitch takes place in temporo-frontal areas, whereas illusory perception of side is primarily associated with neural activity in parietal cortex, in particular the inferior parietal lobule (IPL). Here we investigated the causal role of the left IPL in the perception of side (ear of origin) during the octave illusion by inhibiting it with continuous theta-burst stimulation (cTBS), using the left posterior intraparietal sulcus (pIPS), whose activity is thought to be unrelated to side perception during the illusion, as a control site. We observed a prolonged modification in the side of the illusory perceived tone during the first 10 minutes following stimulation. Specifically, whereas after cTBS over the left pIPS subjects reported perceiving the last tone more often at the right than at the left ear, cTBS over the left IPL significantly reversed this distribution: the number of last tones perceived at the right ear was smaller than at the left ear. This alteration was not maintained in the subsequent 10 minutes. These results provide the first evidence of a causal involvement of the left IPL in the perception of side during the octave illusion.
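For intuition about the "specular dichotic stimuli" driving the illusion, here is a hypothetical sketch of the classic stimulus: 400 Hz and 800 Hz tones alternate between ears in counterphase, so each ear hears low-high-low-high while the opposite ear hears high-low-high-low. Frequencies and the 250 ms tone duration follow the classic paradigm; the exact parameters of this study are not taken from the abstract:

```python
# Generate a stereo octave-illusion sequence as a NumPy array (illustrative).
import numpy as np

FS = 44100            # sample rate (Hz)
TONE_DUR = 0.25       # 250 ms per tone, the classic value
LOW, HIGH = 400.0, 800.0

def tone(freq, dur=TONE_DUR, fs=FS):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def octave_illusion(n_pairs=20):
    """Return a (samples, 2) stereo array: columns are left and right ear."""
    left, right = [], []
    for i in range(n_pairs):
        if i % 2 == 0:            # left gets low while right gets high...
            left.append(tone(LOW)); right.append(tone(HIGH))
        else:                     # ...then the ears swap (specular stimuli)
            left.append(tone(HIGH)); right.append(tone(LOW))
    return np.column_stack([np.concatenate(left), np.concatenate(right)])

stereo = octave_illusion()
print(stereo.shape)   # (220500, 2) for 20 tone pairs at 250 ms each
```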
Collapse
|
47
|
Hausfeld L, Riecke L, Formisano E. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex. Neuroimage 2018. [DOI: 10.1016/j.neuroimage.2018.02.065] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
|
48
|
Alain C, Khatamian Y, He Y, Lee Y, Moreno S, Leung AWS, Bialystok E. Different neural activities support auditory working memory in musicians and bilinguals. Ann N Y Acad Sci 2018; 1423:435-446. [PMID: 29771462 DOI: 10.1111/nyas.13717] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Revised: 03/13/2018] [Accepted: 03/17/2018] [Indexed: 02/28/2024]
Abstract
Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and at both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference from competing semantic activations in multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience.
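The auditory n-back logic referenced above can be made concrete with a short sketch. The stimulus sets and match rate below are assumptions for illustration, not the study's materials:

```python
# Hypothetical n-back sequence generator: build an auditory sequence over
# either sound locations (spatial) or sound categories (nonspatial), with a
# controlled proportion of n-back matches.
import random

def make_nback_sequence(stimuli, n=2, length=30, p_match=0.3, seed=0):
    """Return (sequence, is_target) where is_target[i] marks an n-back match."""
    rng = random.Random(seed)
    seq, targets = [], []
    for i in range(length):
        if i >= n and rng.random() < p_match:
            seq.append(seq[i - n])            # force an n-back repetition
            targets.append(True)
        else:
            choices = [s for s in stimuli if i < n or s != seq[i - n]]
            seq.append(rng.choice(choices))   # avoid accidental matches
            targets.append(False)
    return seq, targets

locations = ["left", "midleft", "midright", "right"]   # spatial condition
categories = ["voice", "animal", "music", "tool"]      # nonspatial condition

seq, targets = make_nback_sequence(locations, n=2)
print(list(zip(seq, targets))[:6])
```

Raising n (e.g., from 1-back to 2-back) is what increases WM load in such designs, which is the manipulation behind the load effects reported above.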
Collapse
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
| | - Yasha Khatamian
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
| | - Yu He
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
| | - Yunjo Lee
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
| | - Sylvain Moreno
- School of Interactive Arts and Technology, Simon Fraser University, Burnaby, British Columbia, Canada
- Digital Health Hub, Innovation Boulevard, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Ada W S Leung
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Occupational Therapy, University of Alberta, Edmonton, Alberta, Canada
| | - Ellen Bialystok
- Rotman Research Institute, Baycrest Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
| |
Collapse
|
49
|
|
50
|
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. [PMID: 29643021 DOI: 10.1016/j.heares.2018.03.027] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 03/21/2018] [Accepted: 03/28/2018] [Indexed: 12/01/2022]
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and change, and interactions of Category x Initial Position and Category x Change in Position, while right lateral areas showed a main effect of category and an interaction of Category x Change in Position. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in lateral belt areas and tracks sound objects across space.
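The 2 x 2 block structure described above (category: same vs. different; position: fixed vs. switched halfway) can be sketched schematically. Block length, the sound labels, and the position coding are illustrative assumptions, not the study's stimulus set:

```python
# Schematic construction of the repetition suppression block design.
import itertools, random

SOUNDS = ["dog", "bell", "car", "bird"]   # assumed environmental sound labels

def make_block(same_category, position_change, n_items=8, seed=0):
    """Return a list of (sound, perceived_side) pairs for one block."""
    rng = random.Random(seed)
    items = ([rng.choice(SOUNDS)] * n_items if same_category
             else [rng.choice(SOUNDS) for _ in range(n_items)])
    start = rng.choice(["left", "right"])
    flip = {"left": "right", "right": "left"}[start]
    # the perceived side switches halfway through only in "change" blocks
    positions = ([start] * (n_items // 2) + [flip] * (n_items - n_items // 2)
                 if position_change else [start] * n_items)
    return list(zip(items, positions))

for same, change in itertools.product([True, False], repeat=2):
    block = make_block(same, change, seed=42)
    print(f"same_category={same}, position_change={change}: {block[:3]} ...")
# Suppressed responses to repeated categories, and their release when category
# or perceived position changes, are the contrasts the fMRI analysis tests.
```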
Collapse
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
| | - Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
| | - Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
| |
Collapse
|