1
Tsuji RK, Hamerschmidt R, Lavinsky J, Felix F, Silva VAR. Brazilian Society of Otology task force - single sided deafness - recommendations based on strength of evidence. Braz J Otorhinolaryngol 2024;91:101514. PMID: 39378663; PMCID: PMC11492085; DOI: 10.1016/j.bjorl.2024.101514.
Abstract
OBJECTIVE: To make evidence-based recommendations for the treatment of Single-Sided Deafness (SSD) in children and adults.
METHODS: Task force members were instructed on knowledge synthesis methods, including electronic database search, review and selection of relevant citations, and critical appraisal of selected studies. Articles written in English or Portuguese on SSD were eligible for inclusion. The American College of Physicians' guideline grading system and the American Thyroid Association's guideline criteria were used for critical appraisal of evidence and recommendations for therapeutic interventions.
RESULTS: The topics were divided into 3 parts: (1) Impact of SSD in children; (2) Impact of SSD in adults; and (3) SSD in patients with temporal bone tumors.
CONCLUSIONS: Decision-making for patients with SSD is complex and multifactorial. The lack of consensus on the quality of outcomes and on which measurement tools to use hinders a proper comparison of different treatment options. Contralateral routing of signal hearing aids and bone conduction devices can alleviate the head shadow effect and improve sound awareness and signal-to-noise ratio in the affected ear. However, they cannot restore binaural hearing. Cochlear implants can restore binaural hearing, producing significant improvements in speech perception, spatial localization of sound, tinnitus control, and overall quality of life. However, cochlear implantation is not recommended in cases of cochlear nerve deficiency, a relatively common cause of congenital SSD.
Affiliation(s)
- Robinson Koji Tsuji: Universidade de São Paulo (USP), Faculdade de Medicina, Departamento de Otorrinolaringologia, São Paulo, SP, Brazil
- Rogério Hamerschmidt: Universidade Federal do Paraná (UFPR), Departamento de Otorrinolaringologia, Curitiba, PR, Brazil
- Joel Lavinsky: Universidade Federal do Rio Grande do Sul (UFRGS), Departamento de Ciências Morfológicas, Porto Alegre, RS, Brazil
- Felippe Felix: Universidade Federal do Rio de Janeiro (UFRJ), Hospital Universitário Clementino Fraga Filho (HUCFF), Rio de Janeiro, RJ, Brazil
- Vagner Antonio Rodrigues Silva: Universidade Estadual de Campinas (Unicamp), Faculdade de Ciências Médicas (FCM), Departamento de Otorrinolaringologia, Cirurgia de Cabeça e Pescoço, Campinas, SP, Brazil
2
Liu H, Bai Y, Xu Z, Liu J, Ni G, Ming D. The scalp time-varying network of auditory spatial attention in "cocktail-party" situations. Hear Res 2024;442:108946. PMID: 38150794; DOI: 10.1016/j.heares.2023.108946.
Abstract
Sound source localization in "cocktail-party" situations is a remarkable ability of the human auditory system. However, the neural mechanisms underlying auditory spatial attention are still largely unknown. In this study, "cocktail-party" situations are simulated with multiple sound sources presented through head-related transfer functions and headphones. The scalp time-varying network of auditory spatial attention is then constructed from high-temporal-resolution electroencephalography, and its network properties are quantified using graph-theoretical analysis. The results show that the time-varying network of auditory spatial attention in "cocktail-party" situations is more complex than, and partially different from, that in simple acoustic situations, especially in the early- and middle-latency periods. The network coupling strength increases continuously over time, and the network hub shifts from the posterior temporal lobe to the parietal lobe and then to the frontal lobe. In addition, the right hemisphere shows stronger network strength for processing auditory spatial information in "cocktail-party" situations: it has higher clustering levels, higher transmission efficiency, and higher node degrees during the early- and middle-latency periods, whereas this asymmetry disappears and the network becomes symmetric during the late-latency period. These findings reveal distinct network patterns and properties of auditory spatial attention in "cocktail-party" situations across different periods and demonstrate the dominance of the right hemisphere in the dynamic processing of auditory spatial information.
Affiliation(s)
- Hongxing Liu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Yanru Bai: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Zihao Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Jihan Liu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Guangjian Ni: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Dong Ming: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
3
Pantaleo A, Murri A, Cavallaro G, Pontillo V, Auricchio D, Quaranta N. Single-Sided Deafness and Hearing Rehabilitation Modalities: Contralateral Routing of Signal Devices, Bone Conduction Devices, and Cochlear Implants. Brain Sci 2024;14:99. PMID: 38275519; PMCID: PMC10814000; DOI: 10.3390/brainsci14010099.
Abstract
Single-sided deafness (SSD) is characterized by severe or profound sensorineural hearing loss in only one ear. SSD adversely affects various aspects of auditory perception, impairing sound localization, speech comprehension in noisy environments, and spatial awareness, resulting in a significant decline in overall quality of life (QoL). Several treatment options are available for SSD, including cochlear implants (CI), contralateral routing of signal (CROS) devices, and bone conduction devices (BCD). The lack of consensus on outcome domains and measurement tools complicates treatment comparisons and decision-making. This narrative overview summarizes the treatment options available for SSD in adult and pediatric populations, discussing their respective advantages and disadvantages. Rerouting devices (CROS and BCD) attenuate the head shadow effect and improve sound awareness and signal-to-noise ratio in the affected ear; however, they cannot restore binaural hearing. CROS devices, being non-implantable, are the least invasive option. Cochlear implantation is the only strategy that can restore binaural hearing, delivering significant improvements in speech perception, spatial localization, tinnitus control, and overall QoL. Comprehensive preoperative counseling, including discussion of alternative technologies, the implications of no treatment, expectations, and auditory training, is critical to optimizing therapeutic outcomes.
Affiliation(s)
- Alessandra Pantaleo: Otolaryngology Unit, Department of BMS, Neuroscience and Sensory Organs, University of Bari, 70121 Bari, Italy
- Alessandra Murri: Otolaryngology Unit, Department of BMS, Neuroscience and Sensory Organs, University of Bari, 70121 Bari, Italy
- Giada Cavallaro: Otolaryngology Unit, Madonna delle Grazie Hospital, 75100 Matera, Italy
- Vito Pontillo: Otolaryngology Unit, Department of BMS, Neuroscience and Sensory Organs, University of Bari, 70121 Bari, Italy
- Debora Auricchio: Otolaryngology Unit, Department of BMS, Neuroscience and Sensory Organs, University of Bari, 70121 Bari, Italy
- Nicola Quaranta: Otolaryngology Unit, Department of BMS, Neuroscience and Sensory Organs, University of Bari, 70121 Bari, Italy
4
Colas T, Farrugia N, Hendrickx E, Paquier M. Sound externalization in dynamic binaural listening: A comparative behavioral and EEG study. Hear Res 2023;440:108912. PMID: 37952369; DOI: 10.1016/j.heares.2023.108912.
Abstract
Binaural reproduction aims at recreating a realistic sound scene at the ears of the listener using headphones. Unfortunately, externalization of frontal and rear sources is often poor: virtual sources are perceived inside the head instead of outside it. Nevertheless, previous studies have shown that large head-tracked movements can substantially improve externalization and that this improvement persists once the subject stops moving his/her head. The present study investigates the relation between externalization and event-related potentials (ERPs) by performing behavioral and EEG measurements under the same experimental conditions. Different degrees of externalization were achieved by preceding measurements with 1) head-tracked movements, 2) untracked head movements, and 3) no head movement. Results showed that performing a head movement, whether head tracking was active or not, increased the amplitude of ERP components after 100 ms, which suggests that preceding head movements alter auditory processing. Moreover, untracked head movements produced a larger N1 amplitude, which might be a marker of a break in consistency with the real world. While externalization scores were higher after head-tracked movements in the behavioral experiment, no marker of externalization could be found in the EEG results.
Affiliation(s)
- Tom Colas: University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Nicolas Farrugia: IMT Atlantique, CNRS Lab-STICC UMR 6285, 655 avenue du Technopole, 29280 Plouzane, France
- Etienne Hendrickx: University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Mathieu Paquier: University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
5
Han JH, Lee J, Lee HJ. The effect of noise on the cortical activity patterns of speech processing in adults with single-sided deafness. Front Neurol 2023;14:1054105. PMID: 37006498; PMCID: PMC10060629; DOI: 10.3389/fneur.2023.1054105.
Abstract
The most common complaint of people with single-sided deafness (SSD) is difficulty understanding speech in a noisy environment, yet the neural mechanism of speech-in-noise (SiN) perception in SSD individuals is still poorly understood. In this study, we measured cortical activity in SSD participants during a SiN task and compared it with a speech-in-quiet (SiQ) task. Dipole source analysis revealed left-hemispheric dominance in both the left- and right-sided SSD groups. Contrary to SiN listening, this hemispheric difference was not found during SiQ listening in either group. In addition, cortical activation in right-sided SSD individuals was independent of the location of sound, whereas activation sites in the left-sided SSD group varied with sound location. Examining the neural-behavioral relationship revealed that N1 activation is associated with the duration of deafness and with the SiN perception ability of individuals with SSD. Our findings indicate that SiN listening is processed differently in the brains of left- and right-sided SSD individuals.
Affiliation(s)
- Ji-Hye Han: Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Jihyun Lee: Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Hyo-Jeong Lee (correspondence): Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
6
Han JH, Lee J, Lee HJ. Attentional modulation of auditory cortical activity in individuals with single-sided deafness. Neuropsychologia 2023;183:108515. PMID: 36792051; DOI: 10.1016/j.neuropsychologia.2023.108515.
Abstract
Persons with single-sided deafness (SSD) typically complain of an impaired ability to locate sounds and to understand speech in background noise. However, findings from previous studies suggest that paying attention to sounds can mitigate the degraded spatial and speech-in-noise perception. In the present study, we characterized the pattern of cortical activation depending on the side of deafness, and the attentional modulation of neural responses, to determine whether attention can support better sound processing in people with SSD. In the active-listening condition, adult subjects with SSD performed sound localization tasks; in the passive-listening condition, they watched movies without attending to the speech stimuli. The sensor-level global field power of N1 and source-level N1 activation were computed to compare the active- and passive-listening conditions and left- and right-sided deafness. The results show that attentional modulation differs depending on the side of deafness: active listening increased cortical activity in individuals with left-sided deafness but not in those with right-sided deafness. At the source level, the attentional gain was more apparent in left-sided deafness, in that paying attention enhanced brain activation in both hemispheres. In addition, SSD participants with larger cortical activity in the right primary auditory cortex had shorter durations of deafness. Our results indicate that the side of deafness can change top-down attentional processing in the auditory cortical pathway in SSD patients.
Affiliation(s)
- Ji-Hye Han: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang, Republic of Korea
- Jihyun Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang, Republic of Korea
- Hyo-Jeong Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Sacred Heart Hospital, Anyang, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
7
Tian X, Liu Y, Guo Z, Cai J, Tang J, Chen F, Zhang H. Cerebral Representation of Sound Localization Using Functional Near-Infrared Spectroscopy. Front Neurosci 2022;15:739706. PMID: 34970110; PMCID: PMC8712652; DOI: 10.3389/fnins.2021.739706.
Abstract
Sound localization is an essential part of auditory processing. However, the cortical representation of identifying the direction of sound sources presented in the sound field, as measured with functional near-infrared spectroscopy (fNIRS), is currently unknown. In this study, we therefore used fNIRS to investigate the cerebral representation of different sound sources. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) actively took part in a block-design task. The sound localization setup consisted of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts at two intensity levels (48 dB/58 dB) were randomly presented from five loudspeakers (–90°/–30°/0°/+30°/+90°). Sound localization task performance was collected, and simultaneous signals from auditory processing cortical fields were recorded and analyzed with a support vector machine (SVM). The results showed average classification accuracies of 73.6%, 75.6%, and 77.4% for –90°/0°, 0°/+90°, and –90°/+90° at the high intensity, and 70.6%, 73.6%, and 78.6% at the low intensity. An increase in oxyhemoglobin (oxy-Hb) was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxy-Hb response showed different neural activity patterns between lateral and frontal sources in the AC and dlPFC. Our results may serve as a basic contribution to further research on the use of fNIRS in spatial auditory studies.
Affiliation(s)
- Xuexin Tian: Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yimeng Liu: Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zengzhi Guo: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Jieqing Cai: Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jie Tang: Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Physiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Southern Medical University, Guangzhou, China
- Fei Chen: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongzheng Zhang: Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China
8
Spinal and Cerebral Integration of Noxious Inputs in Left-handed Individuals. Brain Topogr 2021;34:568-586. PMID: 34338897; DOI: 10.1007/s10548-021-00864-y.
Abstract
Some pain-related information is processed preferentially in the right cerebral hemisphere. Considering that functional lateralization can be affected by handedness, spinal and cerebral pain-related responses may differ between right- and left-handed individuals. Therefore, this study aimed to investigate the cortical and spinal mechanisms of nociceptive integration when nociceptive stimuli are applied to right-handed vs. left-handed individuals. The nociceptive flexion reflex (NFR), evoked potentials (ERP: P45, N100, P260), and event-related spectral perturbations (ERSP: theta, alpha, beta, and gamma band oscillations) were compared between ten right-handed and ten left-handed participants. Pain was induced by transcutaneous electrical stimulation of the lower limbs and left upper limb. Stimulation intensity was adjusted individually in five counterbalanced conditions of 21 stimuli each: three unilateral (right lower limb, left lower limb, and left upper limb stimulation) and two bilateral conditions (right and left lower limbs; right lower limb and left upper limb stimulation). The amplitudes of the NFR, ERP, and ERSP and pain ratings were compared between groups and conditions using a mixed ANOVA. A significant increase in responses was observed in bilateral compared with unilateral conditions for pain intensity, NFR amplitude, N100, theta oscillations, and gamma oscillations. However, these effects did not differ significantly between right- and left-handed individuals. These results suggest that spinal and cerebral integration of bilateral nociceptive inputs is similar in right- and left-handed individuals. They also imply that the pain-related responses measured in this study may be examined independently of handedness.
9
Han JH, Lee J, Lee HJ. Ear-Specific Hemispheric Asymmetry in Unilateral Deafness Revealed by Auditory Cortical Activity. Front Neurosci 2021;15:698718. PMID: 34393711; PMCID: PMC8363420; DOI: 10.3389/fnins.2021.698718.
Abstract
Profound unilateral deafness reduces the ability to localize sounds, which normally depends on binaural hearing. Furthermore, unilateral deafness promotes substantial changes in the cortical processing of binaural stimulation, leading to reorganization across the whole brain. Although distinct patterns of hemispheric laterality depending on the side and duration of deafness have been suggested, the neurological mechanisms underlying this difference in relation to behavioral performance when detecting spatially varied cues remain unknown. To elucidate the mechanism, we compared N1/P2 auditory cortical activities and the pattern of hemispheric asymmetry among normal-hearing, unilaterally deaf (UD), and simulated acute unilateral hearing loss groups while they passively listened to speech sounds delivered from different locations under open free-field conditions. The participants' sound localization performance was measured by detecting sound sources in the azimuth plane. The results reveal a delayed reaction time in the right-sided UD (RUD) group for the sound localization task and a prolonged P2 latency compared to the left-sided UD (LUD) group. Moreover, the RUD group showed adaptive cortical reorganization, evidenced by increased responses in the hemisphere ipsilateral to the intact ear in individuals with better sound localization, whereas left-sided unilateral deafness caused contralateral dominance in activity from the hearing ear. The brain dynamics of right-sided unilateral deafness thus indicate a greater capacity for adaptive change to compensate for impaired spatial hearing. In addition, cortical N1 responses to spatially varied speech sounds in unilaterally deaf people were inversely related to the duration of deafness in the area encompassing the right auditory cortex, indicating that early intervention would be needed to protect against maladaptation of the central auditory system following unilateral deafness.
Affiliation(s)
- Ji-Hye Han: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea
- Jihyun Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea
- Hyo-Jeong Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, South Korea
10
Jiao X, Ying C, Tong S, Tang Y, Wang J, Sun J. The lateralization and reliability of spatial mismatch negativity elicited by auditory deviants with virtual spatial location. Int J Psychophysiol 2021;165:92-100. PMID: 33901512; DOI: 10.1016/j.ijpsycho.2021.04.005.
Abstract
Mismatch negativity (MMN) is an intensively studied event-related potential component that reflects pre-attentive auditory processing. Existing spatial MMN (sMMN) studies usually elicit MMN either with loudspeakers in different locations or by delivering sound with binaural localization cues through earphones; the former is practically complicated, and the latter sounds unnatural to subjects. In the present study, we generated head-related transfer function (HRTF)-based spatial sounds and verified that they retained the left and right spatial localization cues. We then used them as deviants to elicit sMMN with a conventional oddball paradigm. Results showed that sMMN was successfully elicited by the HRTF-based deviants in 18 of 21 healthy subjects in two separate sessions. Furthermore, left deviants elicited higher sMMN amplitudes in the right hemisphere than in the left hemisphere, while right deviants elicited sMMN with similar amplitudes in both hemispheres, which supports a combination of contralateral and right-hemispheric dominance in spatial auditory information processing. In addition, the sMMN in response to right deviants showed good test-retest reliability, while the sMMN in response to left deviants had weak test-retest reliability. These findings suggest that HRTF-based sMMN could be a robust paradigm for investigating spatial localization and discrimination abilities.
Affiliation(s)
- Xiong Jiao: Shanghai Med-X Engineering Research Center, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China; Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chunwei Ying: Shanghai Med-X Engineering Research Center, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanbao Tong: Shanghai Med-X Engineering Research Center, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China; Brain Science and Technology Research Center, Shanghai Jiao Tong University, Shanghai, China
- Yingying Tang: Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Brain Science and Technology Research Center, Shanghai Jiao Tong University, Shanghai, China
- Jijun Wang: Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China; CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences, Shanghai, China; Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
- Junfeng Sun: Shanghai Med-X Engineering Research Center, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China; Brain Science and Technology Research Center, Shanghai Jiao Tong University, Shanghai, China
11
Fleming JT, Noyce AL, Shinn-Cunningham BG. Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus. Neuropsychologia 2020;146:107530. PMID: 32574616; DOI: 10.1016/j.neuropsychologia.2020.107530.
Abstract
In order to parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of synchronous tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.
Affiliation(s)
- Justin T Fleming
- Speech and Hearing Bioscience and Technology Program, Division of Medical Sciences, Harvard Medical School, Boston, MA, USA
- Abigail L Noyce
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
12
Adel Ghahraman M, Ashrafi M, Mohammadkhani G, Jalaie S. Effects of aging on spatial hearing. Aging Clin Exp Res 2020; 32:733-739. [PMID: 31203530] [DOI: 10.1007/s40520-019-01233-3]
Abstract
BACKGROUND Aging has several effects on auditory processing, the most important of which is impaired speech perception in noise. AIMS The aim of the present study was to investigate the effects of aging on spatial hearing using the quick speech-in-noise (QSIN) and binaural masking level difference (BMLD) tests and the Speech, Spatial and Qualities of Hearing Scale (SSQ) questionnaire. METHODS The study was carried out on 34 elderly people, aged 60-75 years, with normal peripheral hearing and 34 young participants, aged 18-25 years. Spatial auditory processing ability was compared between the two groups using the SSQ questionnaire and the QSIN and BMLD tests. RESULTS Independent t tests showed a significant difference in the mean scores of the QSIN and BMLD tests and the SSQ questionnaire between the two groups (p < 0.001). Sex had no effect on the results (p > 0.05). DISCUSSION Structural and neurochemical changes that occur in different parts of the central nervous system with aging affect various aspects of spatial auditory processing, such as localization, the precedence effect, and speech perception in noise. CONCLUSIONS The lower scores of older adults with normal hearing on the SSQ questionnaire and behavioral tests, compared with younger participants, may reflect weak spatial auditory processing. The results of the present study reconfirm the effects of aging on spatial auditory processing, such as localization and speech perception in noise.
Affiliation(s)
- Mansoureh Adel Ghahraman
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Majid Ashrafi
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Ghassem Mohammadkhani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Shohreh Jalaie
- Biostatistics, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
13
Kopco N, Doreswamy KK, Huang S, Rossi S, Ahveninen J. Cortical auditory distance representation based on direct-to-reverberant energy ratio. Neuroimage 2020; 208:116436. [PMID: 31809885] [PMCID: PMC6997045] [DOI: 10.1016/j.neuroimage.2019.116436]
Abstract
Auditory distance perception and its neuronal mechanisms are poorly understood, mainly because 1) it is difficult to separate distance processing from intensity processing, 2) multiple intensity-independent distance cues are often available, and 3) the cues are combined in a context-dependent way. A recent fMRI study identified a human auditory cortical area representing intensity-independent distance for sources presented along the interaural axis (Kopco et al. PNAS, 109, 11019-11024). For these sources, two intensity-independent cues are available: interaural level difference (ILD) and direct-to-reverberant energy ratio (DRR). Thus, the observed activations may reflect contributions not only from distance-related neuron populations but also from direction-encoding populations sensitive to ILD. Here, the paradigm from the previous study was used to examine DRR-based distance representation for sounds originating in front of the listener, where ILD is not available. In a virtual environment, we performed behavioral and fMRI experiments, combined with computational analyses, to identify the neural representation of distance based on DRR. The stimuli varied in distance (15-100 cm) while their received intensity was varied randomly and independently of distance. Behavioral performance showed that intensity-independent distance discrimination is accurate for frontal stimuli, even though it is worse than for lateral stimuli. fMRI activations for sounds varying in frontal distance, as compared to varying only in intensity, increased bilaterally in the posterior banks of Heschl's gyri, the planum temporale, and posterior superior temporal gyrus regions. Taken together, these results suggest that posterior human auditory cortex areas contain neuron populations that are sensitive to distance independent of intensity and of binaural cues relevant for directional hearing.
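The core acoustic quantity in this study, the direct-to-reverberant energy ratio, can be computed directly from a room impulse response. The sketch below illustrates the common convention (energy in a short window around the direct-path peak counts as direct, the rest as reverberant); it is not the authors' analysis code, and the 2.5 ms window half-width is an assumed modelling choice.

```python
import numpy as np

def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
    """Estimate DRR (in dB) from a room impulse response.

    Energy within a short window centred on the direct-path peak is
    treated as 'direct'; everything outside it as 'reverberant'.
    The window half-width (direct_window_ms) is a modelling choice.
    """
    peak = int(np.argmax(np.abs(rir)))
    half = int(direct_window_ms * 1e-3 * fs)
    lo, hi = max(0, peak - half), min(len(rir), peak + half + 1)
    direct_energy = np.sum(rir[lo:hi] ** 2)
    reverb_energy = np.sum(rir[:lo] ** 2) + np.sum(rir[hi:] ** 2)
    return 10.0 * np.log10(direct_energy / reverb_energy)
```

Because DRR falls as source distance grows (the direct path weakens while the reverberant field stays roughly constant), such a measure provides an intensity-independent distance cue of the kind manipulated here.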
Affiliation(s)
- Norbert Kopco
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia; Hearing Research Center, Boston University, Boston, MA, 02215, USA.
- Keerthi Kumar Doreswamy
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA; Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Stephanie Rossi
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA, 02129, USA
14
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Lateralization of brain responses to auditory motion: A study using single-trial analysis. Neurosci Res 2020; 162:31-44. [PMID: 32001322] [DOI: 10.1016/j.neures.2020.01.007]
Abstract
The present study investigates hemispheric asymmetry of the ERPs and low-frequency oscillatory responses evoked in both hemispheres of the brain by sound stimuli with delayed motion onset. EEG was recorded for three patterns of sound motion produced by changes in interaural time differences. Event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) were computed from the time-frequency decomposition of the EEG signals. The participants either read books of their choice (passive listening) or indicated the perceived sound trajectories using a graphic tablet (active listening). Our goal was to find out whether the lateralization of the motion-onset response (MOR) and of the oscillatory responses to sound motion was more consistent with the right-hemispheric dominance, contralateral, or neglect model of interhemispheric asymmetry. Apparent dominance of the right hemisphere was found only in the ERSP responses. Stronger contralaterality of the left hemisphere, corresponding to the "neglect model" of asymmetry, was shown by the MOR components and by the phase coherence of the delta-alpha oscillations. Neither velocity nor attention consistently changed the interhemispheric asymmetry of the MOR or of the oscillatory responses. Our findings demonstrate how the lateralization pattern shown by the MOR potential relates to that of the motion-related single-trial measures.
Affiliation(s)
- L B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- E A Petropavlovskaia
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- V V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- N I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
15
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828] [DOI: 10.1016/j.neuroimage.2019.116283]
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and must focus our attention on the relevant source in order to segregate it from the competing sources (the 'cocktail party effect'). While many studies have examined this phenomenon in the context of cortical tracking of the sound envelope, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli that moved within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as the unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. Then, in a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and the alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli by the delta phase of the EEG.
In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to that of envelope-based decoders. These results suggest a possible dissociation of the delta phase and alpha power of EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
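The decoding approach described in this abstract, a linear mapping from time-lagged EEG onto the stimulus trajectory, can be sketched as a regularized least-squares (ridge) backward model. This is an illustrative reconstruction of the general stimulus-reconstruction technique, not the authors' pipeline; channel counts, lag ranges, and the regularization constant below are all assumptions.

```python
import numpy as np

def build_lagged(eeg, lags):
    """Stack time-lagged copies of each EEG channel as regression features.

    eeg  : array (n_times, n_channels)
    lags : iterable of integer sample lags (negative = EEG leads stimulus)
    """
    n_t, n_ch = eeg.shape
    lags = list(lags)
    X = np.zeros((n_t, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0   # zero out wrapped-around samples
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * n_ch:(i + 1) * n_ch] = shifted
    return X

def fit_decoder(eeg, trajectory, lags, lam=1.0):
    """Ridge regression from lagged EEG onto the sound-source azimuth."""
    X = build_lagged(eeg, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ trajectory)

def reconstruct(eeg, weights, lags):
    return build_lagged(eeg, lags) @ weights
```

In practice the decoder is trained on one subset of the data and evaluated by correlating the reconstructed trajectory with the true one on held-out data, which is the reconstruction-accuracy measure such abstracts refer to.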
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
- Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA.
16
Deng Y, Choi I, Shinn-Cunningham B, Baumgartner R. Impoverished auditory cues limit engagement of brain networks controlling spatial selective attention. Neuroimage 2019; 202:116151. [PMID: 31493531] [DOI: 10.1016/j.neuroimage.2019.116151]
Abstract
Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies on auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and -30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8-14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, but also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.
Affiliation(s)
- Yuqi Deng
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Inyong Choi
- Communication Sciences & Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Barbara Shinn-Cunningham
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Robert Baumgartner
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria.
17
Ozmeral EJ, Eddins DA, Eddins AC. Electrophysiological responses to lateral shifts are not consistent with opponent-channel processing of interaural level differences. J Neurophysiol 2019; 122:737-748. [PMID: 31242052] [DOI: 10.1152/jn.00090.2019]
Abstract
Cortical encoding of auditory space relies on two major peripheral cues, interaural time difference (ITD) and interaural level difference (ILD) of the sounds arriving at a listener's ears. In much of the precortical auditory pathway, ITD and ILD cues are processed independently, and it is assumed that cue integration is a higher order process. However, there remains debate on how ITDs and ILDs are encoded in the cortex and whether they share a common mechanism. The present study used electroencephalography (EEG) to measure evoked cortical potentials from narrowband noise stimuli with imposed binaural cue changes. Previous studies have similarly tested ITD shifts to demonstrate that neural populations broadly favor one spatial hemifield over the other, which is consistent with an opponent-channel model that computes the relative activity between broadly tuned neural populations. However, it is still a matter of debate whether the same coding scheme applies to ILDs and, if so, whether processing the two binaural cues is distributed across similar regions of the cortex. The results indicate that ITD and ILD cues have similar neural signatures with respect to the monotonic responses to shift magnitude; however, the direction of the shift did not elicit responses equally across cues. Specifically, ITD shifts evoked greater responses for outward than inward shifts, independently of the spatial hemifield of the shift, whereas ILD-shift responses were dependent on the hemifield in which the shift occurred. Active cortical structures showed only minor overlap between responses to cues, suggesting the two are not represented by the same pathway. NEW & NOTEWORTHY: Interaural time differences (ITDs) and interaural level differences (ILDs) are critical to locating auditory sources in the horizontal plane.
The higher order perceptual feature of auditory space is thought to be encoded together by these binaural differences, yet evidence of their integration in cortex remains elusive. Although present results show some common effects between the two cues, key differences were observed that are not consistent with an ITD-like opponent-channel process for ILD encoding.
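The opponent-channel model that this study tests can be caricatured with two broadly tuned channels, each favoring one hemifield, whose relative activity encodes azimuth. The toy sketch below is only an illustration of that coding idea, not the authors' model; the sigmoid tuning shape and slope parameter are assumptions.

```python
import numpy as np

def opponent_channel_response(azimuth_deg, slope=0.05):
    """Activity of two broadly tuned hemifield channels for a given azimuth.

    Each channel is a sigmoid of azimuth favoring one hemifield;
    the shape and slope are illustrative choices, not fitted values.
    """
    right_ch = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))  # prefers right hemifield
    left_ch = 1.0 - right_ch                               # prefers left hemifield
    return left_ch, right_ch

def decode_azimuth(left_ch, right_ch, slope=0.05):
    """Read azimuth back out from the relative activity of the two channels."""
    return np.log(right_ch / left_ch) / slope
```

In such a scheme a lateral shift changes the two channel activities in opposite directions, so the decoded location depends only on their ratio; the study's point is that the measured ILD-shift responses did not behave as this scheme predicts.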
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida; Department of Chemical and Biomedical Engineering, University of South Florida, Tampa, Florida
- Ann Clock Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
18
Hanenberg C, Getzmann S, Lewald J. Transcranial direct current stimulation of posterior temporal cortex modulates electrophysiological correlates of auditory selective spatial attention in posterior parietal cortex. Neuropsychologia 2019; 131:160-170. [PMID: 31145907] [DOI: 10.1016/j.neuropsychologia.2019.05.023]
Abstract
Speech perception in "cocktail-party" situations, in which a sound source of interest has to be extracted out of multiple irrelevant sounds, poses a remarkable challenge to the human auditory system. Studies on structural and electrophysiological correlates of auditory selective spatial attention revealed critical roles of the posterior temporal cortex and the N2 event-related potential (ERP) component in the underlying processes. Here, we explored effects of transcranial direct current stimulation (tDCS) to posterior temporal cortex on neurophysiological correlates of auditory selective spatial attention, with a specific focus on the N2. In a single-blind, sham-controlled crossover design with baseline and follow-up measurements, monopolar anodal and cathodal tDCS was applied for 16 min to the right posterior superior temporal cortex. Two age groups of human subjects, a younger (n = 20; age 18-30 yrs) and an older group (n = 19; age 66-77 yrs), completed an auditory free-field multiple-speakers localization task while ERPs were recorded. The ERP data showed an offline effect of anodal, but not cathodal, tDCS immediately after DC offset for targets contralateral, but not ipsilateral, to the hemisphere of tDCS, without differences between groups. This effect mainly consisted of a substantial increase in N2 amplitude of 0.9 μV (SE 0.4 μV; d = 0.40) compared with sham tDCS. At the same point in time, cortical source localization revealed a reduction of activity in ipsilateral (right) posterior parietal cortex. Also, localization error was improved after anodal, but not cathodal, tDCS. Given that both the N2 and the posterior parietal cortex are involved in processes of auditory selective spatial attention, these results suggest that anodal tDCS specifically enhanced inhibitory attentional brain processes underlying the focusing onto a target sound source, possibly by improved suppression of irrelevant distractors.
Affiliation(s)
- Christina Hanenberg
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Jörg Lewald
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany.
19
Moncada-Torres A, Joshi SN, Prokopiou A, Wouters J, Epp B, Francart T. A framework for computational modelling of interaural time difference discrimination of normal and hearing-impaired listeners. J Acoust Soc Am 2018; 144:940. [PMID: 30180705] [DOI: 10.1121/1.5051322]
Abstract
Different computational models have been developed to study interaural time difference (ITD) perception. However, only a few have used a physiologically inspired architecture to study ITD discrimination, and they do not include aspects of hearing impairment. In this work, a framework was developed to predict ITD thresholds in listeners with normal and impaired hearing. It combines the physiologically inspired model of the auditory periphery proposed by Zilany, Bruce, Nelson, and Carney [(2009). J. Acoust. Soc. Am. 126(5), 2390-2412] as a front end with a coincidence detection stage and a neurometric decision device as a back end. It was validated by comparing its predictions against behavioral data for narrowband stimuli from the literature. The framework is able to model ITD discrimination of normal-hearing and hearing-impaired listeners at a group level. Additionally, it was used to explore the effect of different proportions of outer- and inner-hair-cell impairment on ITD discrimination.
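The coincidence-detection stage in such a framework can be thought of as a bank of internal delays, with the ITD estimate read out from the delay giving maximal coincidence; for deterministic signals this reduces to picking the peak of a windowed cross-correlation. The sketch below illustrates only that reduced, Jeffress-style readout; it is not the published model, and the sampling rate and delay range are assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd_s=1e-3):
    """Estimate ITD (in seconds) by scanning candidate internal delays.

    Returns a positive value when the left-ear signal leads. Each
    candidate lag plays the role of one coincidence-detector channel
    tuned to that internal delay; the channel with maximal coincidence
    (here, maximal correlation) determines the estimate.
    """
    max_lag = int(max_itd_s * fs)
    n = len(left)
    left_seg = left[max_lag:n - max_lag]
    best_lag, best_c = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        c = float(np.dot(left_seg, right[max_lag + lag:n - max_lag + lag]))
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag / fs
```

A neurometric back end of the kind the abstract describes would replace this hard argmax with a statistical comparison of noisy channel outputs across stimulus intervals, which is what turns such a model into a discrimination-threshold predictor.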
Affiliation(s)
- Arturo Moncada-Torres
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Suyash N Joshi
- Department of Electrical Engineering, Hearing Systems, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kongens Lyngby, Denmark
- Andreas Prokopiou
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Jan Wouters
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Bastian Epp
- Department of Electrical Engineering, Hearing Systems, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kongens Lyngby, Denmark
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
20
Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018; 181:683-691. [PMID: 30053517] [DOI: 10.1016/j.neuroimage.2018.07.054]
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones which was spectro-temporally modified to be perceived as randomly moving on a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by 1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and 2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that we can decode sound trajectory with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse train stimuli containing only interaural level and time differences (ILDs and ITDs respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification. This supports the notion of integrated processing of ILD and ITD at the cortical level. 
These results are particularly relevant for application in devices such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
21
The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018; 38:3252-3264. [PMID: 29507148] [DOI: 10.1523/jneurosci.2530-17.2018]
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT: This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions.
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
22
Cortical Processing of Level Cues for Spatial Hearing is Impaired in Children with Prelingual Deafness Despite Early Bilateral Access to Sound. Brain Topogr 2017; 31:270-287. [DOI: 10.1007/s10548-017-0596-5]
23
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357] [DOI: 10.1073/pnas.1707522114]
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
|
24
|
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation. eNeuro 2017; 4:eN-NWR-0007-17. [PMID: 28451630 PMCID: PMC5394928 DOI: 10.1523/eneuro.0007-17.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Revised: 02/03/2017] [Accepted: 02/06/2017] [Indexed: 11/21/2022] Open
Abstract
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
|
25
|
Ortiz-Rios M, Azevedo FAC, Kuśmierek P, Balla DZ, Munk MH, Keliris GA, Logothetis NK, Rauschecker JP. Widespread and Opponent fMRI Signals Represent Sound Location in Macaque Auditory Cortex. Neuron 2017; 93:971-983.e4. [PMID: 28190642 DOI: 10.1016/j.neuron.2017.01.013] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Revised: 12/05/2016] [Accepted: 01/15/2017] [Indexed: 11/15/2022]
Abstract
In primates, posterior auditory cortical areas are thought to be part of a dorsal auditory pathway that processes spatial information. But how posterior (and other) auditory areas represent acoustic space remains a matter of debate. Here we provide new evidence based on functional magnetic resonance imaging (fMRI) of the macaque indicating that space is predominantly represented by a distributed hemifield code rather than by a local spatial topography. Hemifield tuning in cortical and subcortical regions emerges from an opponent hemispheric pattern of activation and deactivation that depends on the availability of interaural delay cues. Importantly, these opponent signals allow responses in posterior regions to segregate space similarly to a hemifield code representation. Taken together, our results reconcile seemingly contradictory views by showing that the representation of space follows closely a hemifield code and suggest that enhanced posterior-dorsal spatial specificity in primates might emerge from this form of coding.
Affiliation(s)
- Michael Ortiz-Rios
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany; Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA; Institute of Neuroscience, Henry Wellcome Building, Medical School, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK.
- Frederico A C Azevedo
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA
- Dávid Z Balla
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany
- Matthias H Munk
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Department of Systems Neurophysiology, Fachbereich Biologie, Technische Universität Darmstadt, Schnittspahnstraße 10, 64287, Darmstadt, Germany
- Georgios A Keliris
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Bio-Imaging Lab, Department of Biomedical Sciences, University of Antwerp, Wilrijk, 2610, Belgium
- Nikos K Logothetis
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester, M13 9PL, UK
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA; Institute for Advanced Study of Technische Universität München, Lichtenbergstraße 2 a, 85748 Garching, Germany
|
26
|
Bednar A, Boland FM, Lalor EC. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization. Eur J Neurosci 2017; 45:679-689. [DOI: 10.1111/ejn.13524] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Revised: 01/10/2017] [Accepted: 01/13/2017] [Indexed: 11/27/2022]
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, University of Dublin, Dublin, Ireland
- Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
- Francis M. Boland
- School of Engineering, Electronic & Electrical Engineering, Trinity College Dublin, Dublin, Ireland
- Edmund C. Lalor
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, University of Dublin, Dublin, Ireland
- Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
|
27
|
Cortical Representation of Interaural Time Difference Is Impaired by Deafness in Development: Evidence from Children with Early Long-term Access to Sound through Bilateral Cochlear Implants Provided Simultaneously. J Neurosci 2017; 37:2349-2361. [PMID: 28123078 DOI: 10.1523/jneurosci.2538-16.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Revised: 12/21/2016] [Accepted: 01/18/2017] [Indexed: 11/21/2022] Open
Abstract
Accurate use of interaural time differences (ITDs) for spatial hearing may require access to bilateral auditory input during sensitive periods in human development. Providing bilateral cochlear implants (CIs) simultaneously promotes symmetrical development of bilateral auditory pathways but does not support normal ITD sensitivity. Thus, although binaural interactions are established by bilateral CIs in the auditory brainstem, potential deficits in cortical processing of ITDs remain. The present study explored cortical ITD processing in children with simultaneous bilateral CIs and in normal-hearing children with similar time-in-sound. Cortical activity evoked by bilateral stimuli with varying ITDs (0, ±0.4, ±1 ms) was recorded using multichannel electroencephalography. Source analyses indicated dominant activity in the right auditory cortex in both groups but limited ITD processing in children with bilateral CIs. In normal-hearing children, adult-like processing patterns were found underlying the immature P1 (∼100 ms) response peak, with reduced activity in the auditory cortex ipsilateral to the leading ITD. Further, the left cortex showed a stronger preference than the right cortex for stimuli leading from the contralateral hemifield. By contrast, children with CIs demonstrated reduced ITD-related changes in both auditory cortices. Decreased parieto-occipital activity, possibly involved in spatial processing, was also revealed in children with CIs. Thus, simultaneous bilateral implantation in young children maintains right cortical dominance during binaural processing but does not fully overcome effects of deafness using present CI devices. Protection of bilateral pathways through simultaneous implantation might be capitalized on for ITD processing as signal processing advances to represent binaural timing cues more consistently.
SIGNIFICANCE STATEMENT Multichannel electroencephalography demonstrated impairment of binaural processing in children who are deaf despite early access to bilateral auditory input. First, foundations for binaural hearing were shown to be established during early stages of normal cortical development: although 4- to 7-year-old children with normal hearing had immature cortical responses, adult patterns in cortical coding of binaural timing cues were measured. Second, children receiving two cochlear implants in the same surgery maintained normal-like input from both ears, but this did not support significant effects of binaural timing cues in either auditory cortex. Deficits in parieto-occipital areas further suggested impairment in spatial processing. Results indicate that cochlear implants working independently in each ear do not fully overcome deafness-related binaural processing deficits, even after long-term experience.
|
28
|
Undurraga JA, Haywood NR, Marquardt T, McAlpine D. Neural Representation of Interaural Time Differences in Humans-an Objective Measure that Matches Behavioural Performance. J Assoc Res Otolaryngol 2016; 17:591-607. [PMID: 27628539 PMCID: PMC5112218 DOI: 10.1007/s10162-016-0584-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2016] [Accepted: 08/15/2016] [Indexed: 12/22/2022] Open
Abstract
Humans, and many other species, exploit small differences in the timing of sounds at the two ears (interaural time difference, ITD) to locate their source and to enhance their detection in background noise. Despite their importance in everyday listening tasks, however, the neural representation of ITDs in human listeners remains poorly understood, and few studies have assessed ITD sensitivity at a resolution similar to that reported perceptually. Here, we report an objective measure of ITD sensitivity in electroencephalography (EEG) signals to abrupt modulations in the interaural phase of amplitude-modulated low-frequency tones. Specifically, we measured following responses to amplitude-modulated sinusoidal signals (520-Hz carrier) in which the stimulus phase at each ear was manipulated to produce discrete interaural phase modulations at minima in the modulation cycle: interaural phase modulation following responses (IPM-FRs). The depth of the interaural phase modulation (IPM) was defined by the sign and magnitude of the interaural phase difference (IPD) transition, which was symmetric around zero. Seven IPM depths were assessed over the range of ±22° to ±157°, corresponding to ITDs largely within the range experienced by human listeners under natural listening conditions (120 to 841 μs). The magnitude of the IPM-FR was maximal for IPM depths in the range of ±67.6° to ±112.6° and correlated well with performance in a behavioural experiment in which listeners were required to discriminate sounds containing IPMs from those with only static IPDs. The IPM-FR provides a sensitive measure of binaural processing in the human brain and has potential for assessing temporal binaural processing.
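The correspondence between IPM depth and ITD quoted above is simply the phase-to-time conversion at the 520-Hz carrier, ITD = IPD / (360° × f). A quick check of the endpoint values (function name ours; the small discrepancies against the quoted 120 and 841 μs presumably reflect rounding of the reported phase depths):

```python
def ipd_to_itd_us(ipd_deg, carrier_hz=520.0):
    """Convert an interaural phase difference in degrees to an ITD in microseconds."""
    return ipd_deg / (360.0 * carrier_hz) * 1e6

shallow = ipd_to_itd_us(22.0)   # about 117.5 us, roughly the 120 us quoted
deep = ipd_to_itd_us(157.0)     # about 838.7 us, roughly the 841 us quoted
```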
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia.
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK.
- Nick R Haywood
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
- Torsten Marquardt
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
- David McAlpine
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW, 2109, Australia
- UCL Ear Institute, University College London, 332 Gray's Inn Rd., London, WC1X 8EE, UK
|
29
|
Tuning to Binaural Cues in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:37-53. [PMID: 26466943 DOI: 10.1007/s10162-015-0546-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 09/25/2015] [Indexed: 10/22/2022] Open
Abstract
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
|
30
|
Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Revised: 08/25/2016] [Accepted: 09/13/2016] [Indexed: 10/21/2022]
|
31
|
Lewald J, Hanenberg C, Getzmann S. Brain correlates of the orientation of auditory spatial attention onto speaker location in a “cocktail-party” situation. Psychophysiology 2016; 53:1484-95. [DOI: 10.1111/psyp.12692] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2015] [Accepted: 05/24/2016] [Indexed: 11/29/2022]
Affiliation(s)
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christina Hanenberg
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
|
32
|
Dykstra AR, Burchard D, Starzynski C, Riedel H, Rupp A, Gutschalk A. Lateralization and Binaural Interaction of Middle-Latency and Late-Brainstem Components of the Auditory Evoked Response. J Assoc Res Otolaryngol 2016; 17:357-70. [PMID: 27197812 DOI: 10.1007/s10162-016-0572-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2015] [Accepted: 05/02/2016] [Indexed: 01/22/2023] Open
Abstract
We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally, or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30 % and 36 % in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli, and not ITDs, this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post stimulus and is never as large as that seen with fMRI.
Affiliation(s)
- Andrew R Dykstra
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany.
- Daniel Burchard
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany; Department of Human Neurobiology, Center for Cognitive Science, Universität Bremen, Bremen, Germany
- Christian Starzynski
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Helmut Riedel
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Andre Rupp
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
|
33
|
Asymmetries in the representation of space in the human auditory cortex depend on the global stimulus context. Neuroreport 2016; 27:242-6. [DOI: 10.1097/wnr.0000000000000527] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
34
|
Lewald J. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation. Neuropsychologia 2016; 84:282-93. [PMID: 26825012 DOI: 10.1016/j.neuropsychologia.2016.01.030] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2015] [Revised: 01/24/2016] [Accepted: 01/25/2016] [Indexed: 10/22/2022]
Abstract
Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene.
Affiliation(s)
- Jörg Lewald
- Auditory Cognitive Neuroscience Laboratory, Department of Cognitive Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, D-44139 Dortmund, Germany.
|
35
|
Haywood NR, Undurraga JA, Marquardt T, McAlpine D. A Comparison of Two Objective Measures of Binaural Processing: The Interaural Phase Modulation Following Response and the Binaural Interaction Component. Trends Hear 2015; 19:2331216515619039. [PMID: 26721925 PMCID: PMC4771038 DOI: 10.1177/2331216515619039] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs)—the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural − (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations to interaural phase difference can evoke a steady state following response. BIC and IPM-FR measurements were compared from 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly from similar electrode locations—differential recordings taken from electrodes near the ear (e.g., mastoid) in reference to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR typically elicited a robust and significant response. In addition, the IPM-FR measure required a considerably shorter recording session. As the IPM-FR magnitude varied with interaural phase difference modulation depth, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears a more suitable clinical measure than the BIC.
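The BIC definition given in the abstract, binaural − (left + right), is a pointwise waveform subtraction over the averaged evoked responses. A toy sketch with made-up numbers (illustrative data, not real ABR recordings):

```python
def binaural_interaction_component(binaural, left, right):
    """BIC waveform: the binaural ABR minus the sum of the two monaural ABRs.

    Inputs are equal-length sequences of averaged evoked-response samples;
    a BIC of zero everywhere would indicate perfectly linear binaural additivity.
    """
    return [b - (l + r) for b, l, r in zip(binaural, left, right)]

# Toy check: a binaural response exactly equal to the monaural sum
# yields a zero BIC at every sample.
left = [0.0, 0.2, 0.5, 0.2, 0.0]
right = [0.0, 0.1, 0.4, 0.1, 0.0]
summed = [l + r for l, r in zip(left, right)]
bic = binaural_interaction_component(summed, left, right)
```

In practice the interesting case is the nonzero residual, whose small amplitude relative to recording noise is one reason the abstract finds the BIC hard to observe reliably.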
Affiliation(s)
- Nicholas R Haywood
- UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- Jaime A Undurraga
- UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- Torsten Marquardt
- UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
- David McAlpine
- UCL Ear Institute, UCL School of Life and Medical Sciences, University College London, UK
|
36
|
Integrated processing of spatial cues in human auditory cortex. Hear Res 2015; 327:143-52. [DOI: 10.1016/j.heares.2015.06.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2015] [Revised: 05/29/2015] [Accepted: 06/02/2015] [Indexed: 11/17/2022]
|
37
|
Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex. Neuroimage 2015; 120:456-66. [PMID: 26163805 PMCID: PMC4589528 DOI: 10.1016/j.neuroimage.2015.07.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2015] [Revised: 06/08/2015] [Accepted: 07/03/2015] [Indexed: 11/20/2022] Open
Abstract
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55–85 dB SPL, binaural 55–85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values.
|
38
|
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873 DOI: 10.1016/j.neuroimage.2015.06.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022] Open
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada.
|
39
|
Salminen NH, Altoè A, Takanen M, Santala O, Pulkki V. Human cortical sensitivity to interaural time difference in high-frequency sounds. Hear Res 2015; 323:99-106. [PMID: 25668126 DOI: 10.1016/j.heares.2015.01.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Revised: 01/22/2015] [Accepted: 01/27/2015] [Indexed: 11/18/2022]
Abstract
Human sound source localization relies on various acoustical cues, one of the most important being the interaural time difference (ITD). ITD is best detected in the fine structure of low-frequency sounds, but it may also contribute to spatial hearing at higher frequencies if extracted from the sound envelope. The human brain mechanisms related to this envelope ITD cue remain unexplored. Here, we tested the sensitivity of the human auditory cortex to envelope ITD in magnetoencephalography (MEG) recordings. We found two types of sensitivity to envelope ITD. First, the amplitude of the auditory cortical N1m response was smaller for zero envelope ITD than for long envelope ITDs corresponding to the sound being in opposite phase in the two ears. Second, the N1m response amplitude showed ITD-specific adaptation for both fine-structure and envelope ITD. The auditory cortical sensitivity was weaker for envelope ITD in high-frequency sounds than for fine-structure ITD in low-frequency sounds but occurred within a range of ITDs that are encountered in natural conditions. Finally, the participants were briefly tested for their behavioral ability to detect envelope ITD. Interestingly, we found a correlation between the behavioral performance and the neural sensitivity to envelope ITD. In conclusion, our findings show that the human auditory cortex is sensitive to ITD in the envelope of high-frequency sounds, and this sensitivity may have behavioral relevance.
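To make the envelope-ITD manipulation concrete: the delay is applied to the amplitude envelope of a high-frequency carrier in one ear while the carrier fine structure stays aligned across ears. A schematic pure-Python sketch; the function name and all parameter values are illustrative, not those of the study:

```python
import math

def am_tone_with_envelope_itd(carrier_hz=4000.0, mod_hz=40.0,
                              env_itd_us=500.0, dur_s=0.1, fs=48000):
    """AM tone whose envelope, but not its fine structure, lags in the left ear."""
    tau = env_itd_us * 1e-6

    def env(t):
        # Raised-cosine amplitude modulation, ranging 0..1.
        return 0.5 * (1.0 - math.cos(2.0 * math.pi * mod_hz * t))

    n = int(dur_s * fs)
    right = [env(t / fs) * math.sin(2.0 * math.pi * carrier_hz * t / fs)
             for t in range(n)]
    # Same carrier phase in both ears; only the envelope is delayed by tau.
    left = [env(t / fs - tau) * math.sin(2.0 * math.pi * carrier_hz * t / fs)
            for t in range(n)]
    return left, right
```

Delaying only the envelope isolates the envelope ITD cue, since any fine-structure ITD at a 4-kHz carrier would in any case be perceptually unusable.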
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, FI-00076 Aalto, Finland; MEG Core, Aalto NeuroImaging, Aalto University School of Science, Finland.
- Alessandro Altoè
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
|
40
|
Abstract
In spatial perception, visual information has higher acuity than auditory information, and we often misperceive sound-source locations when spatially disparate visual stimuli are presented simultaneously. Ventriloquists make good use of this auditory illusion. In this study, we investigated the neural substrates of the ventriloquism effect to understand the neural mechanism of multimodal integration. The study was performed in two steps. First, we investigated how sound locations are represented in the auditory cortex. Second, we investigated how simultaneous presentation of spatially disparate visual stimuli affects the neural processing of sound locations. Based on the population rate code hypothesis, which assumes monotonic sensitivity to sound azimuth across populations of broadly tuned neurons, we expected a monotonic increase of blood oxygenation level-dependent (BOLD) signals for more contralateral sounds. Consistent with this hypothesis, we found that BOLD signals in the posterior superior temporal gyrus increased monotonically as a function of sound azimuth. We also observed attenuation of this monotonic azimuthal sensitivity by spatially disparate visual stimuli. The alteration of the neural pattern was considered to reflect the neural mechanism of the ventriloquism effect. Our findings indicate that conflicting audiovisual spatial information about an event is associated with an attenuation of the neural processing of auditory spatial localization.
Affiliation(s)
- Akiko Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Daniel Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Hiroshi Ando
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
|
41
|
Audio-visual synchrony modulates the ventriloquist illusion and its neural/spatial representation in the auditory cortex. Neuroimage 2014; 98:425-34. [DOI: 10.1016/j.neuroimage.2014.04.077]
|
42
|
Functional correlates of the speech-in-noise perception impairment in dyslexia: An MRI study. Neuropsychologia 2014; 60:103-14. [DOI: 10.1016/j.neuropsychologia.2014.05.016]
|
43
|
Huang S, Chang WT, Belliveau JW, Hämäläinen M, Ahveninen J. Lateralized parietotemporal oscillatory phase synchronization during auditory selective attention. Neuroimage 2013; 86:461-9. [PMID: 24185023] [DOI: 10.1016/j.neuroimage.2013.10.043]
Abstract
Based on the well-known left-lateralized neglect syndrome, one might hypothesize that the dominant right parietal cortex has a bilateral representation of space, whereas the left parietal cortex represents only the contralateral right hemispace. Whether this principle applies to human auditory attention is not yet fully clear. Here, we explicitly tested the differences in cross-hemispheric functional coupling between the intraparietal sulcus (IPS) and auditory cortex (AC) using combined magnetoencephalography (MEG), EEG, and functional MRI (fMRI). Inter-regional pairwise phase consistency (PPC) was analyzed from data obtained during a dichotic auditory selective attention task, in which subjects were cued, in 10-s trials, to attend to sounds presented to one ear and to ignore sounds presented to the opposite ear. Using MEG/EEG/fMRI source modeling, parietotemporal PPC patterns were (a) mapped between all AC locations vs. IPS seeds and (b) analyzed between four anatomically defined AC regions of interest (ROI) vs. IPS seeds. Consistent with our hypothesis, stronger cross-hemispheric PPC was observed between the right IPS and left AC for attended right-ear sounds than between the left IPS and right AC for attended left-ear sounds. In the mapping analyses, these differences emerged at 7-13 Hz, i.e., at the theta to alpha frequency bands, and peaked in Heschl's gyrus and lateral posterior non-primary ACs. The ROI analysis revealed similarly lateralized differences also in the beta and lower theta bands. Taken together, our results support the view that the right parietal cortex dominates auditory spatial attention.
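Pairwise phase consistency, the coupling measure used in this study, is the average cosine of the phase difference across all pairs of observations and, unlike coherence, is not biased by trial count. A minimal sketch of the published estimator (a generic implementation using the closed form |Σe^{iθ}|², rather than an explicit pairwise loop; not the authors' analysis code):

```python
import numpy as np

def ppc(rel_phases):
    """Pairwise phase consistency (Vinck et al., 2010) of relative phases
    in radians. Uses the identity
        |sum_j exp(i*theta_j)|^2 = N + 2 * sum_{j<k} cos(theta_j - theta_k),
    so the O(N^2) pairwise average reduces to one vector sum."""
    n = len(rel_phases)
    z = np.exp(1j * np.asarray(rel_phases))  # unit phasors, one per trial
    return (abs(z.sum()) ** 2 - n) / (n * (n - 1))
```

Identical phases give 1; a uniform phase distribution gives values near 0 (exactly -1/(N-1) for perfectly equispaced phases, reflecting the estimator's unbiasedness).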
Affiliation(s)
- Samantha Huang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Wei-Tang Chang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- John W Belliveau
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Matti Hämäläinen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Jyrki Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
|
44
|
Ruhnau P, Herrmann B, Maess B, Brauer J, Friederici AD, Schröger E. Processing of complex distracting sounds in school-aged children and adults: evidence from EEG and MEG data. Front Psychol 2013; 4:717. [PMID: 24155730] [PMCID: PMC3800842] [DOI: 10.3389/fpsyg.2013.00717]
Abstract
When a perceiver performs a task, rarely occurring sounds often have a distracting effect on task performance. The neural mismatch responses in event-related potentials to such distracting stimuli depend on age: adults commonly show a negative response, whereas in children both a positive and a negative mismatch response have been reported. Using electro- and magnetoencephalography (EEG/MEG), we investigated the developmental changes of distraction processing in school-aged children (9–10 years) and adults. Participants took part in an auditory-visual distraction paradigm comprising a visuo-spatial primary task and task-irrelevant environmental sounds distracting from this task. Behaviorally, distractors delayed reaction times (RTs) in the primary task in both age groups, and this delay was of similar magnitude in both groups. The neurophysiological data revealed both an early and a late mismatch response elicited by distracting stimuli in both age groups. Together with previous research, this indicates that deviance detection is accomplished in a hierarchical manner in the auditory system. Both mismatch responses were localized to auditory cortex areas. All mismatch responses were generally delayed in children, suggesting that not all neurophysiological aspects of deviance processing are mature in school-aged children. Furthermore, the P3a, reflecting involuntary attention capture, was present in both age groups in the EEG with comparable amplitudes and at similar latencies, but with a different topographical distribution. This suggests that involuntary attention shifts toward complex distractors operate comparably in school-aged children and adults, although the underlying neural generators are still maturing.
Affiliation(s)
- Philipp Ruhnau
- Center for Mind/Brain Science, University of Trento, Mattarello, Italy; Institute of Psychology, University of Leipzig, Leipzig, Germany
|
45
|
Ahveninen J, Kopčo N, Jääskeläinen IP. Psychophysics and neuronal bases of sound localization in humans. Hear Res 2013; 307:86-97. [PMID: 23886698] [DOI: 10.1016/j.heares.2013.07.008]
Abstract
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system remains open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge to the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
|
46
|
Gordon KA, Wong DDE, Papsin BC. Bilateral input protects the cortex from unilaterally-driven reorganization in children who are deaf. Brain 2013; 136:1609-25. [PMID: 23576127] [DOI: 10.1093/brain/awt052]
Affiliation(s)
- Karen A Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Room 6D08, 555 University Avenue, Toronto, Ontario, Canada.
|
47
|
Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. [PMID: 23499852] [DOI: 10.1016/j.neuropsychologia.2013.03.001]
Abstract
Differences in the processing of moving and static sounds in the human cortex were studied by electroencephalography with subjects performing an active discrimination task. Sound bursts were presented in the acoustic free field between 47° to the left and 47° to the right under three stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli randomly embedded within a stream of frequently occurring non-target events ('standards') and rare non-target stimuli ('deviants'). Each stimulus type was presented in blocks as either target, non-target, or standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e., at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a 300-400 ms time window (the 'd300-400 ms' component in the 'motion-minus-static' difference wave). For motion stimuli, N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, whereas for static stimuli N1 amplitudes over both hemispheres were in the same range. In contrast to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli.
The N1 components documented here comply with the results of previous studies on auditory space processing and suggest a contralateral dominance during the cortical integration of spatial acoustic information. Additionally, the cortical activity in the 'd300-400 ms' time period indicates that, in addition to motion origin (as reflected by the N1), the direction of motion (leftward/rightward), or rather motion termination, is cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is based not on a genuine motion-sensitive system but on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specificities of the P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion-specific attributes such as motion direction and/or velocity, which is preponderant over the left hemisphere.
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.
|
48
|
How anatomical asymmetry of human auditory cortex can lead to a rightward bias in auditory evoked fields. Neuroimage 2013; 74:22-9. [PMID: 23415949] [DOI: 10.1016/j.neuroimage.2013.02.002]
Abstract
Auditory evoked fields and potentials, such as the N1 or the 40-Hz steady-state response, are often stronger in the right than in the left auditory cortex. Here we investigated whether a greater degree of cortical folding in left auditory cortex could result in increased MEG signal cancelation and a subsequent bias in MEG auditory signals toward the right hemisphere. Signal cancelation, due to non-uniformity of the orientations of the underlying neural currents, affects MEG and EEG signals generated by any neuronal activity of reasonable spatial extent. We simulated MEG signals in patches of auditory cortex in seventeen subjects and measured the relationships between underlying activity distribution, cortical non-uniformity, signal cancelation, and resulting (fitted) dipole strength and position. Our results suggest that the cancelation of MEG signals from auditory cortex is asymmetric, due to the underlying anatomy, and that this asymmetry may result in a rightward bias in measurable dipole amplitudes. The effect was significant across all auditory areas tested, with the exception of the planum temporale. Importantly, we also show how the rightward bias could be partially or completely offset by increased cortical area, and therefore increased cortical activity, on the left side. We suggest that auditory researchers be aware of the impact of cancelation and its resulting rightward bias in signal strength from auditory cortex. These findings are important for studies seeking functional hemispheric specialization in the auditory cortex with MEG, as well as for integration of MEG with other imaging modalities.
|
49
|
Neural correlates of sound externalization. Neuroimage 2013; 66:22-7. [DOI: 10.1016/j.neuroimage.2012.10.057]
|
50
|
Irsel Tezer F, Ilhan B, Erbil N, Saygi S, Akalan N, Ungan P. Lateralisation of sound in temporal-lobe epilepsy: comparison between pre- and postoperative performances and ERPs. Clin Neurophysiol 2012; 123:2362-9. [PMID: 22883476] [DOI: 10.1016/j.clinph.2012.06.015]
Abstract
OBJECTIVE Our aim was to investigate whether spatial hearing is impaired in mesial temporal lobe epilepsy and whether temporal lobectomy affects this function. METHODS Thirteen patients with mesial temporal lobe epilepsy (TLE) due to sclerosis in the left (n=6) or right (n=7) hippocampus were studied. Their sound-lateralisation performance, indexed by d', was tested against that of a group of normal subjects (n=13). Patients' ERPs to lateralisation shifts induced by interaural disparities of intensity (IID) and time (ITD) were also recorded. Eight of the patients were re-tested after undergoing anterior temporal lobectomy, which involved resection of medial structures including the amygdala, hippocampus, and parahippocampal gyrus. RESULTS The sound-lateralisation performance of the TLE patients was significantly lower than that of normal subjects, and this disadvantage was specific to IID-based lateralisation. Amplitudes of their N1 and P2 responses to laterally shifting sounds were much lower than those previously reported for normal subjects. Lobectomy had no statistically significant effect on patients' sound-lateralisation performance or on the amplitude of their auditory directional ERPs. CONCLUSIONS The results show that IID-based sound-lateralisation performance in particular is impaired in TLE patients and that lobectomy should not cause further deterioration. SIGNIFICANCE This study suggests that tests assessing sound lateralisation based on each of the IID and ITD cues should be included in the evaluation of TLE patients.
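The d' index used here to score lateralisation performance is the standard signal-detection sensitivity measure: the difference of z-transformed hit and false-alarm rates. A minimal sketch with a common log-linear correction so that perfect rates stay finite (the authors' exact scoring procedure is not specified in the abstract):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    Adding 0.5 to every cell (log-linear correction) keeps z() finite when
    a rate would otherwise be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance (equal hit and false-alarm rates) yields d' = 0, and higher values indicate better discrimination of left- from right-lateralised shifts.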
Affiliation(s)
- F Irsel Tezer
- Hacettepe University, Faculty of Medicine, Department of Neurology, Ankara, Turkey.
|