1
İlhan B, Kurt S, Ungan P. Auditory cortical responses to abrupt lateralization shifts do not reflect the activity of hemifield-specific units involved in opponent coding of auditory space. Neuropsychologia 2023; 188:108629. [PMID: 37356539] [DOI: 10.1016/j.neuropsychologia.2023.108629] [Received: 03/08/2023] [Revised: 06/20/2023] [Accepted: 06/22/2023]
Abstract
Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD) based spatial coding in humans. Instead, a population-code model, the "opponent channels model" (OCM), has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Some studies have used event-related potentials (ERPs) to ITD changes to test the predictions of this model, treating the sounds before and after the change as adaptor and probe stimuli, respectively. These studies assume that the former stimulus adapts the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of the neurons selective to the side of the probe sound. However, these ERP components are known to form a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. They probably do not genuinely reflect the activity of stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which violates the crucial assumption of an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first, we recorded ERPs to abrupt lateralization shifts of click trains with various pre- and post-shift ITDs within the physiological range of −600 μs to +600 μs. Magnitudes of the ERP components P1, N1, and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed in an adaptor-probe paradigm, casting doubt on the accuracy of testing sensory coding models with ERPs to abrupt lateralization changes. Findings of the second experiment, involving ERPs to conjoint outwards/transverse shift stimuli, also supported this conclusion.
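The opponent-channels idea described in this abstract can be sketched numerically: two hemifield channels with mirrored sigmoidal ITD tuning, whose difference yields a lateralization readout. The following is a minimal illustration only; the slope, midpoint, and difference readout are assumptions for the sketch, not parameters from the study.

```python
import numpy as np

def channel_activity(itd_us, sign, slope=0.01, midpoint_us=0.0):
    """Sigmoidal tuning curve of one hemifield channel.

    sign=+1: channel preferring right-leading ITDs; sign=-1: left.
    slope and midpoint are illustrative values, not fitted ones.
    """
    return 1.0 / (1.0 + np.exp(-sign * slope * (itd_us - midpoint_us)))

def opponent_readout(itd_us):
    """Lateralization estimate as the difference of the two channel activities."""
    return channel_activity(itd_us, +1) - channel_activity(itd_us, -1)

# Readout over the physiological ITD range used in the study (-600 to +600 us):
# the difference is zero at the midline and grows monotonically toward the sides.
for itd in (-600, -300, 0, 300, 600):
    print(itd, round(opponent_readout(itd), 3))
```

The readout is antisymmetric about the midline, which is the property the opponent-coding account relies on.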
Affiliation(s)
- Barkın İlhan
- Department of Biophysics, Necmettin Erbakan University Meram Medical Faculty, Konya, Türkiye.
- Saliha Kurt
- Department of Audiometry, Selçuk University Vocational School of Health Services, Konya, Türkiye.
2
Altmann CF, Yamasaki D, Song Y, Bucher B. Processing of self-initiated sound motion in the human brain. Brain Res 2021; 1762:147433. [PMID: 33737062] [DOI: 10.1016/j.brainres.2021.147433] [Received: 08/31/2020] [Revised: 03/10/2021] [Accepted: 03/11/2021]
Abstract
Interacting with objects in our environment usually produces audible noise. Brain responses to such self-initiated sounds have been shown to be attenuated, in particular the so-called N1 component measured with electroencephalography (EEG). This attenuation has been proposed to reflect an internal forward model that cancels the sensory consequences of a motor command. In the current study, we asked whether the attenuation due to self-initiation of a sound also affects a later event-related potential, the so-called motion-onset response, that arises in response to moving sounds. To this end, volunteers were instructed to move their index fingers either leftward or rightward, which resulted in virtual movement of a sound to the left or to the right. In Experiment 1, sound motion was induced over in-ear headphones by shifting interaural time and intensity differences, thereby shifting the intracranial sound image. We compared the motion-onset responses under two conditions: (a) congruent and (b) incongruent. In the congruent condition, the sound image moved in the direction of the finger movement, while in the incongruent condition sound motion was opposite to the finger movement. Clear motion-onset responses, with a negative cN1 component peaking at about 160 ms and a positive cP2 component peaking at about 230 ms after motion onset, were obtained for both conditions. However, the motion-onset responses did not differ significantly between congruent and incongruent conditions in amplitude or latency. In Experiment 2, in which sounds were presented over loudspeakers, we observed attenuation for self-induced versus externally triggered sound motion onset, but again there was no difference between congruent and incongruent conditions. In sum, these two experiments suggest that the motion-onset response measured by EEG can be attenuated for self-generated sounds. However, our results did not indicate that this attenuation depends on the congruency of action and sound motion direction.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Parkinson-Klinik Ortenau, 77709 Wolfach, Germany.
- Daiki Yamasaki
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan; Japan Society for the Promotion of Science, Tokyo 102-0083, Japan
- Yunqing Song
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan
- Benoit Bucher
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan
3
Cai Y, Chen G, Zhong X, Yu G, Mo H, Jiang J, Chen X, Zhao F, Zheng Y. Influence of Audiovisual Training on Horizontal Sound Localization and Its Related ERP Response. Front Hum Neurosci 2018; 12:423. [PMID: 30405377] [PMCID: PMC6206041] [DOI: 10.3389/fnhum.2018.00423] [Received: 06/11/2018] [Accepted: 10/01/2018]
Abstract
The objective was to investigate the influence of audiovisual training on horizontal sound localization and its underlying neurological mechanisms, using a combination of psychoacoustic and electrophysiological (event-related potential, ERP) measurements. Audiovisual stimuli were used in the training group, whilst the control group was trained using auditory stimuli only. Training sessions were undertaken once per day for three consecutive days. Sound localization accuracy was evaluated daily after training using psychoacoustic tests; ERP responses were measured on the first and last day. Sound localization was significantly better in the audiovisual training group than in the control group. Moreover, a significantly greater reduction in front-back confusion ratio between pre- and post-test, for both trained and untrained angles, was found in the audiovisual training group. ERP measurements showed a decrease in N1 amplitude and an increase in P2 amplitude in both groups. However, changes in late components were found only in the audiovisual training group, with an increase in P400 amplitude and a decrease in N500 amplitude. These results suggest that the interactive effect of audiovisual localization training is likely mediated at a relatively late cognitive processing stage.
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Guisheng Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Xiaoli Zhong
- Acoustic Laboratory, Physics Department, South China University of Technology, Guangzhou, China
- Guangzheng Yu
- Acoustic Laboratory, Physics Department, South China University of Technology, Guangzhou, China
- Hanjie Mo
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Jiajia Jiang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Xiaoting Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Fei Zhao
- Department of Speech Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, United Kingdom; Department of Hearing and Speech Science, Xinhua College, Sun Yat-sen University, Guangzhou, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
4
Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018; 181:683-691. [PMID: 30053517] [DOI: 10.1016/j.neuroimage.2018.07.054] [Received: 06/02/2018] [Revised: 07/10/2018] [Accepted: 07/23/2018]
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp-recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones that was spectro-temporally modified so as to be perceived as moving randomly on a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by (1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and (2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that sound trajectory can be decoded with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta-band (<2 Hz) and the power of alpha-band (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse-train stimuli containing only interaural level and time differences (ILDs and ITDs, respectively), allowing us to investigate whether trajectory decoding is sensitive to both acoustic cues. We found that sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification, supporting the notion of integrated processing of ILD and ITD at the cortical level. These results are particularly relevant for applications such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
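The decoding pipeline outlined in this abstract (a linear regression model from EEG features to time-varying azimuth, then trajectory reconstruction) can be sketched on simulated data. This is a minimal illustration only: the lagged design matrix, ridge penalty, channel count, and helper names are assumptions for the sketch, not the study's actual pipeline or its 128-channel recordings.

```python
import numpy as np

def build_lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel into a design matrix.

    eeg: array of shape (n_samples, n_channels).
    Returns shape (n_samples, n_channels * n_lags); the first rows of
    the lagged columns are zero-padded.
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def fit_ridge_decoder(eeg, azimuth, n_lags=5, alpha=0.1):
    """Closed-form ridge regression from lagged EEG to sound azimuth."""
    X = build_lagged_design(eeg, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                           X.T @ azimuth)

def reconstruct_trajectory(eeg, weights, n_lags=5):
    """Apply the trained decoder to EEG to estimate the azimuth trajectory."""
    return build_lagged_design(eeg, n_lags) @ weights

# Toy demonstration: a sinusoidal trajectory embedded in four noisy
# simulated "EEG" channels, then decoded back.
rng = np.random.default_rng(0)
azimuth = np.sin(np.linspace(0, 10, 500))
eeg = np.column_stack([azimuth + 0.1 * rng.standard_normal(500)
                       for _ in range(4)])
w = fit_ridge_decoder(eeg, azimuth)
reconstruction = reconstruct_trajectory(eeg, w)
print(round(float(np.corrcoef(reconstruction, azimuth)[0, 1]), 3))
```

In practice the decoder would be trained and evaluated on separate data segments; the in-sample correlation printed here only shows that the mapping is recoverable when the signal is present.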