1. Shestopalova LB, Petropavlovskaia EA, Salikova DA, Semenova VV. Temporal integration of sound motion: Motion-onset response and perception. Hear Res 2024;441:108922. [PMID: 38043403] [DOI: 10.1016/j.heares.2023.108922]
Abstract
The purpose of our study was to estimate the time interval required to integrate the acoustical changes related to sound motion, using both psychophysical and EEG measures. Healthy listeners performed direction-identification tasks under dichotic conditions in the delayed-motion paradigm. The minimal audible movement angle (MAMA) was measured over a range of velocities from 60 to 360 deg/s. We also measured the minimal duration of motion at which listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening, and motion-onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity, and the minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of delayed motion, we found two- to threefold better spatial resolution than previously reported for motion starting at sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction-identification time obtained in the behavioral task and the cN1 latency behaved like hyperbolic functions of sound velocity. Direction-identification time decreased asymptotically to 8 ms, which was considered the minimal integration time for instantaneous shift detection. The peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction-discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration. Based on these findings, we may assume that slowly moving stimuli would evoke no measurable MOR, as they would traverse their MAMAs in a time longer than the optimal integration time.
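The two quantitative relations reported here are easy to illustrate numerically: MAMA is linear in velocity (its intercept estimates the MAA), while identification time and cN1 latency are hyperbolic in velocity. A minimal sketch with synthetic data follows; the constants are illustrative stand-ins, not the study's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(v, t_min, k):
    """Identification time (or cN1 latency) vs. velocity: approaches
    the asymptote t_min as velocity grows."""
    return t_min + k / v

# Synthetic velocities (deg/s) and identification times (ms), built to
# mimic the reported ~8 ms asymptote; replace with real data to use.
v = np.array([60.0, 120.0, 180.0, 240.0, 300.0, 360.0])
t = 8.0 + 1500.0 / v + np.random.default_rng(0).normal(0.0, 0.5, v.size)

(t_min, k), _ = curve_fit(hyperbolic, v, t, p0=(10.0, 1000.0))
print(f"asymptote ~ {t_min:.1f} ms, scale ~ {k:.0f} ms*deg/s")

# The linear relation MAMA(v) = MAA + slope * v would be fitted the same
# way; its intercept corresponds to the ~2 deg MAA reported above.
```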
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Diana A Salikova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
2. İlhan B, Kurt S, Ungan P. Auditory cortical responses to abrupt lateralization shifts do not reflect the activity of hemifield-specific units involved in opponent coding of auditory space. Neuropsychologia 2023;188:108629. [PMID: 37356539] [DOI: 10.1016/j.neuropsychologia.2023.108629]
Abstract
Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD)-based spatial coding in humans. Instead, a population-code model called the "opponent channels model" (OCM) has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Some studies have used event-related potentials (ERPs) to ITD changes to test the predictions of this model, treating the sounds before and after the change as adaptor and probe stimuli, respectively. These studies assume that the former stimulus adapts the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of neurons selective to the side of the probe sound. However, these ERP components are known to form a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. They probably do not genuinely reflect the activity of stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which violates the crucial assumption of an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first, we recorded ERPs to abrupt lateralization shifts of click trains with various pre- and post-shift ITDs within the physiological range of -600 µs to +600 µs. The magnitudes of the ERP components P1, N1 and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed for an adaptor-probe paradigm, casting doubt on the accuracy of testing sensory coding models with ERPs to abrupt lateralization changes. Findings of the second experiment, involving ERPs to conjoint outward/transverse shift stimuli, also supported this conclusion.
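The opponent channels model described above can be sketched in a few lines: two hemifield channels with mirror-symmetric sigmoidal ITD tuning, with lateralization read out from their relative activity. Parameter values below are illustrative assumptions, not fitted values:

```python
import numpy as np

def opponent_channels(itd_us, slope=0.01):
    """Left and right hemifield channels with mirror-symmetric sigmoidal
    tuning over ITD (positive ITD = right-leading stimulus)."""
    right = 1.0 / (1.0 + np.exp(-slope * itd_us))
    left = 1.0 / (1.0 + np.exp(slope * itd_us))
    return left, right

# Perceived laterality as the difference of channel activities across
# the physiological ITD range used in the study.
for itd in np.linspace(-600, 600, 7):
    left, right = opponent_channels(itd)
    print(f"ITD {itd:+6.0f} us -> L {left:.2f}, R {right:.2f}, laterality {right - left:+.2f}")
```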
Affiliation(s)
- Barkın İlhan
- Department of Biophysics, Necmettin Erbakan University Meram Medical Faculty, Konya, Türkiye
- Saliha Kurt
- Department of Audiometry, Selçuk University Vocational School of Health Services, Konya, Türkiye
3. Zhang H, Xie J, Xiao Y, Cui G, Xu G, Tao Q, Gebrekidan YY, Yang Y, Ren Z, Li M. Steady-state auditory motion based potentials evoked by intermittent periodic virtual sound source and the effect of auditory noise on EEG enhancement. Hear Res 2023;428:108670. [PMID: 36563411] [DOI: 10.1016/j.heares.2022.108670]
Abstract
Hearing is one of the most important forms of human perception, and humans can track the movement of sounds in complex environments. Building on this ability, this study explored whether an intermittently and periodically moving sound source can elicit a steady-state brain response. A novel stimulation paradigm was designed in which virtual sound source positions, generated with head-related transfer functions (HRTFs), changed in a discrete, continuous and orderly manner. Auditory motion stimulation paradigms with different noise levels were then constructed by varying the signal-to-noise ratio (SNR). The characteristics of the brain response, and the effects of different noise levels on it, were studied by analyzing the electroencephalogram (EEG) signals evoked by the proposed stimulation. Experimental results showed that the paradigm elicited a novel steady-state auditory evoked potential (AEP), the steady-state motion auditory evoked potential (SSMAEP), and that moderate noise enhanced SSMAEP amplitude and the corresponding brain connectivity. This study enriches the known types of AEPs and provides insights into how the brain processes moving sound sources and how noise affects that processing.
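The SNR manipulation is the standard mixing recipe: scale the noise so that the signal-to-noise power ratio of the mixture hits the target value. A generic sketch (a plain tone stands in for the HRTF-rendered motion stimulus); this is not the authors' code:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Return signal + noise, with the noise scaled so that the
    signal-to-noise ratio equals snr_db."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
    return signal + scale * noise

fs = 44100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 500.0 * t)              # stand-in for the moving virtual source
noise = np.random.default_rng(1).standard_normal(fs)
mixture = mix_at_snr(signal, noise, snr_db=0.0)     # 0 dB: equal signal and noise power
```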
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yuzhe Yang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhiyuan Ren
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
4. Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023;378:20210460. [PMID: 36511418] [PMCID: PMC9745882] [DOI: 10.1098/rstb.2021.0460]
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
5. Altmann CF, Yamasaki D, Song Y, Bucher B. Processing of self-initiated sound motion in the human brain. Brain Res 2021;1762:147433. [PMID: 33737062] [DOI: 10.1016/j.brainres.2021.147433]
Abstract
Interacting with objects in our environment usually produces audible noise. Brain responses to such self-initiated sounds have been shown to be attenuated, in particular the so-called N1 component measured with electroencephalography (EEG). This attenuation has been proposed to be the effect of an internal forward model that allows the sensory consequences of a motor command to be cancelled. In the current study we asked whether the attenuation due to self-initiation of a sound also affects a later event-related potential, the so-called motion-onset response, which arises in response to moving sounds. To this end, volunteers were instructed to move their index fingers either leftward or rightward, which resulted in virtual movement of a sound to the left or to the right. In Experiment 1, sound motion was induced through in-ear headphones by shifting interaural time and intensity differences, thereby shifting the intracranial sound image. We compared the motion-onset responses under two conditions: (a) congruent, in which the sound image moved in the direction of the finger movement, and (b) incongruent, in which sound motion was opposite to the finger movement. Clear motion-onset responses, with a negative cN1 component peaking at about 160 ms and a positive cP2 component peaking at about 230 ms after motion onset, were obtained for both conditions. However, the motion-onset responses did not differ significantly between congruent and incongruent conditions in amplitude or latency. In Experiment 2, in which sounds were presented through loudspeakers, we observed attenuation for self-induced versus externally triggered sound motion onset, but again there was no difference between congruent and incongruent conditions. In sum, these two experiments suggest that the motion-onset response measured by EEG can be attenuated for self-generated sounds, but our results did not indicate that this attenuation depends on the congruency of action and sound motion direction.
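The headphone motion of Experiment 1 can be approximated by ramping the interaural time difference over the stimulus; a minimal sketch for a pure-tone carrier (carrier frequency, duration and maximal ITD are assumed values):

```python
import numpy as np

def itd_ramp_motion(fs=44100, dur=1.0, carrier=500.0, itd_max_us=600.0):
    """Dichotic tone whose ITD ramps linearly from 0 to itd_max_us.
    The right channel progressively lags, so the intracranial image
    drifts from the midline toward the left ear."""
    t = np.arange(int(fs * dur)) / fs
    itd_s = np.linspace(0.0, itd_max_us * 1e-6, t.size)   # per-sample delay (s)
    left = np.sin(2 * np.pi * carrier * t)
    right = np.sin(2 * np.pi * carrier * (t - itd_s))
    return np.stack([left, right])                        # (2, n_samples) stereo

stereo = itd_ramp_motion()  # adding a matched intensity ramp would trade ILD against ITD
```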
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Parkinson-Klinik Ortenau, 77709 Wolfach, Germany
- Daiki Yamasaki
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan; Japan Society for the Promotion of Science, Tokyo 102-0083, Japan
- Yunqing Song
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan
- Benoit Bucher
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan
6. Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Brain oscillations evoked by sound motion. Brain Res 2020;1752:147232. [PMID: 33385379] [DOI: 10.1016/j.brainres.2020.147232]
Abstract
The present study investigates the event-related oscillations underlying the motion-onset response (MOR) evoked by sounds moving at different velocities. EEG was recorded for stationary sounds and for three patterns of sound motion produced by changes in interaural time differences. We explored the effect of motion velocity on the MOR potential, and also on the event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) calculated from the time-frequency decomposition of the EEG signals. The phase coherence of slow oscillations increased with motion velocity, similarly to the magnitudes of the cN1 and cP2 components of the MOR. The delta-to-alpha inter-trial spectral power remained at the same level up to, but not including, the highest velocity, suggesting that gradual spatial changes within the sound did not induce non-coherent activity. Conversely, abrupt sound displacement induced theta-alpha oscillations with low phase consistency. The findings suggest that the MOR potential may be generated mainly by phase resetting of slow oscillations, and that the degree of phase coherence can be considered a neurophysiological indicator of sound motion processing.
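Inter-trial phase coherence, the key measure here, has a compact definition: the magnitude of the across-trials mean of unit phase vectors. A minimal sketch on synthetic band-limited data (in practice the phase would come from the same time-frequency decomposition used for the ERSP):

```python
import numpy as np
from scipy.signal import hilbert

def itc(trials):
    """Inter-trial phase coherence of band-limited epochs
    (n_trials x n_samples): |mean over trials of exp(i * phase)|."""
    phase = np.angle(hilbert(trials, axis=1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

rng = np.random.default_rng(0)
fs, n_trials, n = 500, 100, 500
t = np.arange(n) / fs
# A phase-locked 2 Hz component plus trial noise: ITC is pulled toward 1
# where the evoked component dominates and toward 0 for pure noise.
trials = np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal((n_trials, n))
print(f"mean ITC: {itc(trials).mean():.2f}")
```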
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
- Nikolay I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia
7. Cortical processing of location and frequency changes of sounds in normal hearing listeners. Hear Res 2020;400:108110. [PMID: 33220506] [DOI: 10.1016/j.heares.2020.108110]
Abstract
Sounds we hear in daily life contain changes in acoustic features (e.g., frequency, intensity and duration: "what" information) and/or changes in location ("where" information). The purpose of this study was to examine the cortical auditory evoked potentials (CAEPs) to a change within a stimulus, the acoustic change complex (ACC), for changes in frequency (F) and location (L) in normal hearing listeners. Fifteen right-handed young normal hearing listeners participated in the electroencephalographic (EEG) recordings. The acoustic stimuli were 1-s pure tones (base frequency 250 Hz) with a perceivable change either in location (L, 180°), in frequency (F, 5% or 50%), or in both location and frequency (L+F) at the middle of the tone. A 1-s 250 Hz tone without any change served as a reference. The participants were asked to listen passively to the stimuli and not to move their heads during testing. Whereas the reference tone elicited only the onset CAEP, the tones containing changes (L, F, or L+F) elicited both the onset CAEP and the ACC. Waveform analysis of the ACCs at the vertex electrode (Cz) showed that larger sound changes evoked larger peak amplitudes (e.g., L+50%F > L; L+50%F > 5%F) and shorter peak latencies (e.g., L+5%F < 5%F; 50%F < 5%F; L+50%F < 5%F). The current-density patterns for the ACC N1' peak displayed some differences between the L-change and the F-change, supporting different cortical processing of "where" and "what" information; regardless of the nature of the change, larger changes evoked stronger activation than smaller changes (e.g., L > 5%F; L+5%F > 5%F; 50%F > 5%F) in regions including the cingulate gyrus, medial frontal gyrus (MFG) and superior frontal gyrus (SFG) of the frontal lobe, the cingulate gyrus of the limbic lobe, and the postcentral gyrus of the parietal lobe. The results suggest that sound-change detection involves a memory-based acoustic comparison (the neural encoding of the sound change versus the stored neural encoding of the pre-change stimulus) and an involuntary attention switch.
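The change stimuli described, a 1-s tone that steps in frequency at its midpoint, are simple to synthesize; a sketch of the 5% frequency-change condition, with phase continuity at the boundary so that no onset transient confounds the ACC (tone parameters follow the abstract, the rest is assumption):

```python
import numpy as np

def acc_tone(fs=44100, dur=1.0, f0=250.0, change_ratio=1.05):
    """1-s tone stepping from f0 to f0 * change_ratio at the midpoint,
    phase-continuous across the change point."""
    half = int(fs * dur / 2)
    t = np.arange(half) / fs
    first = np.sin(2 * np.pi * f0 * t)
    phase0 = 2 * np.pi * f0 * half / fs          # carry phase across the boundary
    second = np.sin(phase0 + 2 * np.pi * f0 * change_ratio * t)
    return np.concatenate([first, second])

tone_5 = acc_tone()                    # 5% frequency change
tone_50 = acc_tone(change_ratio=1.5)   # 50% change; the 180 deg location change
                                       # would instead swap presentation side mid-tone
```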
8. Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Lateralization of brain responses to auditory motion: A study using single-trial analysis. Neurosci Res 2020;162:31-44. [PMID: 32001322] [DOI: 10.1016/j.neures.2020.01.007]
Abstract
The present study investigates the hemispheric asymmetry of the ERPs and low-frequency oscillatory responses evoked in both hemispheres by sound stimuli with delayed motion onset. EEG was recorded for three patterns of sound motion produced by changes in interaural time differences. Event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) were computed from the time-frequency decomposition of the EEG signals. The participants either read books of their choice (passive listening) or traced the perceived sound trajectories on a graphic tablet (active listening). Our goal was to find out whether the lateralization of the motion-onset response (MOR) and of the oscillatory responses to sound motion was more consistent with the right-hemispheric dominance, contralateral, or neglect model of interhemispheric asymmetry. Clear dominance of the right hemisphere was found only in the ERSP responses. Stronger contralaterality of the left hemisphere, corresponding to the "neglect model" of asymmetry, was shown by the MOR components and by the phase coherence of the delta-alpha oscillations. Velocity and attention did not consistently change the interhemispheric asymmetry of either the MOR or the oscillatory responses. Our findings demonstrate how the lateralization pattern of the MOR potential is interrelated with that of the motion-related single-trial measures.
Affiliation(s)
- L B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia
- E A Petropavlovskaia
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia
- V V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia
- N I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 St. Petersburg, Russia
9. Auditory motion perception emerges from successive sound localizations integrated over time. Sci Rep 2019;9:16437. [PMID: 31712688] [PMCID: PMC6848124] [DOI: 10.1038/s41598-019-52742-0]
Abstract
Humans rely on auditory information to estimate the path of moving sound sources. Yet unlike in vision, the existence of motion-sensitive mechanisms in audition is still open to debate. Psychophysical studies indicate that auditory motion perception emerges from successive localizations, but existing models fail to predict experimental results, in part because they do not account for temporal integration. We propose a new model that tracks motion using successive localization snapshots integrated over time. The model is derived from psychophysical experiments on the upper limit for circular auditory motion perception (UL), defined as the speed above which humans can no longer identify the direction of sounds spinning around them. Our model predicts the ULs measured with different stimuli using solely static localization cues: temporal integration blurs these cues, rendering them unreliable at high speeds, which produces the UL. Our findings indicate that auditory motion perception does not require motion-sensitive mechanisms.
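The snapshot-integration idea can be caricatured in a toy simulation (not the authors' implementation): noisy static-localization estimates are averaged over an integration window, and at high speeds the window spans so large an arc of the circle that the integrated estimates no longer support a direction judgement. Window length, snapshot noise and sampling rate below are assumptions:

```python
import numpy as np

def estimate_azimuth(speed, t_center, window_s, rate, noise_deg, rng):
    """Integrated location estimate: circular mean of noisy static
    localization snapshots within one temporal integration window."""
    n = int(window_s * rate)
    t = t_center + (np.arange(n) - n / 2) / rate
    snaps = np.deg2rad(speed * t + rng.normal(0.0, noise_deg, n))
    return np.angle(np.mean(np.exp(1j * snaps)))

def p_correct(speed, trials=1000, window_s=0.15, gap_s=0.1,
              noise_deg=20.0, rate=200.0, seed=0):
    """Direction judgement from two integrated estimates taken gap_s apart."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        t0 = rng.uniform(0.0, 1.0)
        a1 = estimate_azimuth(speed, t0, window_s, rate, noise_deg, rng)
        a2 = estimate_azimuth(speed, t0 + gap_s, window_s, rate, noise_deg, rng)
        hits += np.angle(np.exp(1j * (a2 - a1))) > 0   # wrapped angular difference
    return hits / trials

for speed in (200, 1000, 2500):  # deg/s; performance collapses near an upper limit
    print(f"{speed:5d} deg/s -> P(correct direction) = {p_correct(speed):.2f}")
```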
10. Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019;39:2208-2220. [PMID: 30651333] [DOI: 10.1523/JNEUROSCI.2289-18.2018]
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for efficient interaction with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.
SIGNIFICANCE STATEMENT: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex, by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
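The cross-condition decoding logic, training on one condition and testing on the other to probe shared pattern geometry, can be sketched with scikit-learn on synthetic voxel patterns; this illustrates the analysis idea only, not the authors' pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 80
shared_axis = rng.standard_normal(n_voxels)   # pattern axis shared by both conditions

def make_patterns(strength):
    """Synthetic left/right trial patterns expressed along the shared axis."""
    y = rng.choice([-1, 1], n_trials)
    X = np.outer(y, shared_axis) * strength + rng.standard_normal((n_trials, n_voxels))
    return X, y

X_motion, y_motion = make_patterns(strength=0.5)   # motion directions (more reliable)
X_static, y_static = make_patterns(strength=0.3)   # static source locations

clf = LinearSVC().fit(X_motion, y_motion)          # train on motion...
print(f"cross-condition accuracy: {clf.score(X_static, y_static):.2f}")  # ...test on location
```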
11. Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018;181:683-691. [PMID: 30053517] [DOI: 10.1016/j.neuroimage.2018.07.054]
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp-recorded electroencephalography (EEG). In a first experiment, subjects listened over headphones to pink noise that was spectro-temporally modified to be perceived as moving randomly along a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG with a 128-channel acquisition system. The data were analysed by (1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and (2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that sound trajectory can be decoded with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and the power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse-train stimuli containing only interaural level or time differences (ILDs and ITDs, respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification, supporting the notion of integrated processing of ILD and ITD at the cortical level. These results are particularly relevant for applications such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
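The decoding pipeline described, a linear regression from EEG to the time-varying azimuth, can be sketched with ridge regression on synthetic data (in the study the features were lagged EEG, with delta phase and alpha power carrying the information; everything below is an illustrative assumption):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 128
azimuth = np.cumsum(rng.normal(0.0, 1.0, n_samples))     # random-walk trajectory (a.u.)
forward = rng.standard_normal(n_channels)                # forward model: azimuth -> EEG
eeg = np.outer(azimuth, forward) + 5.0 * rng.standard_normal((n_samples, n_channels))

X_tr, X_te, y_tr, y_te = train_test_split(eeg, azimuth, test_size=0.25, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)               # linear stimulus-reconstruction decoder
r = np.corrcoef(decoder.predict(X_te), y_te)[0, 1]
print(f"reconstruction accuracy (Pearson r): {r:.2f}")   # compare against shuffled fits for chance
```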
12. Sound frequency affects the auditory motion-onset response in humans. Exp Brain Res 2018;236:2713-2726. [PMID: 29998350] [DOI: 10.1007/s00221-018-5329-9]
Abstract
The current study examines how the motion-onset response is modulated by the frequency range of sound stimuli. Delayed-motion and stationary stimuli were presented in the free field by sequentially activating loudspeakers along the azimuthal plane, preserving the natural percept of externalized sound. The sounds were presented in low- or high-frequency ranges and differed in motion direction within each hemifield. Difference waves were calculated by contrasting the moving and stationary sounds to isolate the motion-onset responses. Analyses of the peak amplitudes and latencies of the difference waves showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with larger amplitudes elicited by stimuli in the high-frequency range. A subsequent post hoc analysis of the normalized amplitude of the motion response confirmed this finding by excluding the possibility that the frequency range had an overall effect on the waveform, showing instead that the effect was limited to the motion response. These results support the idea of a modular organization of the motion-onset response, with the processing of primary sound-motion characteristics reflected in the early part of the response. The article also highlights the importance of specificity in auditory stimulus design.
13. Event-Related Potentials to Sound Stimuli with Delayed Onset of Motion in Conditions of Active and Passive Listening. Neurosci Behav Physiol 2017. [DOI: 10.1007/s11055-017-0536-6]
14. Altmann CF, Ueda R, Bucher B, Furukawa S, Ono K, Kashino M, Mima T, Fukuyama H. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography. Neuroimage 2017;159:185-194. [DOI: 10.1016/j.neuroimage.2017.07.055]
15. Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias. Proc Natl Acad Sci U S A 2017;114:9743-9748. [PMID: 28827336] [DOI: 10.1073/pnas.1703247114]
Abstract
Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.
16. Ozmeral EJ, Eddins DA, Eddins AC. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference. J Neurophysiol 2016;116:2720-2729. [PMID: 27683889] [PMCID: PMC5133308] [DOI: 10.1152/jn.00560.2016]
Abstract
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than being narrowly tuned to ITDs, neural channels are broadly tuned to ITDs in either the left or the right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens, and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory control in the auditory system or to degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to such age-related physiological changes. We consider two distinct predictions of age-related changes in ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes of the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrowband noise. The results were more in line with the latter outcome, as indicated by the latencies and amplitudes of both the global field potentials and the source-localized waveforms in the left and right auditory cortices. Responses of older listeners also tended to show a reduced asymmetric distribution of activity to ITD shifts, consistent with other sensory and cognitive processing models of aging.
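The two competing predictions can be made concrete with a sigmoidal rate-azimuth function: reduced inhibition shifts its midpoint medially and raises excitation, whereas reduced synchrony flattens its slope and lowers excitation. Parameter values are illustrative only:

```python
import numpy as np

def rate_azimuth(az_deg, midpoint_deg=0.0, slope=0.05, gain=1.0):
    """Sigmoidal rate-azimuth function of a right-hemifield channel."""
    return gain / (1.0 + np.exp(-slope * (az_deg - midpoint_deg)))

az = np.linspace(-90, 90, 7)
baseline = rate_azimuth(az)
shifted = rate_azimuth(az, midpoint_deg=-20.0)  # prediction 1: medial shift, more excitation
shallow = rate_azimuth(az, slope=0.02)          # prediction 2: shallower slope, less excitation
for a, b, s1, s2 in zip(az, baseline, shifted, shallow):
    print(f"az {a:+5.0f} deg: baseline {b:.2f}  shifted {s1:.2f}  shallow {s2:.2f}")
```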
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Ann C Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
17. Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016;91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011]
18. Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016;20:2331216516644254. [PMID: 27094029] [PMCID: PMC4871213] [DOI: 10.1177/2331216516644254]
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command-and-control as well as entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world as stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile
- School of Medical Sciences, University of Sydney, NSW, Australia; Starkey Hearing Research Center, Berkeley, CA, USA
- Johahn Leung
- School of Medical Sciences, University of Sydney, NSW, Australia
19. Yost WA, Zhong X, Najam A. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process. J Acoust Soc Am 2015;138:3293-3310. [PMID: 26627802] [DOI: 10.1121/1.4935091]
Abstract
In four experiments, listeners were either rotated or stationary, while sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuthal array. When either sounds or listeners rotate, the auditory cues used for sound source localization change; yet in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location, however, indicate locations relative to the head (a head-centric reference system), not relative to the world. This paper addresses the general hypothesis that determining the world-centric location of sound sources requires the auditory system to combine information about the auditory cues for source location with cues about head position. The use of visual and vestibular information in determining rotating head position during sound rotation perception was investigated. The experiments show that the perception of sound rotation when sources and listeners rotate is based on acoustic, visual, and perhaps vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone; it is a multisystem process.
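The hypothesis reduces to a simple coordinate transform: the world-centric azimuth of a source is the head-centric azimuth given by the acoustic cues plus the head orientation supplied by vision and/or the vestibular system. A minimal sketch:

```python
def world_azimuth(head_centric_deg, head_orientation_deg):
    """Combine the acoustically derived head-centric azimuth with head
    orientation (visual/vestibular) to recover the world-centric azimuth,
    wrapped to [-180, 180)."""
    az = head_centric_deg + head_orientation_deg
    return (az + 180.0) % 360.0 - 180.0

# Listener has rotated 90 deg to the right; the acoustic cues now place the
# source 90 deg to the left, so the source is recovered as stationary ahead.
print(world_azimuth(-90.0, 90.0))  # -> 0.0
```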
Affiliation(s)
- William A Yost
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Xuan Zhong
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Anbar Najam
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
20. Grzeschik R, Lewald J, Verhey JL, Hoffmann MB, Getzmann S. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials. Eur J Neurosci 2015;43:66-77. [PMID: 26469706] [DOI: 10.1111/ejn.13102]
Abstract
Adaptation to visual or auditory motion affects within-modality motion processing, as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, and motion in (ii) the same or (iii) the opposite direction of the test stimulus were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed: the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific: cP1 was smaller after scatter than after motion adaptation in either direction. No effects were found for later components of the motion-onset AEPs. While the VEP results provided clear evidence for a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific, effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected by the AEPs of the present study.
Affiliation(s)
- Ramona Grzeschik
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, D-39120 Magdeburg, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Auditory Cognitive Neuroscience Laboratory, Ruhr University Bochum, Bochum, Germany; Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jesko L Verhey
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, D-39120 Magdeburg, Germany; Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Michael B Hoffmann
- Center for Behavioral Brain Sciences, Magdeburg, Germany; Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Stephan Getzmann
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
21. Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Contextual effects on preattentive processing of sound motion as revealed by spatial MMN. Int J Psychophysiol 2015;96:49-56. [DOI: 10.1016/j.ijpsycho.2015.02.021]
22.
Abstract
The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance.
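For the principal horizontal cue mentioned here, the ITD, the classic Woodworth spherical-head approximation gives the cue magnitude as a function of azimuth; this is a standard textbook formula, not something specific to this chapter:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Woodworth approximation for a spherical head: ITD = (r/c) * (theta
    + sin(theta)), theta being the azimuth from the median plane."""
    theta = np.deg2rad(azimuth_deg)
    return head_radius_m / c_m_s * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {woodworth_itd(az) * 1e6:.0f} us")
```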
Affiliation(s)
- John C Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA
23.
Abstract
Neurophysiological findings have suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, starting already in primary auditory cortex. Here, the effect of visual motion on the processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in both the timing and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found as early as about 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
24. Wang Q, Bao M, Chen L. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display. Exp Brain Res 2013;232:273-282. [PMID: 24141518] [DOI: 10.1007/s00221-013-3738-3]
Abstract
Previous studies using auditory sequences with rapidly repeating tones revealed that spatiotemporal and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping in such sequences was partially driven by cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual grouping (spatiotemporal grouping vs. frequency grouping) also applies to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), each consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors: we implemented three 'within-frame intervals' (WFIs, the intervals between A and B and between B and C), seven 'inter-frame intervals' (IFIs, the intervals between AB and BC) and two speaker layouts (near or far inter-speaker distance). Experiment 2 additionally manipulated the frequency difference between the two auditory frames. Listeners made a two-alternative forced choice (2AFC) to report their perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (as indexed by perceptual decisions on the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events, whereas spatial layout played a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
Affiliation(s)
- Qingcui Wang
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
25. Lewald J, Getzmann S. Ventral and dorsal visual pathways support auditory motion processing in the blind: evidence from electrical neuroimaging. Eur J Neurosci 2013;38:3201-3209. [DOI: 10.1111/ejn.12306]
Affiliation(s)
- Jörg Lewald
- Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann
- Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
26. Grzeschik R, Böckmann-Barthel M, Mühler R, Verhey JL, Hoffmann MB. Direction-specific adaptation of motion-onset auditory evoked potentials. Eur J Neurosci 2013;38:2557-2565. [PMID: 23725339] [DOI: 10.1111/ejn.12264]
Abstract
Auditory evoked potentials (AEPs) to motion onset in humans are dominated by a fronto-central complex, with a change-negative deflection 1 (cN1) and a change-positive deflection 2 (cP2) component. Here the contribution of veridical motion detectors to motion-onset AEPs was investigated with the hypothesis that direction-specific adaptation effects would indicate the contribution of such motion detectors. AEPs were recorded from 33 electroencephalographic channels to the test stimulus, i.e. motion onset of horizontal virtual auditory motion (60° per s) from straight ahead to the left. AEPs were compared in two experiments for three conditions, which differed in their history prior to the motion-onset test stimulus: (i) without motion history (Baseline), (ii) with motion history in the same direction as the test stimulus (Adaptation Same), and (iii) a reference condition with auditory history. For Experiment 1, condition (iii) comprised motion in the opposite direction (Adaptation Opposite). For Experiment 2, a noise in the absence of coherent motion (Matched Noise) was used as the reference condition. In Experiment 1, the amplitude difference cP2 - cN1 obtained for Adaptation Same was significantly smaller than for Baseline and Adaptation Opposite. In Experiment 2, it was significantly smaller than for Matched Noise. Adaptation effects were absent for cN1 and cP2 latencies. These findings demonstrate direction-specific adaptation of the motion-onset AEP. This suggests that veridical auditory motion detectors contribute to the motion-onset AEP.
Affiliation(s)
- Ramona Grzeschik
- Department of Ophthalmology, Visual Processing Laboratory, Otto von Guericke University Magdeburg, Leipziger Strasse 44, 39120 Magdeburg, Germany