1. Shestopalova LB, Petropavlovskaia EA, Salikova DA, Semenova VV. Temporal integration of sound motion: Motion-onset response and perception. Hear Res 2024; 441:108922. PMID: 38043403. DOI: 10.1016/j.heares.2023.108922.
Abstract
The purpose of our study was to estimate the time interval required to integrate the acoustical changes produced by sound motion, using both psychophysical and EEG measures. Healthy listeners performed direction identification tasks under dichotic conditions in the delayed-motion paradigm. The minimal audible movement angle (MAMA) was measured over velocities from 60 to 360 deg/s. We also measured the minimal duration of motion at which listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening, and motion-onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity; the minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of the delayed motion, we found 2- to 3-fold better spatial resolution than previously reported for motion starting at sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction identification time obtained in the behavioral task and the cN1 latency behaved like hyperbolic functions of sound velocity. Direction identification time decreased asymptotically to 8 ms, which was considered the minimal integration time for detecting an instantaneous shift. Peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration.
Based on these findings, we may assume that no measurable MOR would be evoked by slowly moving stimuli, as they would reach their MAMAs in a time longer than the optimal integration time.
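The asymptotic velocity dependence described in this abstract can be sketched as a toy hyperbolic model, T(v) = T_min + k/v. Only the asymptotes (8 ms for behavioral identification time, 137 ms for cN1 latency) come from the abstract; the function `hyperbolic_time` and the scale factor `k` are illustrative assumptions, not the authors' fitted model.

```python
# Toy sketch (NOT the authors' fit): a hyperbolic dependence of a time
# measure on motion velocity, T(v) = T_min + k / v, falling toward the
# asymptote T_min as velocity grows.

def hyperbolic_time(velocity_deg_s: float, t_min_ms: float, k: float) -> float:
    """Time measure (ms) that decreases asymptotically to t_min_ms."""
    return t_min_ms + k / velocity_deg_s

# Hypothetical scale factor, chosen only to display the asymptotic behavior.
K = 1500.0  # ms * deg/s (invented)

for v in (60, 120, 240, 360):
    rt = hyperbolic_time(v, 8.0, K)     # behavioral floor from abstract: 8 ms
    cn1 = hyperbolic_time(v, 137.0, K)  # cN1 latency floor from abstract: 137 ms
    print(f"v = {v:>3} deg/s: identification ~{rt:6.1f} ms, cN1 ~{cn1:6.1f} ms")
```

At very high velocities both curves flatten onto their respective floors, which is the behavior the abstract reports for instantaneous shifts.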
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia.
- Diana A Salikova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
2. Zhang H, Xie J, Tao Q, Xiao Y, Cui G, Fang W, Zhu X, Xu G, Li M, Han C. The effect of motion frequency and sound source frequency on steady-state auditory motion evoked potential. Hear Res 2023; 439:108897. PMID: 37871451. DOI: 10.1016/j.heares.2023.108897.
Abstract
The human ability to perceive moving sound sources is important for responding accurately to the environment. Periodically moving sound sources can elicit the steady-state motion auditory evoked potential (SSMAEP). The purpose of this study was to investigate the effects of motion frequency and sound source frequency on SSMAEP. Stimulation paradigms simulating periodic motion of sound sources were designed using head-related transfer function (HRTF) techniques. Motion frequencies were set to 1-10 Hz, 15 Hz, 20 Hz, 30 Hz, 40 Hz, 60 Hz, and 80 Hz. In addition, sound source frequencies of 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, and 4000 Hz were tested at motion frequencies of 6 Hz and 40 Hz. Fourteen subjects with normal hearing were recruited. SSMAEP was elicited by a 500 Hz pure tone at all tested motion frequencies and was strongest at a motion frequency of 6 Hz. Moreover, at the 6 Hz motion frequency, the SSMAEP amplitude was largest for the 500 Hz tone and smallest for the 4000 Hz tone, whereas the SSMAEP elicited by the 4000 Hz pure tone was strongest at a motion frequency of 40 Hz. SSMAEP can thus be elicited by periodically moving sound sources at motion frequencies up to 80 Hz, with strong responses also at lower frequencies. Low-frequency pure tones enhance SSMAEP for low-frequency source motion, whilst high-frequency pure tones enhance SSMAEP for high-frequency source motion. The study provides new insight into the brain's perception of rhythmic auditory motion.
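Steady-state responses such as SSMAEP are conventionally quantified as the EEG spectral amplitude at the stimulation (here, motion) frequency. Below is a minimal sketch of that analysis on synthetic data; the 6 Hz signal amplitude, noise level, and function name are illustrative assumptions, not values from the study.

```python
import numpy as np

def amplitude_at_frequency(eeg: np.ndarray, fs: float, f_target: float) -> float:
    """Single-sided spectral amplitude of a single-channel trace at f_target (Hz)."""
    spectrum = np.fft.rfft(eeg) / len(eeg) * 2.0      # single-sided scaling
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_target))         # nearest frequency bin
    return float(np.abs(spectrum[idx]))

# Synthetic "EEG": a 6 Hz steady-state component buried in noise, mimicking
# a response to a sound source moving periodically at 6 Hz (invented data).
fs = 500.0                        # sampling rate, Hz
t = np.arange(0, 10.0, 1.0 / fs)  # 10 s of data -> 0.1 Hz bin resolution
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 6.0 * t) + rng.normal(0.0, 1.0, t.size)

print(amplitude_at_frequency(eeg, fs, 6.0))   # close to 2 (the signal amplitude)
print(amplitude_at_frequency(eeg, fs, 11.0))  # near 0 (noise floor)
```

The amplitude at the motion frequency stands far above neighboring bins, which is how a steady-state response is typically detected against the EEG noise floor.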
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China.
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China.
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Wenhu Fang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Xinyu Zhu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Chengcheng Han
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
3. Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. PMID: 28472038. PMCID: PMC5417421. DOI: 10.1371/journal.pbio.2001379. Open access.
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
4. Davis TJ, Grantham DW, Gifford RH. Effect of motion on speech recognition. Hear Res 2016; 337:80-88. PMID: 27240478. DOI: 10.1016/j.heares.2016.05.011.
Abstract
The benefit of spatial separation between talkers in a multi-talker environment is well documented. However, few studies have examined the effect of talker motion on speech recognition. In the current study, we evaluated the effects of (1) motion of the target or distracters, (2) a priori information about the target and distracter spatial configurations, and (3) target and distracter location. In total, seventeen young adults with normal hearing were tested in a large anechoic chamber in two experiments. In Experiment 1, seven stimulus conditions were tested using the Coordinate Response Measure speech corpus (Bolia et al., 2000), in which subjects were required to report the key words in a target sentence presented simultaneously with two distracter sentences. As in previous studies, key word identification improved significantly when the target and distracters were spatially separated rather than co-located. In addition, (1) motion of either the target or the distracters improved performance compared to stationary presentation, with target motion yielding significantly better performance than distracter motion; (2) a priori information regarding stimulus configuration was not beneficial; and (3) performance was significantly better with key words at 0° azimuth than at -60° (on the listener's left). Experiment 2 included two additional conditions designed to assess whether the benefit of motion observed in Experiment 1 was due to the motion itself or to the small spatial separations of the target and distracter key words that the motion conditions introduced. Results showed that small spatial separations (on the order of 5-8°) improved performance (relative to co-located key words) whether the sentences were moving or stationary.
These results suggest that in the presence of distracting messages, motion of either target or distracters and/or small spatial separation of the key words may be beneficial for sound source segregation and thus for improved speech recognition.
Affiliation(s)
- Timothy J Davis
- Vanderbilt University, Department of Hearing and Speech Sciences, Nashville, TN, USA.
- D Wesley Grantham
- Vanderbilt University, Department of Hearing and Speech Sciences, Nashville, TN, USA
- René H Gifford
- Vanderbilt University, Department of Hearing and Speech Sciences, Nashville, TN, USA
5. Blind(fold)ed by science: a constant target-heading angle is used in visual and nonvisual pursuit. Psychon Bull Rev 2013; 20:923-34. PMID: 23440726. DOI: 10.3758/s13423-013-0412-5.
Abstract
Previous work investigating the strategies that observers use to intercept moving targets has shown that observers maintain a constant target-heading angle (CTHA) to achieve interception. Most of this work has concluded or indirectly assumed that vision is necessary to do this. We investigated whether blindfolded pursuers chasing a ball carrier holding a beeping football would utilize the same strategy that sighted observers use to chase a ball carrier. Results confirm that both blindfolded and sighted pursuers use a CTHA strategy in order to intercept targets, whether jogging or walking and irrespective of football experience and path and speed deviations of the ball carrier during the course of the pursuit. This work shows that the mechanisms involved in intercepting moving targets may be designed to use different sensory mechanisms in order to drive behavior that leads to the same end result. This has potential implications for the supramodal representation of motion perception in the human brain.
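The CTHA strategy can be stated geometrically: the pursuer steers so that the angle between its own heading and the bearing to the target stays constant. Below is a toy simulation of the simplest case, a constant angle of 0° (pure pursuit), under invented speeds and geometry; the `chase` function and all of its parameters are hypothetical illustrations, not taken from the paper.

```python
import math

def chase(target_heading_angle_deg: float, steps: int = 2000, dt: float = 0.01) -> float:
    """Toy pursuit: each step, the pursuer heads along the bearing to the
    target offset by a fixed target-heading angle (CTHA). Returns the final
    pursuer-target distance. Speeds and start positions are invented."""
    px, py = 0.0, 0.0    # pursuer start
    tx, ty = 10.0, 5.0   # target start
    v_p, v_t = 2.0, 1.0  # pursuer is faster than the target
    for _ in range(steps):
        tx += v_t * dt   # target runs along +x at constant speed
        bearing = math.atan2(ty - py, tx - px)
        heading = bearing - math.radians(target_heading_angle_deg)
        px += v_p * math.cos(heading) * dt
        py += v_p * math.sin(heading) * dt
        if math.hypot(tx - px, ty - py) < 0.05:
            break        # interception: pursuer has closed on the target
    return math.hypot(tx - px, ty - py)

print(chase(0.0))  # CTHA of 0 deg (pure pursuit): distance shrinks to near zero
```

Because the pursuer is faster and keeps the target-heading angle fixed, the distance to the target decreases monotonically until interception, regardless of the sensory channel supplying the bearing, which is the point the study makes about blindfolded pursuit.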
6. Lewald J, Getzmann S. Ventral and dorsal visual pathways support auditory motion processing in the blind: evidence from electrical neuroimaging. Eur J Neurosci 2013; 38:3201-9. DOI: 10.1111/ejn.12306.
Affiliation(s)
- Jörg Lewald
- Ruhr University Bochum, Faculty of Psychology, D-44780 Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann
- Ruhr University Bochum, Faculty of Psychology, D-44780 Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
7. Grzeschik R, Böckmann-Barthel M, Mühler R, Verhey JL, Hoffmann MB. Direction-specific adaptation of motion-onset auditory evoked potentials. Eur J Neurosci 2013; 38:2557-65. PMID: 23725339. DOI: 10.1111/ejn.12264.
Abstract
Auditory evoked potentials (AEPs) to motion onset in humans are dominated by a fronto-central complex comprising a change-specific negative deflection (cN1) and a change-specific positive deflection (cP2). Here, the contribution of veridical motion detectors to motion-onset AEPs was investigated, on the hypothesis that direction-specific adaptation effects would indicate the contribution of such detectors. AEPs were recorded from 33 electroencephalographic channels to the test stimulus, i.e. the onset of horizontal virtual auditory motion (60°/s) from straight ahead to the left. AEPs were compared in two experiments across three conditions that differed in their history prior to the motion-onset test stimulus: (i) no motion history (Baseline); (ii) motion history in the same direction as the test stimulus (Adaptation Same); and (iii) a reference condition with auditory history. In Experiment 1, condition (iii) comprised motion in the opposite direction (Adaptation Opposite); in Experiment 2, noise without coherent motion (Matched Noise) served as the reference. In Experiment 1, the amplitude difference cP2 - cN1 obtained for Adaptation Same was significantly smaller than for Baseline and Adaptation Opposite; in Experiment 2, it was significantly smaller than for Matched Noise. Adaptation effects were absent for cN1 and cP2 latencies. These findings demonstrate direction-specific adaptation of the motion-onset AEP, suggesting that veridical auditory motion detectors contribute to it.
Affiliation(s)
- Ramona Grzeschik
- Department of Ophthalmology, Visual Processing Laboratory, Otto von Guericke University Magdeburg, Leipziger Strasse 44, 39120, Magdeburg, Germany
8. Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. PMID: 23499852. DOI: 10.1016/j.neuropsychologia.2013.03.001.
Abstract
Differences in the cortical processing of moving and static sounds were studied by electroencephalography while subjects performed an active discrimination task. Sound bursts were presented in the acoustic free-field between 47° to the left and 47° to the right under three stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli randomly embedded within a stream of frequently occurring non-target events ('standards') and rare non-target stimuli ('deviants'). Stimuli were presented in blocks, with each stimulus type serving as target, non-target, or standard. The analysis focussed on the event-related potentials (ERPs) evoked by the different stimulus types under the standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and by both kinds of motion stimuli revealed differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a 300-400 ms time window (the 'd300-400 ms' component of the 'motion-minus-static' difference wave). For motion stimuli, N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli N1 amplitudes were of the same range over both hemispheres. In contrast to the N1 component, the ERP in the 'd300-400 ms' period showed stronger responses over the hemisphere contralateral to motion termination, with static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found relative to static stimuli.
The N1 components documented here agree with previous studies of auditory space processing and suggest a contralateral dominance in the cortical integration of spatial acoustic information. Additionally, cortical activity in the 'd300-400 ms' period indicates that, beyond the motion origin (reflected by the N1), the direction of motion (leftward/rightward), or rather motion termination, is also cortically encoded. These electrophysiological results accord with the 'snapshot' hypothesis, which assumes that auditory motion processing is based not on a genuine motion-sensitive system but on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specific properties of the P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion attributes such as direction and/or velocity, that are preponderant in the left hemisphere.
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.
9. Getzmann S, Lewald J. Cortical processing of change in sound location: Smooth motion versus discontinuous displacement. Brain Res 2012; 1466:119-27. DOI: 10.1016/j.brainres.2012.05.033.
10. Getzmann S. Auditory motion perception: onset position and motion direction are encoded in discrete processing stages. Eur J Neurosci 2011; 33:1339-50. DOI: 10.1111/j.1460-9568.2011.07617.x.
11. Getzmann S, Lewald J. The effect of spatial adaptation on auditory motion processing. Hear Res 2011; 272:21-9. DOI: 10.1016/j.heares.2010.11.005.
12. Grzeschik R, Böckmann-Barthel M, Mühler R, Hoffmann MB. Motion-onset auditory-evoked potentials critically depend on history. Exp Brain Res 2010; 203:159-68. DOI: 10.1007/s00221-010-2221-7.
13. Getzmann S. Effect of auditory motion velocity on reaction time and cortical processes. Neuropsychologia 2009; 47:2625-33. PMID: 19467249. DOI: 10.1016/j.neuropsychologia.2009.05.012.
Abstract
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere contralateral to the direction of motion. The contralaterality of the motion onset response decreased when the velocity was reduced. Higher motion velocity was associated with larger and earlier cortical responses and with shorter reaction times to motion onset. The results indicate a close correspondence of brain activity and behavioral performance in auditory motion detection.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany.