1. Rosso M, Gener CN, Moens B, Maes PJ, Leman M. Perceptual coupling in human dyads: Kinematics does not affect interpersonal synchronization. Heliyon 2024; 10:e33831. [PMID: 39027589] [PMCID: PMC11255578] [DOI: 10.1016/j.heliyon.2024.e33831]
Abstract
The minimal, essential condition for individuals to interact is that they exchange information via at least one sensory channel. Once informational coupling is established, it enables basic forms of coordinated behavior to spontaneously emerge from the interaction. Our previous study revealed different coordination dynamics in dyads engaged in a joint finger-tapping task based on visual versus auditory coupling. This observation led us to propose the 'modality-dependent hypothesis', which posits that coordination dynamics are influenced by the sensory modality mediating informational coupling. However, recognizing that different modalities have inherent differences in accessing spatiotemporal features of perceived movement, we formulated the alternative 'kinematic hypothesis'. This hypothesis posits that differences in dynamics would vanish given equivalent kinematic information across modalities. The study involved forty (N = 40) participants, grouped into twenty (N = 20) dyads, who engaged in a joint finger-tapping task. This task was conducted under varying conditions of visual and auditory coupling, with manipulations in the access to kinematic information, categorized as discrete and continuous. Contrary to our initial predictions, the results strongly supported the 'modality-dependent hypothesis'. We observed that visual and auditory coupling consistently yielded distinct attractor dynamics, regardless of the access to kinematic information. Furthermore, all conditions of auditory coupling resulted in higher levels of synchronization than their visual counterparts. These findings suggest that the differences in interpersonal synchronization are predominantly influenced by the sensory modality, rather than the continuity of kinematic information. Our study highlights the significance of sensorimotor interactions in interpersonal synchronization and addresses the potential of sonification strategies in supporting motor training and rehabilitation.
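The levels of synchronization compared above are typically quantified with a phase-based index over the two tappers' onset times. The sketch below is a minimal, hedged illustration of one common choice (resultant vector length of the relative phase); the function names, jitter parameters, and toy data are invented for the example and are not the study's actual analysis pipeline.

```python
import numpy as np

def relative_phase(taps_a, taps_b):
    """Relative phase of each tap in `taps_b` within the concurrent
    inter-tap interval of `taps_a`, mapped to [0, 2*pi)."""
    phases = []
    for t in taps_b:
        # index of the last tap in A at or before time t
        i = np.searchsorted(taps_a, t, side="right") - 1
        if 0 <= i < len(taps_a) - 1:
            cycle = taps_a[i + 1] - taps_a[i]
            phases.append(2 * np.pi * (t - taps_a[i]) / cycle)
    return np.asarray(phases)

def sync_index(taps_a, taps_b):
    """Resultant vector length: 0 = no synchronization, 1 = perfect."""
    phi = relative_phase(taps_a, taps_b)
    return np.abs(np.mean(np.exp(1j * phi)))

# Toy dyad: tapper A at a steady 2 Hz, tapper B jittered around A
rng = np.random.default_rng(0)
a = np.arange(0, 30, 0.5)
b = a + rng.normal(0.0, 0.02, size=a.size)
print(sync_index(a, b))  # close to 1 for tight coupling
```

An index near 1 indicates phase-locked tapping; values near 0 indicate independent tapping, which is how attractor strength under different coupling conditions can be compared.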
Affiliation(s)
- Mattia Rosso
- IPEM - Institute for Systematic Musicology, Ghent University, Ghent, Flanders, 9000, Belgium
- Canan Nuran Gener
- IPEM - Institute for Systematic Musicology, Ghent University, Ghent, Flanders, 9000, Belgium
- Bart Moens
- IPEM - Institute for Systematic Musicology, Ghent University, Ghent, Flanders, 9000, Belgium
- Pieter-Jan Maes
- IPEM - Institute for Systematic Musicology, Ghent University, Ghent, Flanders, 9000, Belgium
- Marc Leman
- IPEM - Institute for Systematic Musicology, Ghent University, Ghent, Flanders, 9000, Belgium
2. Gabdreshov G, Magzymov D, Yensebayev N. Preliminary investigation of SEZUAL device for basic material identification and simple spatial navigation for blind and visually impaired people. Disabil Rehabil Assist Technol 2024; 19:1343-1350. [PMID: 36756982] [DOI: 10.1080/17483107.2023.2176555]
Abstract
PURPOSE: We present a preliminary set of experimental studies demonstrating device-aided echolocation in blind and visually impaired individuals. The proposed device emits a click-like sound into the surrounding space, and the returning sound is perceived by participants to infer the surrounding environment. MATERIALS AND METHODS: Two experiments were set up to evaluate the echolocation abilities of nine blind participants. The first was designed to identify four material types (glass, metal, wood, and ceramics) based on their sound reflection properties. The second was navigation through a basic maze with the device. RESULTS: Experimental data demonstrate that the proposed device enables active echolocation in blind participants, particularly for material identification and spatial mobility. CONCLUSION: The proposed device can potentially be used in the rehabilitation of blind and visually impaired individuals in terms of spatial mobility and orientation.
3. Zhang H, Xie J, Tao Q, Xiao Y, Cui G, Fang W, Zhu X, Xu G, Li M, Han C. The effect of motion frequency and sound source frequency on steady-state auditory motion evoked potential. Hear Res 2023; 439:108897. [PMID: 37871451] [DOI: 10.1016/j.heares.2023.108897]
Abstract
The ability of humans to perceive moving sound sources is important for responding accurately to the environment. Periodically moving sound sources can elicit the steady-state auditory motion evoked potential (SSMAEP). The purpose of this study was to investigate the effects of different motion frequencies and different sound source frequencies on SSMAEP. Stimulation paradigms simulating periodic motion of sound sources were designed using head-related transfer function (HRTF) techniques. The motion frequencies were set to 1-10 Hz, 15 Hz, 20 Hz, 30 Hz, 40 Hz, 60 Hz, and 80 Hz. In addition, the sound source frequencies were set to 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, and 4000 Hz at motion frequencies of 6 Hz and 40 Hz. Fourteen subjects with normal hearing were recruited for the study. SSMAEP was elicited by a 500 Hz pure tone at all tested motion frequencies and was strongest at a motion frequency of 6 Hz. Moreover, at the 6 Hz motion frequency, the SSMAEP amplitude was largest at a tone frequency of 500 Hz and smallest at 4000 Hz, whereas at a motion frequency of 40 Hz the SSMAEP elicited by the 4000 Hz pure tone was the strongest. Thus, SSMAEP can be elicited by periodically moving sound sources at motion frequencies up to 80 Hz, with strong responses also at low motion frequencies. Low-frequency pure tones enhance SSMAEP at low motion frequencies, whilst high-frequency pure tones enhance SSMAEP at high motion frequencies. The study provides new insight into the brain's perception of rhythmic auditory motion.
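For readers who want to prototype a comparable stimulus: the study rendered moving sources with HRTFs, which is not reproduced here. The sketch below substitutes simple equal-power amplitude panning as a stand-in for spatial rendering, with the carrier and motion frequencies as illustrative parameters only.

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def moving_tone(carrier_hz=500.0, motion_hz=6.0, dur_s=2.0, fs=FS):
    """Stereo tone whose apparent azimuth oscillates at `motion_hz`,
    using sine-cosine (equal-power) panning as a simplified stand-in
    for HRTF-based rendering."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * carrier_hz * t)
    # pan position sweeps left-right once per motion cycle: 0 (left) .. 1 (right)
    pan = 0.5 * (1 + np.sin(2 * np.pi * motion_hz * t))
    left = np.cos(0.5 * np.pi * pan) * tone   # equal-power panning law
    right = np.sin(0.5 * np.pi * pan) * tone
    return np.stack([left, right], axis=1)

sig = moving_tone()
print(sig.shape)  # (samples, 2)
```

Equal-power panning keeps the summed channel energy constant per sample, so only the apparent position, not the loudness, varies at the motion frequency.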
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Wenhu Fang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Xinyu Zhu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Chengcheng Han
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
4. Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022; 176:108391. [DOI: 10.1016/j.neuropsychologia.2022.108391]
5. Chen BW, Yang SH, Kuo CH, Chen JW, Lo YC, Kuo YT, Lin YC, Chang HC, Lin SH, Yu X, Qu B, Ro SCV, Lai HY, Chen YY. Neuro-Inspired Reinforcement Learning To Improve Trajectory Prediction In Reward-Guided Behavior. Int J Neural Syst 2022; 32:2250038. [DOI: 10.1142/s0129065722500381]
6. Wei Z, Fan Z, Qi Z, Tong Y, Guo Q, Chen L. Reorganization of auditory-visual network interactions in long-term unilateral postlingual hearing loss. J Clin Neurosci 2021; 87:97-102. [PMID: 33863544] [DOI: 10.1016/j.jocn.2021.02.017]
Abstract
Long-term unilateral hearing loss can reorganize the functional network association between the bilateral auditory cortices, while alterations of other functional networks remain to be explored. We investigated the pattern of reorganization of functional network associations between the auditory and visual cortex caused by long-term postlingual unilateral hearing impairment (UHI) and its relationship with clinical characteristics. To this end, 48 patients with hearing loss caused by unilateral acoustic tumors and 52 matched healthy controls were enrolled, and high-resolution structural MRI and resting-state functional MRI data were collected to depict the brain network. Degree centrality (DC) was employed to evaluate the functional association of the auditory-visual network interaction. Group comparisons were performed to investigate the network reorganization, and correlations with clinical data were calculated. Compared with the healthy control group, patients with UHI showed significantly increased DC between the auditory network (superior temporal gyrus and the medial geniculate body) and the visual network. This difference was positively correlated with the extent of hearing impairment, and the correlation was stronger with the superior temporal gyrus ipsilateral to the acoustic neuroma. These results suggest that long-term unilateral hearing impairment may enhance visual-auditory network interactions, that the degree of reorganization is positively correlated with the pure tone average (PTA), and that the effect is more pronounced for the ipsilateral superior temporal gyrus, providing clinical evidence for cross-modal plasticity in UHI and its lateralization.
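Degree centrality as used above counts, for each brain region, how many other regions it is functionally connected to. A toy numpy sketch of a binary (correlation-thresholded) variant on synthetic ROI time series follows; the threshold, data, and function name are illustrative and do not reproduce the study's rs-fMRI pipeline.

```python
import numpy as np

def degree_centrality(ts, threshold=0.25):
    """Binary degree centrality per ROI: for each region, count the
    other regions whose time series correlate above `threshold`.
    `ts` is an (n_rois, n_timepoints) array."""
    r = np.corrcoef(ts)          # ROI-by-ROI correlation matrix
    np.fill_diagonal(r, 0.0)     # ignore self-correlation
    return (r > threshold).sum(axis=1)

# Toy data: 5 ROIs, 200 time points; ROIs 0-2 share a common signal
rng = np.random.default_rng(1)
common = rng.normal(size=200)
ts = rng.normal(size=(5, 200))
ts[:3] += 2.0 * common
dc = degree_centrality(ts)
print(dc)  # ROIs 0-2 should show higher centrality than ROIs 3-4
```

In the study's group comparison, an increase in such a centrality measure between auditory and visual regions is what indexes enhanced cross-network interaction.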
Affiliation(s)
- Zixuan Wei
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China
- Zhen Fan
- Neurosurgical Institute of Fudan University, China
- Zengxin Qi
- Shanghai Clinical Medical Center of Neurosurgery, China
- Yusheng Tong
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
- Qinglong Guo
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
- Liang Chen
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
7. Henschke JU, Price AT, Pakan JMP. Enhanced modulation of cell-type specific neuronal responses in mouse dorsal auditory field during locomotion. Cell Calcium 2021; 96:102390. [PMID: 33744780] [DOI: 10.1016/j.ceca.2021.102390]
Abstract
As we move through the environment we experience constantly changing sensory input that must be merged with our ongoing motor behaviors - creating dynamic interactions between our sensory and motor systems. Active behaviors such as locomotion generally increase the sensory-evoked neuronal activity in visual and somatosensory cortices, but evidence suggests that locomotion largely suppresses neuronal responses in the auditory cortex. However, whether this effect is ubiquitous across different anatomical regions of the auditory cortex is largely unknown. In mice, auditory association fields such as the dorsal auditory cortex (AuD) have been shown to have different physiological response properties, protein expression patterns, and cortical as well as subcortical connections, in comparison to primary auditory regions (A1) - suggesting there may be important functional differences. Here we examined locomotion-related modulation of neuronal activity in cortical layers 2/3 of AuD and A1 using two-photon Ca2+ imaging in head-fixed behaving mice that were able to freely run on a spherical treadmill. We determined the proportion of neurons in these two auditory regions that show enhanced and suppressed sensory-evoked responses during locomotion and quantified the depth of modulation. We found that A1 shows more suppression and AuD more enhanced responses during locomotion periods. We further revealed differences in the circuitry between these auditory regions and motor cortex, and found that AuD is more highly connected to motor cortical regions. Finally, we compared the cell-type specific locomotion-evoked modulation of responses in AuD and found that, while subpopulations of PV-expressing interneurons showed heterogeneous responses, the population in general was largely suppressed during locomotion, while excitatory population responses were generally enhanced in AuD.
Therefore, neurons in primary and dorsal auditory fields have distinct response properties, with dorsal regions exhibiting enhanced activity in response to movement. This functional distinction may be important for auditory processing during navigation and acoustically guided behavior.
Affiliation(s)
- Julia U Henschke
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany
- Alan T Price
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany; Cognitive Neurophysiology group, Leibniz Institute for Neurobiology (LIN), 39118, Magdeburg, Germany
- Janelle M P Pakan
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
8. Schäfer E, Vedoveli AE, Righetti G, Gamerdinger P, Knipper M, Tropitzsch A, Karnath HO, Braun C, Li Hegner Y. Activities of the Right Temporo-Parieto-Occipital Junction Reflect Spatial Hearing Ability in Cochlear Implant Users. Front Neurosci 2021; 15:613101. [PMID: 33776632] [PMCID: PMC7994335] [DOI: 10.3389/fnins.2021.613101]
Abstract
Spatial hearing is critical not only for orienting ourselves in space, but also for following a conversation with multiple speakers in a complex sound environment. The hearing of people who have suffered severe sensorineural hearing loss can be restored by cochlear implants (CIs), albeit with large outcome variability, and the causes of this variability remain incompletely understood. Despite the CI-based restoration of the peripheral auditory input, central auditory processing might still not function fully. Here we developed a multi-modal repetition suppression (MMRS) paradigm capable of capturing stimulus property-specific processing, in order to identify the neural correlates of spatial hearing and potential central neural indexes useful for the rehabilitation of sound localization in CI users. To this end, 17 normal-hearing and 13 CI participants underwent the MMRS task while their brain activity was recorded with 256-channel electroencephalography (EEG). The participants were required to discriminate the location of probe sounds presented from a horizontal array of loudspeakers. The EEG MMRS response following the probe sound was elicited in various brain regions and at different stages of processing. Interestingly, the more similar the differential MMRS response in the right temporo-parieto-occipital (TPO) junction of a CI user was to that of the normal-hearing group, the better was that user's spatial hearing performance. Based on this finding, we suggest that the differential MMRS response at the right TPO junction could serve as a central neural index for intact or impaired sound localization abilities.
Affiliation(s)
- Marlies Knipper
- Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Anke Tropitzsch
- Comprehensive Cochlear Implant Center, ENT Clinic Tübingen, Tübingen University Hospital, Tübingen, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Christoph Braun
- MEG Center, University of Tübingen, Tübingen, Germany; CIMeC, Center for Mind/Brain Research, University of Trento, Rovereto, Italy; DiPsCo, Department of Psychology and Cognitive Science, Rovereto, Italy
- Yiwen Li Hegner
- MEG Center, University of Tübingen, Tübingen, Germany; Center of Neurology, Department of Neurology and Epileptology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
9. Smieja DA, Dunkley BT, Papsin BC, Easwar V, Yamazaki H, Deighton M, Gordon KA. Interhemispheric auditory connectivity requires normal access to sound in both ears during development. Neuroimage 2020; 208:116455. [DOI: 10.1016/j.neuroimage.2019.116455]
10. Todd J, Frost J, Fitzgerald K, Winkler I. Setting precedent: Initial feature variability affects the subsequent precision of regularly varying sound contexts. Psychophysiology 2020; 57:e13528. [DOI: 10.1111/psyp.13528]
Affiliation(s)
- Juanita Todd
- School of Psychology, University of Newcastle, Callaghan, NSW, Australia
- Jade Frost
- School of Psychology, University of Newcastle, Callaghan, NSW, Australia
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
11. The Cross-Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research since 2000. Neural Plast 2019; 2019:9603469. [PMID: 31885540] [PMCID: PMC6914961] [DOI: 10.1155/2019/9603469]
Abstract
One of the most significant effects of neural plasticity manifests in the case of sensory deprivation when cortical areas that were originally specialized for the functions of the deprived sense take over the processing of another modality. Vision and audition represent two important senses needed to navigate through space and time. Therefore, the current systematic review discusses the cross-modal behavioral and neural consequences of deafness and blindness by focusing on spatial and temporal processing abilities, respectively. In addition, movement processing is evaluated as compiling both spatial and temporal information. We examine whether the sense that is not primarily affected changes in its own properties or in the properties of the deprived modality (i.e., temporal processing as the main specialization of audition and spatial processing as the main specialization of vision). References to the metamodal organization, supramodal functioning, and the revised neural recycling theory are made to address global brain organization and plasticity principles. Generally, according to the reviewed studies, behavioral performance is enhanced in those aspects for which both the deprived and the overtaking senses provide adequate processing resources. Furthermore, the behavioral enhancements observed in the overtaking sense (i.e., vision in the case of deafness and audition in the case of blindness) are clearly limited by the processing resources of the overtaking modality. Thus, the brain regions that were previously recruited during the behavioral performance of the deprived sense now support a similar behavioral performance for the overtaking sense. This finding suggests a more input-unspecific and processing principle-based organization of the brain. Finally, we highlight the importance of controlling for and stating factors that might impact neural plasticity and the need for further research into visual temporal processing in deaf subjects.
12. Hanenberg C, Getzmann S, Lewald J. Transcranial direct current stimulation of posterior temporal cortex modulates electrophysiological correlates of auditory selective spatial attention in posterior parietal cortex. Neuropsychologia 2019; 131:160-170. [PMID: 31145907] [DOI: 10.1016/j.neuropsychologia.2019.05.023]
Abstract
Speech perception in "cocktail-party" situations, in which a sound source of interest has to be extracted out of multiple irrelevant sounds, poses a remarkable challenge to the human auditory system. Studies on structural and electrophysiological correlates of auditory selective spatial attention revealed critical roles of the posterior temporal cortex and the N2 event-related potential (ERP) component in the underlying processes. Here, we explored effects of transcranial direct current stimulation (tDCS) to posterior temporal cortex on neurophysiological correlates of auditory selective spatial attention, with a specific focus on the N2. In a single-blind, sham-controlled crossover design with baseline and follow-up measurements, monopolar anodal and cathodal tDCS was applied for 16 min to the right posterior superior temporal cortex. Two age groups of human subjects, a younger (n = 20; age 18-30 yrs) and an older group (n = 19; age 66-77 yrs), completed an auditory free-field multiple-speakers localization task while ERPs were recorded. The ERP data showed an offline effect of anodal, but not cathodal, tDCS immediately after DC offset for targets contralateral, but not ipsilateral, to the hemisphere of tDCS, without differences between groups. This effect mainly consisted in a substantial increase of the N2 amplitude by 0.9 μV (SE 0.4 μV; d = 0.40) compared with sham tDCS. At the same point in time, cortical source localization revealed a reduction of activity in ipsilateral (right) posterior parietal cortex. Also, localization error was improved after anodal, but not cathodal, tDCS. Given that both the N2 and the posterior parietal cortex are involved in processes of auditory selective spatial attention, these results suggest that anodal tDCS specifically enhanced inhibitory attentional brain processes underlying the focusing onto a target sound source, possibly by improved suppression of irrelevant distracters.
Affiliation(s)
- Christina Hanenberg
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, D-44139, Dortmund, Germany
- Jörg Lewald
- Ruhr University Bochum, Faculty of Psychology, D-44780, Bochum, Germany
13.
Abstract
The purpose of this paper is to spark discussion on the recent trend of designing vineyard and surround-type concert halls. We understand that these halls can be architecturally unique and that many conductors like them; however, as outlined in this paper, they do not always serve the music best acoustically. The motivation for visual proximity is easily understandable, but it should not overrule the acoustical conditions. We hope that this paper helps designers of new concert venues, and we hope to see more research and discussion on the acoustical qualities of these modern concert halls.
14. Bihemispheric anodal transcranial direct-current stimulation over temporal cortex enhances auditory selective spatial attention. Exp Brain Res 2019; 237:1539-1549. [PMID: 30927041] [DOI: 10.1007/s00221-019-05525-y]
Abstract
The capacity to selectively focus on a particular speaker of interest in a complex acoustic environment with multiple persons speaking simultaneously-a so-called "cocktail-party" situation-is of decisive importance for human verbal communication. Here, the efficacy of single-dose transcranial direct-current stimulation (tDCS) in improving this ability was tested in young healthy adults (n = 24), using a spatial task that required the localization of a target word in a simulated "cocktail-party" situation. In a sham-controlled crossover design, offline bihemispheric double-monopolar anodal tDCS was applied for 30 min at 1 mA over auditory regions of temporal lobe, and the participant's performance was assessed prior to tDCS, immediately after tDCS, and 1 h after tDCS. A significant increase in the amount of correct localizations by on average 3.7 percentage points (d = 1.04) was found after active, relative to sham, tDCS, with only insignificant reduction of the effect within 1 h after tDCS offset. Thus, the method of bihemispheric tDCS could be a promising tool for enhancement of human auditory attentional functions that are relevant for spatial orientation and communication in everyday life.
15. Peter V, Fratturo L, Sharma M. Electrophysiological and behavioural study of localisation in presence of noise. Int J Audiol 2019; 58:345-354. [PMID: 30890004] [DOI: 10.1080/14992027.2019.1575989]
Abstract
OBJECTIVE: The ability to determine the location of a sound source is often important for effective communication. However, it is not clear how localisation is affected by background noise. In the current study, localisation in quiet versus noise was evaluated in adults both behaviourally and using MMN and P3b. DESIGN: The speech token /da/ was presented in a multi-deviant oddball paradigm in quiet and in the presence of speech babble at +5 dB SNR. The deviants were presented at locations that differed from the standard by 30°, 60° and 90°. STUDY SAMPLE: Sixteen normal-hearing adults between 18 and 35 years of age participated in the study. RESULTS: Participants were significantly faster and more accurate at identifying deviants presented at 60° and 90° than at 30°. Neither reaction times nor electrophysiological measures (MMN/P3b) were affected by the background noise. The deviance magnitude (30°, 60° and 90°) did not affect the MMN amplitude, but the smallest deviant (30°) generated a P3b with smaller amplitude. CONCLUSIONS: Under the stimulus paradigm and measures employed in this study, localisation ability appeared resistant to speech-babble interference.
Affiliation(s)
- Varghese Peter
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia; Department of Linguistics, Macquarie University, North Ryde, Australia; The HEARing Cooperative Research Centre, Melbourne, Victoria, Australia
- Luke Fratturo
- The Balance Clinic and Laboratory, Royal Prince Alfred Hospital, Camperdown, Australia
- Mridula Sharma
- Department of Linguistics, Macquarie University, North Ryde, Australia; The HEARing Cooperative Research Centre, Melbourne, Victoria, Australia
16. Eddins AC, Ozmeral EJ, Eddins DA. How aging impacts the encoding of binaural cues and the perception of auditory space. Hear Res 2018; 369:79-89. [PMID: 29759684] [PMCID: PMC6196106] [DOI: 10.1016/j.heares.2018.05.001]
Abstract
Over the years, the effect of aging on auditory function has been investigated in animal models and humans in an effort to characterize age-related changes in both perception and physiology. Here, we review how aging may impact neural encoding and processing of binaural and spatial cues in human listeners with a focus on recent work by the authors as well as others. Age-related declines in monaural temporal processing, as estimated from measures of gap detection and temporal fine structure discrimination, have been associated with poorer performance on binaural tasks that require precise temporal processing. In lateralization and localization tasks, as well as in the detection of signals in noise, marked age-related changes have been demonstrated in both behavioral and electrophysiological measures and have been attributed to declines in neural synchrony and reduced central inhibition with advancing age. Evidence for such mechanisms, however, are influenced by the task (passive vs. attending) and the stimulus paradigm (e.g., static vs. continuous with dynamic change). That is, cortical auditory evoked potentials (CAEP) measured in response to static interaural time differences (ITDs) are larger in older versus younger listeners, consistent with reduced inhibition, while continuous stimuli with dynamic ITD changes lead to smaller responses in older compared to younger adults, suggestive of poorer neural synchrony. Additionally, the distribution of cortical activity is broader and less asymmetric in older than younger adults, consistent with the hemispheric asymmetry reduction in older adults model of cognitive aging. When older listeners attend to selected target locations in the free field, their CAEP components (N1, P2, P3) are again consistently smaller relative to younger listeners, and the reduced asymmetry in the distribution of cortical activity is maintained. 
As this research matures, proper neural biomarkers for changes in spatial hearing can provide objective evidence of impairment and targets for remediation. Future research should focus on the development and evaluation of effective approaches for remediating these spatial processing deficits associated with aging and hearing loss.
Affiliation(s)
- Ann Clock Eddins: Department of Communication Sciences and Disorders, University of South Florida, USA
- Erol J Ozmeral: Department of Communication Sciences and Disorders, University of South Florida, USA
- David A Eddins: Department of Communication Sciences and Disorders, University of South Florida, USA; Department of Chemical and Biomedical Engineering, University of South Florida, USA
17
Herweg NA, Kahana MJ. Spatial Representations in the Human Brain. Front Hum Neurosci 2018; 12:297. [PMID: 30104966; PMCID: PMC6078001; DOI: 10.3389/fnhum.2018.00297]
Abstract
While extensive research on the neurophysiology of spatial memory has been carried out in rodents, memory research in humans had traditionally focused on more abstract, language-based tasks. Recent studies have begun to address this gap using virtual navigation tasks in combination with electrophysiological recordings in humans. These studies suggest that the human medial temporal lobe (MTL) is equipped with a population of place and grid cells similar to that previously observed in the rodent brain. Furthermore, theta oscillations have been linked to spatial navigation and, more specifically, to the encoding and retrieval of spatial information. While some studies suggest a single navigational theta rhythm which is of lower frequency in humans than rodents, other studies advocate for the existence of two functionally distinct delta-theta frequency bands involved in both spatial and episodic memory. Despite the general consensus between rodent and human electrophysiology, behavioral work in humans does not unequivocally support the use of a metric Euclidean map for navigation. Formal models of navigational behavior, which specifically consider the spatial scale of the environment and complementary learning mechanisms, may help to better understand different navigational strategies and their neurophysiological mechanisms. Finally, the functional overlap of spatial and declarative memory in the MTL calls for a unified theory of MTL function. Such a theory will critically rely upon linking task-related phenomena at multiple temporal and spatial scales. Understanding how single cell responses relate to ongoing theta oscillations during both the encoding and retrieval of spatial and non-spatial associations appears to be key toward developing a more mechanistic understanding of memory processes in the MTL.
Affiliation(s)
- Nora A. Herweg: Computational Memory Lab, Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States
- Michael J. Kahana: Computational Memory Lab, Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States
18
Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms. Atten Percept Psychophys 2018; 80:1520-1538. [PMID: 29696570; DOI: 10.3758/s13414-018-1531-x]
Abstract
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
19
Salminen NH, Jones SJ, Christianson GB, Marquardt T, McAlpine D. A common periodic representation of interaural time differences in mammalian cortex. Neuroimage 2018; 167:95-103. [PMID: 29122721; PMCID: PMC5854251; DOI: 10.1016/j.neuroimage.2017.11.012]
Abstract
Binaural hearing, the ability to detect small differences in the timing and level of sounds at the two ears, underpins the ability to localize sound sources along the horizontal plane, and is important for decoding complex spatial listening environments into separate objects – a critical factor in ‘cocktail-party listening’. For human listeners, the most important spatial cue is the interaural time difference (ITD). Despite many decades of neurophysiological investigations of ITD sensitivity in small mammals, and computational models aimed at accounting for human perception, a lack of concordance between these studies has hampered our understanding of how the human brain represents and processes ITDs. Further, neural coding of spatial cues might depend on factors such as head-size or hearing range, which differ considerably between humans and commonly used experimental animals. Here, using magnetoencephalography (MEG) in human listeners, and electro-corticography (ECoG) recordings in guinea pig—a small mammal representative of a range of animals in which ITD coding has been assessed at the level of single-neuron recordings—we tested whether processing of ITDs in human auditory cortex accords with a frequency-dependent periodic code of ITD reported in small mammals, or whether alternative or additional processing stages implemented in psychoacoustic models of human binaural hearing must be assumed. Our data were well accounted for by a model consisting of periodically tuned ITD-detectors, and were highly consistent across the two species. The results suggest that the representation of ITD in human auditory cortex is similar to that found in other mammalian species, a representation in which neural responses to ITD are determined by phase differences relative to sound frequency rather than, for instance, the range of ITDs permitted by head size or the absolute magnitude or direction of ITD. 
Highlights: ITD tuning is studied in human MEG and guinea pig ECoG with identical stimuli. Auditory cortical tuning to ITD is highly consistent across species. Results are consistent with a periodic, frequency-dependent code.
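The periodic, frequency-dependent ITD code summarized in this abstract can be illustrated with a minimal sketch (a toy detector, not the model actually fitted in the paper; the preferred phase of π/4 and the example frequencies are illustrative assumptions): a detector with a fixed best interaural phase difference has a best ITD that shrinks with frequency, and its tuning curve repeats every 1/f seconds of ITD.

```python
import numpy as np

def periodic_itd_response(itd_s, freq_hz, best_ipd_rad=np.pi / 4):
    """Toy periodically tuned ITD detector: the response depends on the
    interaural phase difference (IPD) relative to a fixed preferred phase,
    so the tuning curve repeats with an ITD period of 1/freq_hz seconds."""
    ipd = 2 * np.pi * freq_hz * itd_s                # ITD expressed as phase at this frequency
    return 0.5 * (1.0 + np.cos(ipd - best_ipd_rad))  # normalized rate in [0, 1]

# A fixed best IPD implies a frequency-dependent best ITD: the same detector
# prefers smaller ITDs at higher frequencies.
for f in (250.0, 500.0, 1000.0):
    best_itd = (np.pi / 4) / (2 * np.pi * f)         # ITD at which the response peaks
    print(f"{f:6.0f} Hz -> best ITD {best_itd * 1e6:6.1f} us")
```

Phase-based tuning of this kind contrasts with a code pinned to the range of ITDs permitted by head size, which is the distinction the study tests.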
Affiliation(s)
- Nelli H Salminen: Brain and Mind Laboratory, Dept. of Neuroscience and Biomedical Engineering, MEG Core, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- Simon J Jones: UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK
- David McAlpine: UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK; Dept of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia
20
Golob EJ, Lewald J, Getzmann S, Mock JR. Numerical value biases sound localization. Sci Rep 2017; 7:17252. [PMID: 29222526; PMCID: PMC5722947; DOI: 10.1038/s41598-017-17429-4]
Abstract
Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1–9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.
Affiliation(s)
- Edward J Golob: Department of Psychology, Tulane University, New Orleans, LA, USA; Program in Neuroscience, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
- Jörg Lewald: Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139 Dortmund, Germany
- Stephan Getzmann: Faculty of Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139 Dortmund, Germany
- Jeffrey R Mock: Department of Psychology, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
21
Shrem T, Murray MM, Deouell LY. Auditory-visual integration modulates location-specific repetition suppression of auditory responses. Psychophysiology 2017; 54:1663-1675. [PMID: 28752567; DOI: 10.1111/psyp.12955]
Abstract
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information.
Affiliation(s)
- Talia Shrem: Human Cognitive Neuroscience Lab, Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Micah M Murray: Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, and Neuropsychology and Neurorehabilitation Service, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Leon Y Deouell: Human Cognitive Neuroscience Lab, Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
22
Ozmeral EJ, Eddins DA, Eddins AC. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference. J Neurophysiol 2016; 116:2720-2729. [PMID: 27683889; PMCID: PMC5133308; DOI: 10.1152/jn.00560.2016]
Abstract
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts to rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both from measured latencies and amplitudes of the global field potentials and source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to have a reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging.
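The hemifield population rate code and the two aging predictions described in this abstract can be sketched as follows (a toy opponent-channel model with made-up slope values, not the study's analysis): two broadly tuned channels mirrored about the midline, with laterality read out from their difference; a loss of neural synchrony is mimicked here simply by flattening the channel slope.

```python
import math

def sigmoid(x):
    """Logistic function used as a broad hemifield tuning curve."""
    return 1.0 / (1.0 + math.exp(-x))

def hemifield_readout(itd_us, slope=0.01):
    """Opponent two-channel (hemifield) code: each channel responds broadly
    to ITDs in one hemifield, and perceived laterality is the difference in
    channel activity rather than the output of a labelled line."""
    right = sigmoid(slope * itd_us)    # channel driven by right-leading ITDs
    left = sigmoid(-slope * itd_us)    # mirror-image channel
    return right - left                # signed laterality in (-1, 1)

# A shallower slope flattens the rate-azimuth readout, the pattern the study
# associates with reduced neural synchrony in older listeners.
print(hemifield_readout(500.0), hemifield_readout(500.0, slope=0.002))
```

The alternative prediction (reduced inhibition) would instead correspond to shifting the channels' midpoints medially rather than flattening their slopes.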
Affiliation(s)
- Erol J Ozmeral: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Ann C Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
23
Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011]
24
Dykstra AR, Burchard D, Starzynski C, Riedel H, Rupp A, Gutschalk A. Lateralization and Binaural Interaction of Middle-Latency and Late-Brainstem Components of the Auditory Evoked Response. J Assoc Res Otolaryngol 2016; 17:357-70. [PMID: 27197812; DOI: 10.1007/s10162-016-0572-x]
Abstract
We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally, or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30% and 36% in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli, and not ITDs, this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post stimulus and is never as large as that seen with fMRI.
Affiliation(s)
- Andrew R Dykstra: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Daniel Burchard: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany; Department of Human Neurobiology, Center for Cognitive Science, Universität Bremen, Bremen, Germany
- Christian Starzynski: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Helmut Riedel: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Andre Rupp: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk: Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
25
Lewald J. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation. Neuropsychologia 2016; 84:282-93. [PMID: 26825012; DOI: 10.1016/j.neuropsychologia.2016.01.030]
Abstract
Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene.
Affiliation(s)
- Jörg Lewald: Auditory Cognitive Neuroscience Laboratory, Department of Cognitive Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, D-44139 Dortmund, Germany
26
Odegaard B, Wozny DR, Shams L. Biases in Visual, Auditory, and Audiovisual Perception of Space. PLoS Comput Biol 2015; 11:e1004649. [PMID: 26646312; PMCID: PMC4672909; DOI: 10.1371/journal.pcbi.1004649]
Abstract
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. 
Therefore, multisensory integration not only improves the precision of perceptual estimates, but also the accuracy.
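The Bayesian Causal Inference framework invoked in this abstract can be sketched in a few lines (a standard Körding-style formulation with illustrative variances, not the parameters fitted to the 384-observer dataset): the model infers whether the auditory and visual cues share a cause and mixes the fused and unisensory estimates by the posterior probability of a common cause.

```python
import math

def bci_auditory_estimate(x_v, x_a, var_v=4.0, var_a=36.0,
                          var_p=100.0, p_common=0.5):
    """Sketch of Bayesian Causal Inference for audiovisual localization.
    All variances and the common-cause prior are illustrative assumptions.
    Returns the model-averaged estimate of the auditory location."""
    # Likelihood of both measurements under one common cause
    # (location prior: zero-mean Gaussian with variance var_p).
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                               + x_v ** 2 * var_a
                               + x_a ** 2 * var_v) / denom) / (2 * math.pi * math.sqrt(denom))
    # Likelihood under independent causes: each cue marginalized separately.
    like_c2 = (math.exp(-0.5 * x_v ** 2 / (var_v + var_p))
               / math.sqrt(2 * math.pi * (var_v + var_p))
               * math.exp(-0.5 * x_a ** 2 / (var_a + var_p))
               / math.sqrt(2 * math.pi * (var_a + var_p)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Precision-weighted optimal estimates under each causal structure.
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_aud = (x_a / var_a) / (1 / var_a + 1 / var_p)
    # Model averaging: mix the estimates by the posterior of a common cause.
    return post_c1 * s_fused + (1 - post_c1) * s_aud

# Nearby cues are fused and the auditory estimate is captured by vision
# (ventriloquism); widely discrepant cues revert toward the unisensory estimate.
print(bci_auditory_estimate(10.0, 0.0), bci_auditory_estimate(10.0, 60.0))
```

Because var_v < var_a here, vision dominates the fused estimate, mirroring the reported dominance of the visual bias when integration occurs.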
Affiliation(s)
- Brian Odegaard: Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- David R. Wozny: Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- Ladan Shams: Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America; Department of BioEngineering, University of California, Los Angeles, Los Angeles, California, United States of America; Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, California, United States of America
27
Freigang C, Richter N, Rübsamen R, Ludwig AA. Age-related changes in sound localisation ability. Cell Tissue Res 2015; 361:371-86. [PMID: 26077928; DOI: 10.1007/s00441-015-2230-8]
Abstract
Auditory spatial processing is an important ability in everyday life and allows the processing of omnidirectional information. In this review, we report and compare data from psychoacoustic and electrophysiological experiments on sound localisation accuracy and auditory spatial discrimination in infants, children, and young and older adults. The ability to process auditory spatial information changes over the lifetime: the perception of the acoustic space develops from an initially imprecise representation in infants and young children to a concise representation of spatial positions in young adults, and the respective performance declines again in older adults. Localisation accuracy shows a strong deterioration in older adults, presumably due to declined processing of binaural temporal and monaural spectro-temporal cues. When compared to young adults, the thresholds for spatial discrimination were strongly elevated both in young children and older adults. Despite the consistency of the measured values, the underlying causes of the impaired performance might differ: (1) the effect is due to reduced cognitive processing ability and is thus task-related; (2) the effect is due to reduced information about the auditory space and caused by declined processing in auditory brain stem circuits; or (3) the auditory space processing regime in young children is still undergoing developmental changes and the interrelation with spatial visual processing is not yet established. In conclusion, we argue that for studying auditory space processing over the life course, it is beneficial to investigate spatial discrimination ability instead of localisation accuracy, because it more reliably indicates changes in processing ability.
Affiliation(s)
- Claudia Freigang: Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Talstrasse 33, 04103 Leipzig, Germany
28
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873; DOI: 10.1016/j.neuroimage.2015.06.006]
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal, which allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
Affiliation(s)
- Régis Trapeau: International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner: International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada
29
Mock JR, Seay MJ, Charney DR, Holmes JL, Golob EJ. Rapid cortical dynamics associated with auditory spatial attention gradients. Front Neurosci 2015; 9:179. [PMID: 26082679; PMCID: PMC4451343; DOI: 10.3389/fnins.2015.00179]
Abstract
Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
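The gradient analysis described in this abstract (regressing distractor-evoked ERP amplitudes against spatial distance from the attended location) can be mimicked with a minimal sketch; the distances and amplitudes below are hypothetical, not data from the study.

```python
import numpy as np

def gradient_slope(distance_deg, amplitude_uv):
    """Attentional-gradient index: slope of a first-order fit of ERP
    amplitude against a distractor's distance from the attended location
    (microvolts per degree; a negative slope means the response falls off
    with distance, i.e. a processing-benefit gradient)."""
    slope, _intercept = np.polyfit(distance_deg, amplitude_uv, 1)
    return slope

# Hypothetical distractor locations 45-180 degrees from the target, with
# amplitudes that decrease away from it (yielding a negative slope).
print(gradient_slope([45, 90, 135, 180], [6.0, 4.5, 3.2, 2.0]))
```

Tracking this slope over successive post-stimulus time windows is what yields the tri-phasic sequence of gradient-slope peaks the abstract reports.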
Affiliation(s)
- Jeffrey R Mock
- Department of Psychology, Tulane University, New Orleans, LA, USA
- Michael J Seay
- Department of Psychology, Tulane University, New Orleans, LA, USA
- John L Holmes
- Department of Psychology, Tulane University, New Orleans, LA, USA
- Edward J Golob
- Department of Psychology, Tulane University, New Orleans, LA, USA; Program in Neuroscience, Tulane University, New Orleans, LA, USA; Program in Aging, Tulane University, New Orleans, LA, USA
30
Salminen NH, Takanen M, Santala O, Alku P, Pulkki V. Neural realignment of spatially separated sound components. J Acoust Soc Am 2015; 137:3356-3365. [PMID: 26093425 DOI: 10.1121/1.4921605] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Natural auditory scenes often consist of several sound sources overlapping in time, but separated in space. Yet, location is not fully exploited in auditory grouping: spatially separated sounds can get perceptually fused into a single auditory object and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they were deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, Aalto, FI-00076, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Paavo Alku
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
31
Salminen NH, Altoè A, Takanen M, Santala O, Pulkki V. Human cortical sensitivity to interaural time difference in high-frequency sounds. Hear Res 2015; 323:99-106. [PMID: 25668126 DOI: 10.1016/j.heares.2015.01.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Revised: 01/22/2015] [Accepted: 01/27/2015] [Indexed: 11/18/2022]
Abstract
Human sound source localization relies on various acoustical cues, one of the most important being the interaural time difference (ITD). ITD is best detected in the fine structure of low-frequency sounds, but it may also contribute to spatial hearing at higher frequencies if extracted from the sound envelope. The human brain mechanisms related to this envelope ITD cue remain unexplored. Here, we tested the sensitivity of the human auditory cortex to envelope ITD in magnetoencephalography (MEG) recordings. We found two types of sensitivity to envelope ITD. First, the amplitude of the auditory cortical N1m response was smaller for zero envelope ITD than for long envelope ITDs corresponding to the sound being in opposite phase in the two ears. Second, the N1m response amplitude showed ITD-specific adaptation for both fine-structure and envelope ITD. The auditory cortical sensitivity was weaker for envelope ITD in high-frequency sounds than for fine-structure ITD in low-frequency sounds, but occurred within a range of ITDs that are encountered in natural conditions. Finally, the participants were briefly tested for their behavioral ability to detect envelope ITD. Interestingly, we found a correlation between the behavioral performance and the neural sensitivity to envelope ITD. In conclusion, our findings show that the human auditory cortex is sensitive to ITD in the envelope of high-frequency sounds, and this sensitivity may have behavioral relevance.
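The notion of an envelope ITD is easy to demonstrate numerically: delay only the modulation envelope of a high-frequency carrier and recover the delay by cross-correlating the two envelopes. This is a toy illustration with made-up parameters (4 kHz carrier, 40 Hz envelope, 500 µs delay), not the stimulus or analysis used in the study.

```python
import numpy as np

fs = 48_000                                    # sample rate (Hz)
itd = 500e-6                                   # imposed envelope ITD: 500 µs
t = np.arange(int(0.1 * fs)) / fs              # 100 ms of signal
env = 0.5 * (1 + np.sin(2 * np.pi * 40 * t))   # 40 Hz modulation envelope
carrier = np.sin(2 * np.pi * 4000 * t)         # 4 kHz carrier

shift = int(round(itd * fs))                   # delay in samples (24)
left = env * carrier                           # left ear: undelayed envelope
right = np.roll(env, shift) * carrier          # right ear: only the envelope delayed

# A real analysis would first extract the envelopes (e.g. Hilbert transform);
# here we cross-correlate the known (zero-mean) envelopes directly.
l_env = env - env.mean()
r_env = np.roll(env, shift) - env.mean()
xc = np.correlate(r_env, l_env, mode="full")
lag = int(xc.argmax()) - (len(l_env) - 1)
print(f"recovered envelope ITD: {lag / fs * 1e6:.0f} µs")
```

The carrier fine structure carries no usable ITD here; all of the delay information sits in the envelope, which is exactly the cue the study probes.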
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, FI-00076 Aalto, Finland; MEG Core, Aalto NeuroImaging, Aalto University School of Science, Finland.
- Alessandro Altoè
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 Aalto, Finland
32
A neurocomputational analysis of the sound-induced flash illusion. Neuroimage 2014; 92:248-66. [DOI: 10.1016/j.neuroimage.2014.02.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Revised: 01/14/2014] [Accepted: 02/01/2014] [Indexed: 11/18/2022] Open
33
Free-field study on auditory localization and discrimination performance in older adults. Exp Brain Res 2014; 232:1157-72. [PMID: 24449009 DOI: 10.1007/s00221-014-3825-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2013] [Accepted: 01/04/2014] [Indexed: 10/25/2022]
Abstract
Localization accuracy and acuity for low- (0.375-0.75 kHz; LN) and high-frequency (2.25-4.5 kHz; HN) noise bands were examined in young (20-29 years) and older adults (65-83 years) in the acoustic free-field. A pointing task was applied to quantify accuracy, while acuity was inferred from minimum audible angle (MAA) thresholds measured with an adaptive 3-alternative forced-choice procedure. Accuracy decreased with laterality and age. From young to older adults, accuracy declined by up to 23% for the low-frequency noise band across all lateralities. The mean age effect was even more pronounced on MAA thresholds; age was a strong predictor of MAA thresholds for both LN and HN bands. There was no significant correlation between hearing status and localization performance. These results suggest that central auditory processing of space declines with age and is mainly driven by age-related changes in the processing of binaural cues (interaural time difference and interaural intensity difference), rather than being directly induced by peripheral hearing loss. We conclude that the representation of sound source location becomes blurred with age as a consequence of declined temporal processing, an effect that becomes particularly evident for MAA thresholds, where two closely adjoining sound sources have to be separated. Localization accuracy and MAA were not correlated in older adults, and only weakly correlated in young adults. These results point to the employment of different processing strategies for localization accuracy and acuity.
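For readers unfamiliar with adaptive threshold tracking, an MAA measurement of the kind described above can be sketched as a simple staircase. The 2-down/1-up rule and the simulated listener below are illustrative assumptions; the study's exact adaptive rule and stopping criteria are not reproduced here.

```python
import random

def maa_staircase(true_maa=4.0, start=20.0, step=2.0, n_trials=80, seed=0):
    """Toy 2-down/1-up staircase for a 3-AFC minimum audible angle task.
    The rule converges near 70.7% correct; the simulated listener is a
    crude stand-in (always correct above threshold, guessing at 1/3 below)."""
    rng = random.Random(seed)
    angle, run, direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        correct = angle > true_maa or rng.random() < 1 / 3
        if correct:
            run += 1
            if run == 2:                      # two correct in a row: harder
                run = 0
                if direction == +1:
                    reversals.append(angle)   # turning point: up -> down
                direction = -1
                angle = max(0.5, angle - step)
        else:                                 # a single error: easier
            run = 0
            if direction == -1:
                reversals.append(angle)       # turning point: down -> up
            direction = +1
            angle += step
    # Threshold estimate: mean of the last few reversal angles.
    tail = reversals[-6:] or [angle]
    return sum(tail) / len(tail)

print(f"estimated MAA: {maa_staircase():.1f}°")
```

The estimate hovers around the simulated listener's true MAA, which is the basic logic behind the adaptive procedure the study uses.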
34
Kyweriga M, Stewart W, Wehr M. Neuronal interaural level difference response shifts are level-dependent in the rat auditory cortex. J Neurophysiol 2013; 111:930-8. [PMID: 24335208 DOI: 10.1152/jn.00648.2013] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
How does the brain accomplish sound localization with invariance to total sound level? Sensitivity to interaural level differences (ILDs) is first computed at the lateral superior olive (LSO) and is observed at multiple levels of the auditory pathway, including the central nucleus of inferior colliculus (ICC) and auditory cortex. In LSO, this ILD sensitivity is level-dependent, such that ILD response functions shift toward the ipsilateral (excitatory) ear with increasing sound level. Thus early in the processing pathway changes in firing rate could indicate changes in sound location, sound level, or both. In ICC, while ILD responses can shift toward either ear in individual neurons, there is no net ILD response shift at the population level. In behavioral studies of human sound localization acuity, ILD sensitivity is invariant to increasing sound levels. Level-invariant sound localization would suggest transformation in level sensitivity between LSO and perception of sound sources. Whether this transformation is completed at the level of the ICC or continued at higher levels remains unclear. It also remains unknown whether perceptual sound localization is level-invariant in rats, as it is in humans. We asked whether ILD sensitivity is level-invariant in rat auditory cortex. We performed single-unit and whole cell recordings in rat auditory cortex under ketamine anesthesia and measured responses to white noise bursts presented through sealed earphones at a range of ILDs. Surprisingly, we found that with increasing sound levels ILD responses shifted toward the ipsilateral ear (which is typically inhibitory), regardless of whether cells preferred ipsilateral, contralateral, or binaural stimuli. Voltage-clamp recordings suggest that synaptic inhibition does not contribute substantially to this transformation in level sensitivity. We conclude that the level invariance of ILD sensitivity seen in behavioral studies is not present in rat auditory cortex.
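As a numerical aside, the ILD itself is just the level difference between the two ear signals in decibels. The tone and the 10 dB attenuation below are arbitrary illustrative choices, not the study's white-noise stimuli.

```python
import numpy as np

fs = 44_100
t = np.arange(int(0.05 * fs)) / fs            # 50 ms snippet
tone = np.sin(2 * np.pi * 500 * t)

# Simulate a source off to the left: right-ear signal attenuated by 10 dB.
left = tone
right = tone * 10 ** (-10 / 20)

def rms_db(x):
    """RMS level in dB (arbitrary reference)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Interaural level difference: left-ear level minus right-ear level.
ild = rms_db(left) - rms_db(right)
print(f"ILD = {ild:.1f} dB")                  # 10.0 dB by construction
```

The question the study raises is whether neural responses track this difference alone or also shift with the overall level of `left` and `right`, which this cue, being a pure difference, deliberately discards.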
35
Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. [PMID: 23499852 DOI: 10.1016/j.neuropsychologia.2013.03.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2012] [Revised: 02/25/2013] [Accepted: 03/04/2013] [Indexed: 10/27/2022]
Abstract
Differences in the processing of moving and static sounds in the human cortex were studied by electroencephalography, with subjects performing an active discrimination task. Sound bursts were presented in the acoustic free-field between 47° to the left and 47° to the right under three different stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli randomly embedded within a stream of frequently occurring non-target events (i.e. 'standards') and rare non-target stimuli (i.e. 'deviants'). The acoustic stimuli were presented in blocks, with each stimulus type appearing in either of three roles: as target, as non-target, or as standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the respective standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a time window of 300-400 ms (the 'd300-400 ms' component of the 'motion-minus-static' difference wave). For motion stimuli, N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli N1 amplitudes over both hemispheres were in the same range. Contrary to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli.
The presently documented N1 components comply with the results of previous studies on auditory space processing and suggest a contralateral dominance in the cortical integration of spatial acoustic information. Additionally, the cortical activity in the 'd300-400 ms' time period indicates that, in addition to the motion origin (as reflected by the N1), the direction of motion (leftward/rightward) or rather motion termination is also cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is not based on a genuine motion-sensitive system, but rather on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specificities of the present P2 component provide evidence for additional motion-specific processes, preponderant in the left hemisphere, possibly associated with the evaluation of motion-specific attributes such as motion direction and/or velocity.
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.