1
Snir A, Cieśla K, Vekslar R, Amedi A. Highly compromised auditory spatial perception in aided congenitally hearing-impaired and rapid improvement with tactile technology. iScience 2024; 27:110808. PMID: 39290844; PMCID: PMC11407022; DOI: 10.1016/j.isci.2024.110808.
Abstract
Spatial understanding is a multisensory construct, yet hearing is the only natural sense that enables simultaneous perception of the entire 3D space. To test whether such spatial understanding depends on auditory experience, we studied congenitally hearing-impaired users of assistive devices. We applied an in-house technology that, inspired by the auditory system, performs intensity weighting to represent external spatial positions and motion on the fingertips. We observed highly impaired auditory spatial capabilities for tracking moving sources, which, in line with the "critical periods" theory, emphasizes the role of nature in sensory development. Meanwhile, for tactile and audio-tactile spatial motion perception, the hearing-impaired showed performance similar to that of typically hearing individuals. The immediate availability of a 360° representation of external space through touch, despite the lack of such experience over the lifetime, points to the significant role of nurture in the development of spatial perception, and to its amodal character. The findings show promise for advancing multisensory solutions for rehabilitation.
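The "intensity-weighting" idea, representing a source's direction by co-activating fingertip vibrators in proportion to angular proximity, can be sketched as follows. This is an illustrative reconstruction, not the authors' actual algorithm: the tactor count, circular layout, and cosine falloff are all assumptions.

```python
import numpy as np

def fingertip_intensities(azimuth_deg, n_tactors=4):
    """Map a source azimuth (degrees, full 360 deg circle) onto vibration
    intensities for tactors assumed to sit at equal angular spacing.
    Each tactor is driven in proportion to its angular proximity to the
    source, so a source between two tactors co-activates both."""
    tactor_angles = np.arange(n_tactors) * 360.0 / n_tactors
    # wrapped angular distance between source and each tactor, in [0, 180]
    dist = np.abs((azimuth_deg - tactor_angles + 180.0) % 360.0 - 180.0)
    # cosine falloff: full drive at 0 deg, zero at one inter-tactor spacing
    spacing = 360.0 / n_tactors
    weights = np.clip(np.cos(np.pi * dist / (2.0 * spacing)), 0.0, None)
    return weights / weights.sum()  # total intensity stays constant

# a source at tactor 0 drives only that tactor; a source midway between
# tactors 0 and 1 splits the drive equally between them
print(fingertip_intensities(0.0))
print(fingertip_intensities(45.0))
```

Because the weights are normalised, overall vibration energy does not vary with direction, so only the spatial pattern across fingertips carries the azimuth cue.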
Affiliation(s)
- Adi Snir
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
  - World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Rotem Vekslar
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2
Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. PMID: 38879756; PMCID: PMC11179994; DOI: 10.1093/cercor/bhae247.
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; at later stages they are processed cooperatively, such that their salience is enhanced. This transition reflects adaptation to the cross-modal configurations that are consistently experienced and thereby become informative about which signals correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. The animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance for stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience had been provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience had been provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two conflicting multisensory principles can be implemented by cross-modal experience on opposite sides of space, even within the same animal.
Affiliation(s)
- Scott A Smyre
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Naomi L Bean
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Barry E Stein
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Benjamin A Rowland
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
3
Aker SC, Faulkner KF, Innes-Brown H, Vatti M, Marozeau J. Some, but not all, cochlear implant users prefer music stimuli with congruent haptic stimulation. J Acoust Soc Am 2024; 155:3101-3117. PMID: 38722101; DOI: 10.1121/10.0025854.
Abstract
Cochlear implant (CI) users often report being unsatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate these challenges. Previous research has shown that normal-hearing listeners give musical stimuli higher preference ratings when concurrent vibrotactile stimulation is congruent with the corresponding auditory signal in intensity and timing than when it is incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100 based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users stand to benefit most.
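The notion of an "audio-to-tactile map" that is congruent or incongruent with the audio in intensity and timing can be illustrated with a toy sketch. The 0.25 s offset, the flattened incongruent dynamics, and the data layout are hypothetical illustrative choices, not the study's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset_s: float  # note onset time in seconds
    level: float    # normalised loudness, 0..1

def tactile_map(notes, timing_congruent=True, intensity_congruent=True,
                timing_offset_s=0.25):
    """Turn each melody note into a (time, amplitude) vibrotactile pulse.
    Congruent dimensions copy the audio; incongruent timing shifts every
    pulse by a fixed offset, and incongruent intensity flattens dynamics."""
    pulses = []
    for n in notes:
        t = n.onset_s if timing_congruent else n.onset_s + timing_offset_s
        a = n.level if intensity_congruent else 0.5
        pulses.append((t, a))
    return pulses

melody = [Note(0.0, 0.9), Note(0.5, 0.4), Note(1.0, 0.7)]
congruent = tactile_map(melody)                          # mirrors the audio
misaligned = tactile_map(melody, timing_congruent=False)  # pulses arrive late
```

Rating such maps against each other, as the study did, then probes which congruence dimensions a listener is actually sensitive to.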
Affiliation(s)
- Scott C Aker
  - Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
  - Oticon A/S, Smørum, 2765, Denmark
- Hamish Innes-Brown
  - Eriksholm Research Centre, Snekkersten, 3070, Denmark
  - Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
- Jeremy Marozeau
  - Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
4
Schulte A, Marozeau J, Ruhe A, Büchner A, Kral A, Innes-Brown H. Improved speech intelligibility in the presence of congruent vibrotactile speech input. Sci Rep 2023; 13:22657. PMID: 38114599; PMCID: PMC10730903; DOI: 10.1038/s41598-023-48893-w.
Abstract
Vibrotactile stimulation is believed to enhance auditory speech perception, offering potential benefits for cochlear implant (CI) users, who may rely on compensatory sensory strategies. Our study advances previous research by directly comparing tactile speech-intelligibility enhancements in normal-hearing (NH) and CI participants using the same paradigm. Moreover, we controlled for stimulus-nonspecific excitatory effects with an incongruent audio-tactile control condition that contained no speech-relevant information. In addition to this incongruent condition, we presented sentences in an auditory-only and a congruent audio-tactile condition, with the congruent tactile stimulus providing low-frequency envelope information via a vibrating probe on the index fingertip. The study involved 23 NH listeners and 14 CI users. In both groups, significant tactile enhancements were observed for congruent tactile stimuli (5.3% for NH and 5.4% for CI participants), but not for incongruent tactile stimulation. These findings replicate previously observed tactile enhancement effects. Juxtaposing our study with previous research, the informational content of the tactile stimulus emerges as a modulator of intelligibility: in general, congruent stimuli enhanced, non-matching tactile stimuli reduced, and neutral stimuli did not change test outcomes. We conclude that the temporal cues provided by congruent vibrotactile stimuli may aid in parsing continuous speech into syllables and words, leading to the observed improvements in intelligibility.
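The congruent tactile signal described here carries the low-frequency amplitude envelope of speech. A minimal envelope follower can be sketched as rectification followed by a one-pole low-pass smoother; the 30 Hz cutoff and the one-pole filter are illustrative assumptions, not necessarily the study's processing chain.

```python
import numpy as np

def speech_envelope(x, fs, cutoff_hz=30.0):
    """Low-frequency envelope of a signal: full-wave rectification
    followed by a one-pole low-pass smoother. The result could drive a
    vibrotactile probe (e.g. on the index fingertip)."""
    rectified = np.abs(np.asarray(x, dtype=float))
    # one-pole smoothing coefficient matched to the desired cutoff
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    env = np.empty_like(rectified)
    acc = 0.0
    for i, v in enumerate(rectified):
        acc += alpha * (v - acc)  # exponential moving average
        env[i] = acc
    return env

# for a steady 200 Hz tone, the envelope settles near mean(|sin|) = 2/pi
fs = 8000
t = np.arange(fs) / fs
env = speech_envelope(np.sin(2 * np.pi * 200 * t), fs)
```

A production system would typically use a proper anti-aliasing low-pass filter, but the rectify-then-smooth structure is the same.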
Affiliation(s)
- Alina Schulte
  - Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
  - Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Jeremy Marozeau
  - Music and Cochlear Implants Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Anna Ruhe
  - Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andreas Büchner
  - Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andrej Kral
  - Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Hamish Innes-Brown
  - Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
  - Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
5
Bean NL, Stein BE, Rowland BA. Cross-modal exposure restores multisensory enhancement after hemianopia. Cereb Cortex 2023; 33:11036-11046. PMID: 37724427; PMCID: PMC10646694; DOI: 10.1093/cercor/bhad343.
Abstract
Hemianopia is a common consequence of unilateral damage to visual cortex that manifests as profound blindness in contralesional space. A noninvasive cross-modal (visual-auditory) exposure paradigm has been developed in an animal model to ameliorate this disorder. Repeated presentation of a visual-auditory stimulus restores overt responses to visual stimuli in the blinded hemifield. It is believed to accomplish this by enhancing the visual sensitivity of the circuits remaining after a lesion of visual cortex, in particular those involving the multisensory neurons of the superior colliculus. Neurons in this midbrain structure are known to integrate spatiotemporally congruent visual and auditory signals to amplify their responses, which, in turn, enhances behavioral performance. Here we evaluated the relationship between the rehabilitation of hemianopia and this process of multisensory integration. Induction of hemianopia also eliminated multisensory enhancement in the blinded hemifield. Both vision and multisensory enhancement recovered rapidly with the rehabilitative cross-modal exposures. However, although both reached pre-lesion levels at similar rates, they did so with different spatial patterns. The results suggest that the capability for multisensory integration and enhancement is not a prerequisite for visual recovery in hemianopia, and that the underlying mechanisms of recovery may be more complex than currently appreciated.
Affiliation(s)
- Naomi L Bean
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
- Barry E Stein
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
- Benjamin A Rowland
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, United States
6
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. PMID: 37065927; PMCID: PMC10090419; DOI: 10.3389/fnins.2023.1150168.
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality, and to become desensitized ("attenuated" or "habituated") or sensitized ("potentiated") to events that are predictable, via modulatory dynamics. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual-auditory stimuli, followed by a fourth stimulus that was either the same or different (a "switch"). The modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual-auditory stimulus train to either of its modality-specific component stimuli, and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron's transform, nor are they dependent on the neuron's output.
7
Bean NL, Smyre SA, Stein BE, Rowland BA. Noise-rearing precludes the behavioral benefits of multisensory integration. Cereb Cortex 2023; 33:948-958. PMID: 35332919; PMCID: PMC9930622; DOI: 10.1093/cercor/bhac113.
Abstract
Concordant visual-auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for "multisensory integration" is not innate: it is acquired only after substantial cross-modal (e.g. auditory-visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound ("noise-rearing") precludes their ability to obtain this experience and the SC's ability to construct a normal multisensory (auditory-visual) transform; SC responses to combinations of concordant visual-auditory stimuli are depressed, rather than enhanced. The present experiments examined the behavioral consequences of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single-neuron physiology in the multisensory deficits induced when noise disrupts early visual-auditory experience.
Affiliation(s)
- Naomi L Bean (corresponding author)
  - Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Barry E Stein
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
- Benjamin A Rowland
  - Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston-Salem, NC 27157, United States
8
Li H, Song L, Wang P, Weiss PH, Fink GR, Zhou X, Chen Q. Impaired body-centered sensorimotor transformations in congenitally deaf people. Brain Commun 2022; 4:fcac148. PMID: 35774184; PMCID: PMC9240416; DOI: 10.1093/braincomms/fcac148.
Abstract
Congenital deafness modifies an individual's daily interaction with the environment and alters the fundamental perception of the external world. How congenital deafness shapes the interface between the internal and external worlds remains poorly understood. To interact efficiently with the external world, visuospatial representations of external target objects need to be effectively transformed into sensorimotor representations with reference to the body. Here, we tested the hypothesis that egocentric, body-centred sensorimotor transformation is impaired in congenital deafness. Consistent with this hypothesis, we found that congenital deafness induced impairments in egocentric judgements associating external objects with the internal body. These impairments were due to deficient body-centred sensorimotor transformation per se, rather than to reduced fidelity of the visuospatial representations of the egocentric positions. At the neural level, we first replicated the well-documented critical involvement of the frontoparietal network in egocentric processing, in both congenitally deaf participants and hearing controls. However, neither the strength of neural activity nor the intra-network connectivity within the frontoparietal network alone could account for variance in egocentric performance. Instead, the inter-network connectivity between the task-positive frontoparietal network and the task-negative default-mode network was significantly correlated with egocentric performance: the more cross-talk between them, the worse the egocentric judgement. Accordingly, the impaired egocentric performance in the deaf group was related to increased inter-network connectivity between the frontoparietal network and the default-mode network and to decreased intra-network connectivity within the default-mode network. The altered neural network dynamics in congenital deafness were observed for both evoked neural activity during egocentric processing and intrinsic neural activity during rest. Our findings thus not only demonstrate the optimal network configurations between the task-positive and task-negative neural networks underlying coherent body-centred sensorimotor transformations, but also unravel a critical cause (i.e. impaired body-centred sensorimotor transformation) of a variety of hitherto unexplained difficulties in sensory-guided movement that the deaf population experiences in daily life.
Affiliation(s)
- Hui Li
  - Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China
  - School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Li Song
  - Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China
  - School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Pengfei Wang
  - Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China
  - School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Peter H. Weiss
  - Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany
  - Department of Neurology, University Hospital Cologne, Cologne University, 50937 Cologne, Germany
- Gereon R. Fink
  - Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany
  - Department of Neurology, University Hospital Cologne, Cologne University, 50937 Cologne, Germany
- Xiaolin Zhou
  - Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, 200062 Shanghai, China
- Qi Chen
  - Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany
  - Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China
  - School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
9
Event-related potential correlates of visuo-tactile motion processing in congenitally deaf humans. Neuropsychologia 2022; 170:108209. DOI: 10.1016/j.neuropsychologia.2022.108209.
10
Fletcher MD. Can haptic stimulation enhance music perception in hearing-impaired listeners? Front Neurosci 2021; 15:723877. PMID: 34531717; PMCID: PMC8439542; DOI: 10.3389/fnins.2021.723877.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This "electro-haptic stimulation" improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
  - University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
  - Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
11
Lasfargues-Delannoy A, Strelnikov K, Deguine O, Marx M, Barone P. Supra-normal skills in processing of visuo-auditory prosodic information by cochlear-implanted deaf patients. Hear Res 2021; 410:108330. PMID: 34492444; DOI: 10.1016/j.heares.2021.108330.
Abstract
Cochlear-implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension, fusing speech-reading skills with their deficient auditory perception. However, little is known about how CI patients perceive prosodic information relating to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process the paralinguistic prosodic information of multimodal speech, and which visual strategies they employ. A psychophysical assessment was developed in which CI patients and normal-hearing (NH) controls had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared across groups. This study confirmed that prosodic processing is multisensory, but it also revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared with hearing controls, irrespective of age: CI patients had a visuo-auditory gain more than three times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory situation through a specific oculomotor exploration of the face, fixating the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern divided equally between the eyes and mouth. To conclude, our study demonstrated that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that they develop a specific adaptive strategy that contributes directly to the comprehension of speech content.
Affiliation(s)
- Anne Lasfargues-Delannoy
  - Université Fédérale de Toulouse, Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Kuzma Strelnikov
  - Université Fédérale de Toulouse, Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, France
- Olivier Deguine
  - Université Fédérale de Toulouse, Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Mathieu Marx
  - Université Fédérale de Toulouse, Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Pascal Barone
  - Université Fédérale de Toulouse, Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
12
Smyre SA, Wang Z, Stein BE, Rowland BA. Multisensory enhancement of overt behavior requires multisensory experience. Eur J Neurosci 2021; 54:4514-4527. PMID: 34013578; DOI: 10.1111/ejn.15315.
Abstract
The superior colliculus (SC) is richly endowed with neurons that integrate cues from different senses to enhance their physiological responses and the overt behaviors they mediate. However, in the absence of experience with cross-modal combinations (e.g., visual-auditory), they fail to develop this characteristic multisensory capability: their multisensory responses are no greater than their most effective unisensory responses. Presumably, this impairment in neural development would be reflected in corresponding impairments in SC-mediated behavioral capabilities such as detection and localization performance. Here, we tested that assumption directly in cats raised to adulthood in darkness. They, along with a normally reared cohort, were trained to approach brief visual or auditory stimuli. The animals were then tested with these stimuli individually and in combination, under ambient light conditions consistent with their rearing conditions and home environment as well as under the opposite lighting condition. As expected, normally reared animals detected and localized the cross-modal combinations significantly better than their individual component stimuli. However, dark-reared animals showed significant deficits in multisensory detection and localization performance. The results indicate that a physiological impairment in single multisensory SC neurons is predictive of an impairment in overt multisensory behaviors.
Affiliation(s)
- Scott A Smyre: Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Zhengyang Wang: Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Barry E Stein: Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Benjamin A Rowland: Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA

13
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. [PMID: 34177440] [PMCID: PMC8219940] [DOI: 10.3389/fnins.2021.581414]
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher: Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom; Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur: Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom

14
Moïn-Darbari K, Lafontaine L, Maheu M, Bacon BA, Champoux F. Vestibular status: A missing factor in our understanding of brain reorganization in deaf individuals. Cortex 2021; 138:311-317. [PMID: 33784514] [DOI: 10.1016/j.cortex.2021.02.012]
Abstract
The brain of deaf people is definitely not just deaf, and we have to reconsider what we know about the impact of hearing loss on brain development in light of comorbid vestibular impairments.
Affiliation(s)
- K Moïn-Darbari: École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
- L Lafontaine: École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
- M Maheu: École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada
- B A Bacon: Department of Psychology, Carleton University, Ottawa, Ontario, Canada
- F Champoux: École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Québec, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada

15
Mao Y, Chen H, Xie S, Xu L. Acoustic Assessment of Tone Production of Prelingually-Deafened Mandarin-Speaking Children With Cochlear Implants. Front Neurosci 2020; 14:592954. [PMID: 33250708] [PMCID: PMC7673231] [DOI: 10.3389/fnins.2020.592954]
Abstract
Objective: The purpose of the present study was to investigate the Mandarin tone production performance of prelingually deafened children with cochlear implants (CIs) using modified acoustic analyses, and to evaluate the relationship between demographic factors of those CI children and their tone production ability. Methods: Two hundred seventy-eight prelingually deafened children with CIs and 173 age-matched normal-hearing (NH) children participated in the study. Thirty-six monosyllabic Mandarin Chinese words were recorded from each subject, and the fundamental frequency (F0) was extracted from each tone token. Two acoustic measures (i.e., differentiability and hit rate) were computed based on either the F0 onset and offset values (the tone ellipses of the two-dimensional [2D] method) or the F0 onset, midpoint, and offset values (the tone ellipsoids of the three-dimensional [3D] method). Correlations were computed between the acoustic measures as well as between the methods, and the relationship between demographic factors and the acoustic measures was also explored. Results: The children with CIs showed significantly poorer performance in tone differentiability and hit rate than the NH children. For both the CI and NH groups, performance on the two acoustic measures was highly correlated (r values: 0.895-0.961), and performance was highly correlated between the 2D and 3D methods (r values: 0.774-0.914). Age at implantation and duration of CI use showed weak correlations with the acoustic measures under both methods; these two factors jointly accounted for 15.4-18.9% of the total variance in tone production performance. Conclusion: There were significant deficits in tone production ability in most prelingually deafened children with CIs, even after prolonged use of the devices. The strong correlation between the two methods suggests that the simpler 2D method is sufficient for acoustic assessment of lexical tones in hearing-impaired children. Age at implantation and especially duration of CI use were significant, although weak, predictors of tone development in pediatric CI users. Although a large part of tone production ability could not be attributed to these two factors, the results still encourage early implantation and continual CI use for better lexical tone development in Mandarin-speaking pediatric CI users.
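The 2D method and hit-rate measure described above can be illustrated with a short sketch. This is a plausible reconstruction for illustration only, not the authors' code: each tone token is reduced to an (F0 onset, F0 offset) point, and "hit rate" is approximated as the fraction of tokens whose nearest tone-category centroid (by Mahalanobis distance, i.e., relative to each category's fitted ellipse) is their own category. The function name and the toy data are hypothetical.

```python
import numpy as np

def hit_rate_2d(points_by_tone):
    """For each tone category, fit a centroid and covariance (the
    'ellipse') to its (F0 onset, F0 offset) tokens, then classify
    every token to the nearest centroid by Mahalanobis distance.
    Returns the fraction of tokens assigned to their own tone."""
    stats = {}
    for tone, pts in points_by_tone.items():
        pts = np.asarray(pts, dtype=float)
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularize for safety
        stats[tone] = (mu, np.linalg.inv(cov))
    hits, total = 0, 0
    for tone, pts in points_by_tone.items():
        for p in np.asarray(pts, dtype=float):
            dists = {t: float((p - mu) @ icov @ (p - mu))
                     for t, (mu, icov) in stats.items()}
            hits += int(min(dists, key=dists.get) == tone)
            total += 1
    return hits / total

# Toy tokens: (F0 onset, F0 offset) pairs in Hz for three tone shapes.
rng = np.random.default_rng(0)
tones = {
    1: rng.normal([250.0, 250.0], 8.0, (20, 2)),  # high level
    2: rng.normal([200.0, 260.0], 8.0, (20, 2)),  # rising
    4: rng.normal([270.0, 190.0], 8.0, (20, 2)),  # falling
}
```

On well-separated toy clusters like these, the hit rate approaches 1.0; overlapping tone ellipses, as reported for the CI group, would drive it down.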
Affiliation(s)
- Yitao Mao: Department of Radiology, Xiangya Hospital, Central South University, Changsha, China
- Hongsheng Chen: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, China
- Shumin Xie: Department of Otolaryngology-Head and Neck Surgery, Xiangya Hospital, Central South University, Changsha, China
- Li Xu: Communication Sciences and Disorders, Ohio University, Athens, OH, United States

16
Scurry AN, Chifamba K, Jiang F. Electrophysiological Dynamics of Visual-Tactile Temporal Order Perception in Early Deaf Adults. Front Neurosci 2020; 14:544472. [PMID: 33071731] [PMCID: PMC7539666] [DOI: 10.3389/fnins.2020.544472]
Abstract
Studies of compensatory plasticity in early deaf (ED) individuals have mainly focused on unisensory processing, and on spatial rather than temporal coding. However, precise discrimination of the temporal relationship between stimuli is imperative for successful perception of and interaction with the complex, multimodal environment. Although the properties of cross-modal temporal processing have been extensively studied in neurotypical populations, remarkably little is known about how the loss of one sense impacts the integrity of temporal interactions among the remaining senses. To understand how auditory deprivation affects multisensory temporal interactions, ED and age-matched normal hearing (NH) controls performed a visual-tactile temporal order judgment task in which visual and tactile stimuli were separated by varying stimulus onset asynchronies (SOAs) and subjects had to discern the leading stimulus. Participants performed the task while EEG data were recorded, and group averaged event-related potential waveforms were compared between groups in occipital and fronto-central electrodes. Despite similar temporal order sensitivities and performance accuracy, ED adults had larger visual P100 amplitudes for all SOA levels and larger tactile N140 amplitudes for the shortest asynchronous (± 30 ms) and synchronous SOA levels. The enhanced signal strength reflected in these components from ED adults is discussed in terms of compensatory recruitment of cortical areas for visual-tactile processing. In addition, ED adults had similar tactile P200 amplitudes as NH but longer P200 latencies, suggesting reduced efficiency in later processing of tactile information. Overall, these results suggest that greater responses by ED for early processing of visual and tactile signals are likely critical for maintained performance in visual-tactile temporal order discrimination.
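The "temporal order sensitivities" mentioned above are conventionally quantified by fitting a psychometric function to the proportion of "visual first" judgments across SOAs, yielding a point of subjective simultaneity (PSS) and a just-noticeable difference (JND). The sketch below shows that standard analysis with a coarse grid-search logistic fit; it is a generic illustration, not the paper's actual pipeline, and the function name, grids, and synthetic observer are assumptions.

```python
import numpy as np

def toj_psychometric(soas, p_visual_first):
    """Fit a logistic psychometric function to 'visual judged first'
    proportions as a function of SOA (ms; visual leading = positive)
    by coarse grid search over (PSS, slope). Returns (PSS, JND),
    where JND is half the 25%-75% SOA interval, i.e. ln(3)/slope."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_visual_first, dtype=float)
    best = (np.inf, 0.0, 0.1)  # (error, pss, slope)
    for pss in np.linspace(soas.min(), soas.max(), 201):
        for slope in np.linspace(0.01, 0.5, 100):
            pred = 1.0 / (1.0 + np.exp(-slope * (soas - pss)))
            err = float(np.sum((pred - p) ** 2))
            if err < best[0]:
                best = (err, pss, slope)
    _, pss, slope = best
    return pss, np.log(3.0) / slope

# Synthetic observer with true PSS = 10 ms and slope = 0.1 per ms.
soas = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_obs = 1.0 / (1.0 + np.exp(-0.1 * (soas - 10.0)))
pss, jnd = toj_psychometric(soas, p_obs)
```

With noiseless synthetic data the grid search recovers the generating PSS and JND up to the grid resolution; real analyses would use a proper optimizer and binomial likelihood.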
Affiliation(s)
- Alexandra N Scurry: Department of Psychology, University of Nevada, Reno, Reno, NV, United States
- Kudzai Chifamba: Department of Psychology, University of Nevada, Reno, Reno, NV, United States
- Fang Jiang: Department of Psychology, University of Nevada, Reno, Reno, NV, United States

17
Sharp A, Bacon BA, Champoux F. Enhanced tactile identification of musical emotion in the deaf. Exp Brain Res 2020; 238:1229-1236. [DOI: 10.1007/s00221-020-05789-9]
18
Fletcher MD, Hadeedi A, Goehring T, Mills SR. Electro-haptic enhancement of speech-in-noise performance in cochlear implant users. Sci Rep 2019; 9:11428. [PMID: 31388053] [PMCID: PMC6684551] [DOI: 10.1038/s41598-019-47718-z]
Abstract
Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information ("electro-haptic stimulation"; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using: (1) a tactile signal derived using an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise with and without vibro-tactile stimulation of the wrist in CI users, before and after a short training regime. No effect of EHS was found before training, but after training EHS was found to improve the number of words correctly identified by an average of 8.3%-points, with some users improving by more than 20%-points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.
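The abstract does not specify the real-time tactile algorithm; a common approach in the electro-haptic literature is to extract the audio amplitude envelope and use it to modulate a low-frequency carrier suited to vibrotactile actuators. The sketch below illustrates that general idea only; the function name, carrier frequency, and envelope cutoff are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def envelope_to_vibration(audio, fs, carrier_hz=125.0, env_cutoff_hz=30.0):
    """Rectify the audio, smooth it with a one-pole low-pass filter
    (cheap enough for sample-by-sample real-time use), and use the
    result to amplitude-modulate a low-frequency carrier suitable for
    a wrist-worn vibrotactile actuator. Parameter values are
    illustrative, not from the paper."""
    rectified = np.abs(np.asarray(audio, dtype=float))
    alpha = 1.0 - np.exp(-2.0 * np.pi * env_cutoff_hz / fs)
    env = np.empty_like(rectified)
    acc = 0.0
    for i, x in enumerate(rectified):  # one-pole IIR envelope follower
        acc += alpha * (x - acc)
        env[i] = acc
    t = np.arange(len(rectified)) / fs
    return env * np.sin(2.0 * np.pi * carrier_hz * t)

# Demo: a 200 Hz tone with a 4 Hz amplitude modulation, loosely
# mimicking the syllable-rate envelope of speech.
fs = 8000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vib = envelope_to_vibration(speech_like, fs)
```

The returned signal preserves the slow amplitude contour of the input while discarding its fine structure, which is the kind of "crucial missing sound information" a tactile channel can carry.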
Affiliation(s)
- Mark D Fletcher: Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom; University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Amatullah Hadeedi: Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Tobias Goehring: MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom
- Sean R Mills: Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom

19
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648] [DOI: 10.1523/jneurosci.1806-18.2018]
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC, and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.
Significance Statement: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
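The mutual-inhibition default computation can be made concrete with a toy two-channel rate model. This is a generic sketch of the concept, not the authors' published model; the function name and every parameter value are arbitrary.

```python
def sc_response(vis_drive, aud_drive, w_inhib, steps=200, dt=0.05):
    """Toy two-unit rate model of a multisensory SC neuron's inputs.
    Each channel relaxes toward its own drive minus w_inhib times the
    other channel's (rectified) activity. Returns the summed steady
    activity, a stand-in for the neuron's multisensory response."""
    v = a = 0.0
    for _ in range(steps):  # simple Euler integration to steady state
        v_next = v + dt * (-v + max(0.0, vis_drive - w_inhib * a))
        a_next = a + dt * (-a + max(0.0, aud_drive - w_inhib * v))
        v, a = v_next, a_next
    return v + a

best_uni = sc_response(1.0, 0.0, w_inhib=0.9)  # visual cue alone
naive = sc_response(1.0, 1.0, w_inhib=0.9)     # default state: competition
mature = sc_response(1.0, 1.0, w_inhib=0.0)    # after experience: cooperation
```

With strong mutual inhibition the combined visual-auditory response barely exceeds the best unisensory response, mirroring the naive state; removing the inhibition lets the drives sum, mirroring the mature cooperative state.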
20
Rizza A, Terekhov AV, Montone G, Olivetti-Belardinelli M, O'Regan JK. Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals. Front Psychol 2018; 9:767. [PMID: 29875719] [PMCID: PMC5974558] [DOI: 10.3389/fpsyg.2018.00767]
Abstract
Tactile speech aids, though extensively studied in the 1980s and 1990s, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ to one tactile stimulus and /VA/ to another. After training, we tested whether auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below, its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy rather than to true perceptual integration. We therefore ran a further study (Experiment 2), in which we created a tactile version of the McGurk effect. We extensively trained two subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent-motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association and asked subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis that tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than at a truly perceptual level.
Affiliation(s)
- Aurora Rizza: Department of Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
- Alexander V Terekhov: Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Guglielmo Montone: Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Marta Olivetti-Belardinelli: Department of Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy; ECONA Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy
- J Kevin O'Regan: Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France

21
Land R, Radecke JO, Kral A. Congenital Deafness Reduces, But Does Not Eliminate Auditory Responsiveness in Cat Extrastriate Visual Cortex. Neuroscience 2018; 375:149-157. [DOI: 10.1016/j.neuroscience.2018.01.065]
22
Schierholz I, Finke M, Kral A, Büchner A, Rach S, Lenarz T, Dengler R, Sandmann P. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study. Hum Brain Mapp 2017; 38:2206-2225. [PMID: 28130910] [DOI: 10.1002/hbm.23515]
Abstract
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three groups of patients with auditory implants (Hannover Medical School; ABI: n = 6; CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses.
Affiliation(s)
- Irina Schierholz: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Mareike Finke: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andrej Kral: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany; Institute of AudioNeuroTechnology and Department of Experimental Otology, Hannover Medical School, Hannover, Germany; School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas
- Andreas Büchner: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Stefan Rach: Department of Epidemiological Methods and Etiological Research, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
- Thomas Lenarz: Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Reinhard Dengler: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany
- Pascale Sandmann: Department of Neurology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4all", Hannover, Germany; Department of Otorhinolaryngology, University Hospital Cologne, Cologne, Germany

23
Stropahl M, Chen LC, Debener S. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations. Hear Res 2017; 343:128-137. [DOI: 10.1016/j.heares.2016.07.005]
24
Landry SP, Champoux F. Musicians react faster and are better multisensory integrators. Brain Cogn 2016; 111:156-162. [PMID: 27978450] [DOI: 10.1016/j.bandc.2016.12.001]
Abstract
The results from numerous investigations suggest that musical training might enhance how the senses interact. Despite repeated confirmation of anatomical and structural changes in visual, tactile, and auditory regions, significant changes have only been reported in the audiovisual domain and for the detection of audio-tactile incongruencies. In the present study, we aimed to test whether long-term musical training might also enhance other multisensory processes at a behavioural level. An audio-tactile reaction time task was administered to a group of musicians and non-musicians. We found significantly faster reaction times in musicians for auditory, tactile, and audio-tactile stimulation. Statistical analyses of the combined uni- and multisensory reaction times revealed that musicians possess a statistical advantage over non-musicians when responding to multisensory stimuli. These results suggest for the first time that long-term musical training reduces simple non-musical auditory, tactile, and multisensory reaction times. Taken together with previous results from other sensory modalities, these findings strongly point towards musicians being better at integrating the inputs from various senses.
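A standard way to test whether faster multisensory reaction times reflect genuine integration rather than mere probability summation is Miller's race-model inequality. The abstract does not state which statistical analysis was used, so the sketch below is an assumed, generic illustration; the function name and the toy data are hypothetical.

```python
import numpy as np

def race_model_violation(rt_a, rt_t, rt_at, probs=np.linspace(0.05, 0.95, 10)):
    """Miller's race-model inequality, sketched: at each multisensory
    RT quantile, compare the multisensory CDF (the quantile's
    probability) with the bound F_A(t) + F_T(t) built from the
    unisensory CDFs. Positive values mean responses faster than any
    race of independent unisensory processes predicts, i.e., evidence
    of integration."""
    rt_a = np.asarray(rt_a, dtype=float)
    rt_t = np.asarray(rt_t, dtype=float)
    rt_at = np.asarray(rt_at, dtype=float)
    violations = []
    for q in probs:
        t = np.quantile(rt_at, q)
        bound = min(1.0, np.mean(rt_a <= t) + np.mean(rt_t <= t))
        violations.append(q - bound)
    return np.array(violations)

# Demo: multisensory RTs far faster than either unisensory
# distribution violate the bound at every tested quantile.
viol = race_model_violation([500.0] * 50, [520.0] * 50, [300.0] * 50)
```

In the demo, every entry of `viol` is positive; with multisensory RTs no faster than the unisensory ones, the values stay at or below zero.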
Affiliation(s)
- Simon P Landry: Université de Montréal, Faculté de Medicine, École d'orthophonie et d'audiologie, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada
- François Champoux: Université de Montréal, Faculté de Medicine, École d'orthophonie et d'audiologie, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada

25
Body Perception and Action Following Deafness. Neural Plast 2016; 2016:5260671. [PMID: 26881115] [PMCID: PMC4737455] [DOI: 10.1155/2016/5260671]
Abstract
The effect of deafness on sensory abilities has been the topic of extensive investigation over the past decades. These investigations have mostly focused on visual capacities. We are only now starting to investigate how the deaf experience their own bodies and body-related abilities. Indeed, a growing corpus of research suggests that auditory input could play an important role in body-related processing. Deafness could therefore disturb such processes. It has also been suggested that many unexplained daily difficulties experienced by the deaf could be related to deficits in this underexplored field. In the present review, we propose an overview of the current state of knowledge on the effects of deafness on body-related processing.
26
Schierholz I, Finke M, Schulte S, Hauthal N, Kantzke C, Rach S, Büchner A, Dengler R, Sandmann P. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users. Hear Res 2015; 328:133-147. [DOI: 10.1016/j.heares.2015.08.009]
27
Wu C, Stefanescu RA, Martel DT, Shore SE. Listening to another sense: somatosensory integration in the auditory system. Cell Tissue Res 2015; 361:233-250. [PMID: 25526698] [PMCID: PMC4475675] [DOI: 10.1007/s00441-014-2074-7]
Abstract
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.
Affiliation(s)
- Calvin Wu: Department of Otolaryngology, Kresge Hearing Research Institute, University of Michigan, Ann Arbor, MI, 48109, USA