1
Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023; 146:4809-4825. PMID: 37503725; PMCID: PMC10690027; DOI: 10.1093/brain/awad255. Open access.
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus, the prime example of auditory phantom perception, we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The increased neural noise can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (the brain's expectations) and a likelihood (the bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down mechanism and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
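The precision-weighted Bayesian update this abstract invokes can be illustrated with a minimal numerical sketch of the Gaussian (conjugate) case. This is an editorial illustration, not code from the paper; all numbers are arbitrary, and the "silence" prior and noise-driven likelihood offset are hypothetical labels for the quantities the abstract describes.

```python
def posterior(mu_prior, var_prior, mu_like, var_like):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian
    likelihood: the standard conjugate Bayesian update."""
    tau_prior, tau_like = 1.0 / var_prior, 1.0 / var_like
    mu_post = (tau_prior * mu_prior + tau_like * mu_like) / (tau_prior + tau_like)
    var_post = 1.0 / (tau_prior + tau_like)
    return mu_post, var_post

# Prior expectation of silence (mean 0). The likelihood represents the
# bottom-up signal, here a noise-driven offset of 0.5 (arbitrary units).
mu_quiet, var_quiet = posterior(0.0, 1.0, 0.5, 1.0)  # broad (imprecise) likelihood
mu_tin, var_tin = posterior(0.0, 1.0, 0.5, 0.1)      # precise likelihood
```

With an imprecise likelihood the percept stays near the prior of silence; raising the likelihood's precision pulls the posterior toward the noise-driven evidence and sharpens it, the shift the abstract associates with a phantom percept.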
Affiliation(s)
- Achim Schilling
  - Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
  - Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- William Sedley
  - Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Richard Gerum
  - Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
  - Department of Physics and Astronomy and Center for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Claus Metzner
  - Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Andreas Maier
  - Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Holger Schulze
  - Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Fan-Gang Zeng
  - Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology–Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, USA
- Karl J Friston
  - Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Patrick Krauss
  - Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
  - Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
  - Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
2
Yüksel M, Sarlik E, Çiprut A. Emotions and Psychological Mechanisms of Listening to Music in Cochlear Implant Recipients. Ear Hear 2023; 44:1451-1463. PMID: 37280743; DOI: 10.1097/aud.0000000000001388.
Abstract
OBJECTIVES: Music is a multidimensional phenomenon and is classified by its arousal properties, emotional quality, and structural characteristics. Although structural features of music (i.e., pitch, timbre, and tempo) and music emotion recognition in cochlear implant (CI) recipients are popular research topics, music-evoked emotions and the related psychological mechanisms that reflect both the individual and social context of music are largely ignored. Understanding the music-evoked emotions (the "what") and related mechanisms (the "why") can help professionals and CI recipients better comprehend the impact of music on CI recipients' daily lives. Therefore, the purpose of this study was to evaluate these aspects in CI recipients and compare the findings to those of normal-hearing (NH) controls. DESIGN: This study included 50 CI recipients with diverse auditory experiences: prelingually deafened (at or before 6 years of age) and early implanted (N = 21), prelingually deafened and late implanted (at or after 12 years of age; N = 13), and postlingually deafened (N = 16), as well as 50 age-matched NH controls. All participants completed the same survey, which included 28 emotions and 10 mechanisms (brainstem reflex, rhythmic entrainment, evaluative conditioning, contagion, visual imagery, episodic memory, musical expectancy, aesthetic judgment, cognitive appraisal, and lyrics). Data are presented in detail for the CI groups and compared between CI groups and between CI and NH groups. RESULTS: A principal component analysis yielded five emotion factors that explained 63.4% of the total variance in the CI group: anxiety and anger, happiness and pride, sadness and pain, sympathy and tenderness, and serenity and satisfaction. Positive emotions such as happiness, tranquility, love, joy, and trust ranked as most often experienced in all groups, whereas negative and complex emotions such as guilt, fear, anger, and anxiety ranked lowest. The CI group ranked lyrics and rhythmic entrainment highest among the emotion mechanisms, and there was a statistically significant group difference in the episodic memory mechanism, on which the prelingually deafened, early implanted group scored lowest. CONCLUSION: Our findings indicate that music can evoke similar emotions in CI recipients with diverse auditory experiences as it does in NH individuals. However, prelingually deafened and early implanted individuals lack autobiographical memories associated with music, which affects the feelings evoked by music. In addition, the prominence of rhythmic entrainment and lyrics as mechanisms of music-evoked emotion suggests that rehabilitation programs should pay particular attention to these cues.
Affiliation(s)
- Mustafa Yüksel
  - Ankara Medipol University School of Health Sciences, Department of Speech and Language Therapy, Ankara, Turkey
- Esra Sarlik
  - Marmara University Institute of Health Sciences, Audiology and Speech Disorders Program, Istanbul, Turkey
- Ayça Çiprut
  - Marmara University Faculty of Medicine, Department of Audiology, Istanbul, Turkey
3
Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. PMID: 36990952; PMCID: PMC10121905; DOI: 10.1016/j.tins.2023.02.004.
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, depends on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and that crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness; these changes start as early as mild-to-moderate hearing loss and are reversible when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited to improve clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral
  - Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany
  - Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
  - Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
4
Verma T, Aker SC, Marozeau J. Effect of Vibrotactile Stimulation on Auditory Timbre Perception for Normal-Hearing Listeners and Cochlear-Implant Users. Trends Hear 2023; 27:23312165221138390. PMID: 36789758; PMCID: PMC9932763; DOI: 10.1177/23312165221138390. Open access.
Abstract
This study tests the hypothesis that vibrotactile stimulation can affect timbre perception. A multidimensional scaling experiment was conducted: twenty listeners with normal hearing and nine cochlear implant (CI) users judged the dissimilarity of a set of synthetic sounds that varied in attack time and amplitude-modulation depth. The listeners were simultaneously presented with vibrotactile stimuli, which also varied in attack time and amplitude-modulation depth. The results showed that alterations to the temporal waveform of the tactile stimuli affected the listeners' dissimilarity judgments of the audio. A three-dimensional analysis revealed evidence of crossmodal processing, in which the audio and tactile features jointly accounted for the dissimilarity judgments. For the normal-hearing listeners, 86% of the first dimension was explained by audio impulsiveness and 14% by tactile impulsiveness; 75% of the second dimension was explained by audio roughness (fast amplitude modulation), with its tactile counterpart explaining 25%. Interestingly, the third dimension combined 43% audio impulsiveness and 57% tactile amplitude modulation. For the CI listeners, the first dimension was mostly accounted for by tactile roughness and the second by audio impulsiveness. This experiment shows that the perception of timbre can be affected by tactile input and could inform the development of new audio-tactile devices for people with hearing impairment.
Affiliation(s)
- Tushar Verma
  - Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Scott C. Aker
  - Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
  - Oticon Medical, Smørum, Denmark
- Jeremy Marozeau
  - Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
5
Schilling A, Gerum R, Metzner C, Maier A, Krauss P. Intrinsic Noise Improves Speech Recognition in a Computational Model of the Auditory Pathway. Front Neurosci 2022; 16:908330. PMID: 35757533; PMCID: PMC9215117; DOI: 10.3389/fnins.2022.908330. Open access.
Abstract
Noise is generally considered to harm information-processing performance. However, in the context of stochastic resonance, noise has been shown to improve the detection of weak sub-threshold signals, and it has been proposed that the brain might actively exploit this phenomenon. Especially within the auditory system, recent studies suggest that intrinsic noise plays a key role in signal processing and might even correspond to the increased spontaneous neuronal firing rates observed in early processing stages of the auditory brainstem and cortex after hearing loss. Here we present a computational model of the auditory pathway based on a deep neural network, trained on speech recognition. We simulate different levels of hearing loss and investigate the effect of intrinsic noise. Remarkably, speech recognition after hearing loss actually improves with additional intrinsic noise. This surprising result indicates that intrinsic noise might not only play a crucial role in human auditory processing, but might even be beneficial for contemporary machine learning approaches.
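The stochastic-resonance effect this abstract builds on (noise pushing a weak sub-threshold signal across a detection threshold) can be reproduced with a few lines of NumPy. This is an illustrative toy model, not the paper's deep-network pipeline; the threshold, signal amplitude, and noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weak 5 Hz tone whose peak (0.9) never reaches the detection threshold (1.0).
t = np.linspace(0.0, 2.0, 20000)
weak_signal = 0.9 * np.sin(2 * np.pi * 5 * t)
THRESHOLD = 1.0

def detection_correlation(noise_sd):
    """Correlation between a simple threshold detector's output and the
    hidden signal, for a given amount of added Gaussian noise."""
    noisy = weak_signal + rng.normal(0.0, noise_sd, size=t.size)
    detected = (noisy > THRESHOLD).astype(float)
    if detected.std() == 0.0:  # the threshold was never crossed
        return 0.0
    return float(np.corrcoef(detected, weak_signal)[0, 1])

c_none = detection_correlation(0.0)   # no noise: the signal is invisible
c_mid = detection_correlation(0.3)    # moderate noise: crossings track the peaks
c_high = detection_correlation(10.0)  # excessive noise: detections become nearly random
```

With no noise the detector output carries no information about the signal; an intermediate noise level maximizes the correlation, and too much noise washes it out again, the classic inverted-U signature of stochastic resonance.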
Affiliation(s)
- Achim Schilling
  - Laboratory of Sensory and Cognitive Neuroscience, Aix-Marseille University, Marseille, France
  - Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
  - Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Richard Gerum
  - Department of Physics and Center for Vision Research, York University, Toronto, ON, Canada
- Claus Metzner
  - Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
  - Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Andreas Maier
  - Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Patrick Krauss
  - Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
  - Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
  - Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
  - Linguistics Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
6
Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. PMID: 34531717; PMCID: PMC8439542; DOI: 10.3389/fnins.2021.723877. Open access.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This "electro-haptic stimulation" improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
  - University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
  - Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
7
Lasfargues-Delannoy A, Strelnikov K, Deguine O, Marx M, Barone P. Supra-normal skills in processing of visuo-auditory prosodic information by cochlear-implanted deaf patients. Hear Res 2021; 410:108330. PMID: 34492444; DOI: 10.1016/j.heares.2021.108330.
Abstract
Cochlear-implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension, fusing speech-reading skills with their deficient auditory perception. However, little is known about how CI patients perceive prosodic information relating to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process the paralinguistic prosodic information of multimodal speech, and which visual strategies they employ. A psychophysical assessment was developed in which CI patients and normal-hearing controls (NH) had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared to controls. This study confirmed that prosodic processing is multisensory, but it also revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared to hearing controls, irrespective of age. The study clearly showed that CI patients had a visuo-auditory gain more than three times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory situation through a specific oculomotor exploration of the face: they fixated the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern divided equally between the eyes and mouth. To conclude, our study demonstrated that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that they developed a specific adaptive strategy that contributes directly to the comprehension of speech content.
Affiliation(s)
- Anne Lasfargues-Delannoy
  - Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto Rhino Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Kuzma Strelnikov
  - Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, France
- Olivier Deguine
  - Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto Rhino Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Mathieu Marx
  - Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
  - CHU Toulouse, Service d'Oto Rhino Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Pascal Barone
  - Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France
  - UMR 5549 CerCo, UPS CNRS, France
8
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. PMID: 34177440; PMCID: PMC8219940; DOI: 10.3389/fnins.2021.581414. Open access.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users, and there is also evidence that it could enhance music perception. We review the evidence that EHS enhances CI listening and discuss key areas where further research is required, including the neural basis of EHS enhancement, the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, whether for biomedical or healthcare-access reasons. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
  - Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
  - Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
  - Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
9
Schilling A, Tziridis K, Schulze H, Krauss P. The stochastic resonance model of auditory perception: A unified explanation of tinnitus development, Zwicker tone illusion, and residual inhibition. Prog Brain Res 2021; 262:139-157. PMID: 33931176; DOI: 10.1016/bs.pbr.2021.01.025.
Abstract
Stochastic resonance (SR) has been proposed to play a major role in auditory perception and to maintain optimal information transmission from the cochlea to the auditory system. In this way, the auditory system could adapt to changes in auditory input on second or even sub-second timescales. In the case of reduced auditory input, somatosensory projections to the dorsal cochlear nucleus would be disinhibited in order to improve hearing thresholds by means of SR. As a side effect, the increased somatosensory input, corresponding to the observed tinnitus-associated neuronal hyperactivity, is then perceived as tinnitus. In addition, the model can also explain transient phantom tone perceptions occurring after ear plugging, as well as the Zwicker tone illusion. Conversely, the model predicts that during stimulation with acoustic noise, SR would not be needed to optimize information transmission; somatosensory noise would consequently be tuned down, resulting in a transient disappearance of tinnitus, an effect referred to as residual inhibition.
Affiliation(s)
- Achim Schilling
  - Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
  - Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Konstantin Tziridis
  - Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
- Holger Schulze
  - Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
- Patrick Krauss
  - Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
  - Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
  - FAU Linguistics Lab, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
10
Fletcher MD, Thini N, Perry SW. Enhanced Pitch Discrimination for Cochlear Implant Users with a New Haptic Neuroprosthetic. Sci Rep 2020; 10:10354. PMID: 32587354; PMCID: PMC7316732; DOI: 10.1038/s41598-020-67140-0. Open access.
Abstract
The cochlear implant (CI) is the most widely used neuroprosthesis, restoring hearing for more than half a million severely-to-profoundly hearing-impaired people. However, CIs still have significant limitations, with users having severely impaired pitch perception. Pitch is critical to speech understanding (particularly in noise), to separating different sounds in complex acoustic environments, and to music enjoyment. In recent decades, researchers have attempted to overcome shortcomings in CIs by improving implant technology and surgical techniques, but with limited success. In the current study, we take the new approach of providing missing pitch information through haptic stimulation on the forearm, using our new mosaicOne_B device. The mosaicOne_B extracts pitch information in real time and presents it via 12 motors arranged in ascending pitch along the forearm, with each motor representing a different pitch. In normal-hearing subjects listening to CI-simulated audio, participants were able to discriminate pitch differences at a performance level similar to that achieved by normal-hearing listeners. Furthermore, the device was shown to be highly robust to background noise. This enhanced pitch discrimination has the potential to significantly improve music perception, speech recognition, and speech prosody perception in CI users.
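A pitch-to-motor mapping of the kind this abstract describes can be sketched in a few lines. The abstract only states that 12 motors are arranged in ascending pitch along the forearm; the frequency range, the clamping behavior, and the log spacing below are editorial assumptions, not the mosaicOne_B's actual specification.

```python
import math

# Assumed parameters for illustration; the paper does not specify them.
N_MOTORS = 12
F_MIN, F_MAX = 110.0, 880.0  # hypothetical three-octave input range (Hz)

def motor_for_pitch(f0_hz):
    """Map an estimated fundamental frequency (Hz) to a motor index 0-11,
    log-spaced so that each motor covers an equal musical interval."""
    f0 = min(max(f0_hz, F_MIN), F_MAX)  # clamp to the assumed range
    frac = math.log(f0 / F_MIN) / math.log(F_MAX / F_MIN)
    return min(int(frac * N_MOTORS), N_MOTORS - 1)
```

Log spacing mirrors musical pitch perception: doubling the frequency (one octave) always moves the percept the same distance along the forearm, regardless of the starting note.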
Affiliation(s)
- Mark D Fletcher
  - University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Nour Thini
  - Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Samuel W Perry
  - University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom