1. Weglage A, Layer N, Meister H, Müller V, Lang-Roth R, Walger M, Sandmann P. Changes in visually and auditory attended audiovisual speech processing in cochlear implant users: A longitudinal ERP study. Hear Res 2024; 447:109023. PMID: 38733710. DOI: 10.1016/j.heares.2024.109023.
Abstract
Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated for by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study which systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. When compared to the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response at the N1 latency range (90-150 ms) that was characterized by a decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed a stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex were significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually as compared to auditory attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
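The theta- and beta-band measures reported above are band-limited power estimates. As a rough illustration only (not the authors' pipeline, which may use wavelet or multitaper time-frequency transforms), band power for a single EEG epoch could be computed along these lines; the sampling rate and epoch are placeholder assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(epoch: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Mean Welch-PSD power of one EEG epoch within [f_lo, f_hi] Hz."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(fs)))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].mean())

fs = 500.0                                # assumed sampling rate (Hz)
epoch = np.random.randn(int(fs))          # placeholder 1-s epoch
theta = band_power(epoch, fs, 4.0, 8.0)   # theta band, as in the N1-range result
beta = band_power(epoch, fs, 13.0, 30.0)  # beta band, as in the attention effect
```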
Affiliation(s)
- Anna Weglage: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany
- Natalie Layer: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany
- Hartmut Meister: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
- Verena Müller: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany
- Ruth Lang-Roth: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany
- Martin Walger: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, Faculty of Medicine and University Hospital Cologne, University of Cologne, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
- Pascale Sandmann: Department of Otolaryngology, Head and Neck Surgery, Carl von Ossietzky University of Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Germany

2. Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024; 95:750-765. PMID: 37843038. DOI: 10.1111/cdev.14022.
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study of Mandarin-speaking 3- to 4-year-old, 5- to 6-year-old, and 7- to 8-year-old children, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable. For the incongruent stimuli, a developmental shift was observed, as 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses than the older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yicheng Rong: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Gang Peng: Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China

3. Fletcher MD, Perry SW, Thoidis I, Verschuur CA, Goehring T. Improved tactile speech robustness to background noise with a dual-path recurrent neural network noise-reduction method. Sci Rep 2024; 14:7357. PMID: 38548750. PMCID: PMC10978864. DOI: 10.1038/s41598-024-57312-7.
Abstract
Many people with hearing loss struggle to understand speech in noisy environments, making noise robustness critical for hearing-assistive devices. Recently developed haptic hearing aids, which convert audio to vibration, can improve speech-in-noise performance for cochlear implant (CI) users and assist those unable to access hearing-assistive devices. They are typically body-worn rather than head-mounted, allowing additional space for batteries and microprocessors, and so can deploy more sophisticated noise-reduction techniques. The current study assessed whether a real-time-feasible dual-path recurrent neural network (DPRNN) can improve tactile speech-in-noise performance. Audio was converted to vibration on the wrist using a vocoder method, either with or without noise reduction. Performance was tested for speech in multi-talker noise (recorded at a party) at a 2.5-dB signal-to-noise ratio. An objective assessment showed that the DPRNN improved the scale-invariant signal-to-distortion ratio by 8.6 dB and substantially outperformed a traditional noise-reduction method (log-MMSE). A behavioural assessment in 16 participants showed that the DPRNN improved tactile-only sentence identification in noise by 8.2%. This suggests that advanced techniques like the DPRNN could substantially improve outcomes with haptic hearing aids. Low-cost haptic devices could soon be an important supplement to hearing-assistive devices such as CIs or offer an alternative for people who cannot access CI technology.
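The scale-invariant signal-to-distortion ratio (SI-SDR) used as the objective measure above is a standard speech-enhancement metric. A minimal sketch of its computation, assuming a time-aligned clean reference and enhanced estimate as NumPy arrays:

```python
import numpy as np

def si_sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Scale-invariant signal-to-distortion ratio in dB (Le Roux et al., 2019).

    The reference is optimally scaled toward the estimate, so the metric is
    insensitive to overall gain differences between the two signals.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference      # scaled reference ("clean" component)
    residual = estimate - target    # everything else: noise plus distortion
    return float(10.0 * np.log10(np.dot(target, target) / np.dot(residual, residual)))
```

An improvement such as the reported 8.6 dB would correspond to si_sdr(clean, enhanced) - si_sdr(clean, noisy), averaged over test utterances.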
Affiliation(s)
- Mark D Fletcher: University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK; Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry: University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK; Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Iordanis Thoidis: School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Carl A Verschuur: University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Tobias Goehring: MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK

4. McDaniel J, Krimm H, Schuele CM. SLPs' perceptions of language learning myths about children who are DHH. J Deaf Stud Deaf Educ 2024; 29:245-257. PMID: 37742092. PMCID: PMC10950421. DOI: 10.1093/deafed/enad043.
Abstract
This article reports on speech-language pathologists' (SLPs') knowledge related to myths about spoken language learning of children who are deaf and hard of hearing (DHH). The broader study was designed as a step toward narrowing the research-practice gap and providing effective, evidence-based language services to children. In the broader study, SLPs (n = 106) reported their agreement/disagreement with myth statements and true statements (n = 52) about 7 clinical topics related to speech and language development. For the current report, participant responses to 7 statements within the DHH topic were analyzed. Participants exhibited a relative strength in bilingualism knowledge for spoken languages and a relative weakness in audiovisual integration knowledge. Much individual variation was observed. Participants' responses were more likely to align with current evidence about bilingualism if the participants had less experience as an SLP. The findings provide guidance on prioritizing topics for speech-language pathology preservice and professional development.
Affiliation(s)
- Jena McDaniel: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States
- Hannah Krimm: Department of Communication Sciences and Special Education, University of Georgia, Athens, United States
- C Melanie Schuele: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States

5. Schulte A, Marozeau J, Ruhe A, Büchner A, Kral A, Innes-Brown H. Improved speech intelligibility in the presence of congruent vibrotactile speech input. Sci Rep 2023; 13:22657. PMID: 38114599. PMCID: PMC10730903. DOI: 10.1038/s41598-023-48893-w.
Abstract
Vibrotactile stimulation is believed to enhance auditory speech perception, offering potential benefits for cochlear implant (CI) users, who may rely on compensatory sensory strategies. Our study advances previous research by directly comparing tactile speech intelligibility enhancements in normal-hearing (NH) and CI participants using the same paradigm. Moreover, we controlled for stimulus-non-specific, excitatory effects through an incongruent audio-tactile control condition that did not contain any speech-relevant information. In addition to this incongruent audio-tactile condition, we presented sentences in an auditory-only and a congruent audio-tactile condition, with the congruent tactile stimulus providing low-frequency envelope information via a vibrating probe on the index fingertip. The study involved 23 NH listeners and 14 CI users. In both groups, significant tactile enhancements were observed for congruent tactile stimuli (5.3% for NH and 5.4% for CI participants), but not for incongruent tactile stimulation. These findings replicate previously observed tactile enhancement effects. Juxtaposing our study with previous research, the informational content of the tactile stimulus emerges as a modulator of intelligibility: generally, congruent stimuli enhanced, non-matching tactile stimuli reduced, and neutral stimuli did not change test outcomes. We conclude that the temporal cues provided by congruent vibrotactile stimuli may aid in parsing continuous speech signals into syllables and words, consequently leading to the observed improvements in intelligibility.
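A congruent stimulus of the kind described above carries the low-frequency speech envelope. One common way to derive such a signal is sketched below; the Hilbert-envelope approach, 50 Hz cutoff, and 230 Hz carrier are illustrative assumptions, not the authors' parameters:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def envelope_to_vibration(speech: np.ndarray, fs: int,
                          carrier_hz: float = 230.0,
                          cutoff_hz: float = 50.0) -> np.ndarray:
    """Map a speech waveform to an amplitude-modulated vibrotactile drive signal."""
    env = np.abs(hilbert(speech))                    # instantaneous amplitude envelope
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    env = np.clip(sosfiltfilt(sos, env), 0.0, None)  # smooth, keep non-negative
    t = np.arange(len(speech)) / fs
    return env * np.sin(2.0 * np.pi * carrier_hz * t)  # modulate a tactile carrier
```

In practice the carrier frequency would be chosen near the sensitivity peak of the skin's Pacinian channel, and the output scaled to the probe's safe displacement range.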
Affiliation(s)
- Alina Schulte: Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany; Eriksholm Research Center, Oticon A/S, Snekkersten, Denmark
- Jeremy Marozeau: Music and Cochlear Implants Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Anna Ruhe: Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andreas Büchner: Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Andrej Kral: Department of Experimental Otology of the Clinics of Otolaryngology, Hannover Medical School, Hannover, Germany
- Hamish Innes-Brown: Eriksholm Research Center, Oticon A/S, Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark

6. Alemi R, Wolfe J, Neumann S, Manning J, Towler W, Koirala N, Gracco VL, Deroche M. Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS. Brain Res Bull 2023; 205:110817. PMID: 37989460. DOI: 10.1016/j.brainresbull.2023.110817.
Abstract
Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In the EEG data, visual-evoked potentials were captured in occipital regions in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables, but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In the fNIRS data, each modality induced corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CIs. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition for good language and literacy.
Affiliation(s)
- Razieh Alemi: Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Jace Wolfe: Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Will Towler: Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Nabin Koirala: Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Mickael Deroche: Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada

7. Moro SS, Qureshi FA, Steeves JKE. Perception of the McGurk effect in people with one eye depends on whether the eye is removed during infancy or adulthood. Front Neurosci 2023; 17:1217831. PMID: 37901426. PMCID: PMC10603249. DOI: 10.3389/fnins.2023.1217831.
Abstract
Background: The visual system is not fully mature at birth and continues to develop throughout infancy until it reaches adult levels through late childhood and adolescence. Disruption of vision during this postnatal period and prior to visual maturation results in deficits of visual processing and in turn may affect the development of complementary senses. Studying people who have had one eye surgically removed during early postnatal development is a useful model for understanding timelines of sensory development and the role of binocularity in visual system maturation. Adaptive auditory and audiovisual plasticity following the loss of one eye early in life has been observed for both low- and high-level visual stimuli. Notably, people who have had one eye removed early in life perceive the McGurk effect much less than binocular controls.
Methods: The current study investigates whether multisensory compensatory mechanisms are also present in people who had one eye removed late in life, after postnatal visual system maturation, by measuring whether they perceive the McGurk effect compared to binocular controls and people who have had one eye removed early in life.
Results: People who had one eye removed late in life perceived the McGurk effect similarly to binocular viewing controls, unlike those who had one eye removed early in life.
Conclusion: This suggests differences in multisensory compensatory mechanisms based on age at surgical eye removal. These results indicate that cross-modal adaptations for the loss of binocularity may be dependent on plasticity levels during cortical development.
Affiliation(s)
- Stefania S. Moro: Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada; The Hospital for Sick Children, Toronto, ON, Canada
- Faizaan A. Qureshi: Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Jennifer K. E. Steeves: Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada; The Hospital for Sick Children, Toronto, ON, Canada

8. Ross P, Williams E, Herbert G, Manning L, Lee B. Turn that music down! Affective musical bursts cause an auditory dominance in children recognizing bodily emotions. J Exp Child Psychol 2023; 230:105632. PMID: 36731279. DOI: 10.1016/j.jecp.2023.105632.
Abstract
Previous work has shown that different sensory channels are prioritized across the life course, with children preferentially responding to auditory information. The aim of the current study was to investigate whether the mechanism that drives this auditory dominance in children occurs at the level of encoding (overshadowing) or when the information is integrated to form a response (response competition). Given that response competition depends on a modality integration attempt, a combination of stimuli that could not be integrated was used, so that if children's auditory dominance persisted, this would provide evidence for the overshadowing mechanism over the response-competition mechanism. Younger children (≤7 years), older children (8-11 years), and adults (18+ years) were asked to recognize the emotion (happy or fearful) in either nonvocal auditory musical emotional bursts or human visual bodily expressions of emotion in three conditions: unimodal, congruent bimodal, and incongruent bimodal. We found that children performed significantly worse at recognizing emotional bodies when they heard (and were told to ignore) musical emotional bursts. This provides the first evidence for auditory dominance in both younger and older children when presented with modally incongruent emotional stimuli. The continued presence of auditory dominance, despite the lack of modality integration, was taken as supportive evidence for the overshadowing explanation. These findings are discussed in relation to educational considerations, and future sensory dominance investigations and models are proposed.
Affiliation(s)
- Paddy Ross: Department of Psychology, Durham University, Durham DH1 3LE, UK
- Ella Williams: Department of Psychology, Durham University, Durham DH1 3LE, UK; Oxford Neuroscience, University of Oxford, Oxford OX3 9DU, UK
- Gemma Herbert: Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Manning: Department of Psychology, Durham University, Durham DH1 3LE, UK
- Becca Lee: Department of Psychology, Durham University, Durham DH1 3LE, UK

9. Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. PMID: 36990952. PMCID: PMC10121905. DOI: 10.1016/j.tins.2023.02.004.
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits and depends on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and that crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which begin as early as mild-to-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited to improve clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral: Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma: Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA

10. Kral A. Hearing and Cognition in Childhood. Laryngorhinootologie 2023; 102:S3-S11. PMID: 37130527. PMCID: PMC10184669. DOI: 10.1055/a-1973-5087.
Abstract
The human brain shows extensive postnatal development of the cerebral cortex. This development is substantially altered by the absence of auditory input: the formation of cortical synapses in the auditory system is delayed and their degradation is increased. Recent work shows that the synapses responsible for corticocortical processing of stimuli, and for their embedding into multisensory interactions and cognition, are particularly affected. Since the brain is heavily reciprocally interconnected, inborn deafness manifests not only in deficits of auditory processing, but also in cognitive (non-auditory) functions, which are affected differently across individuals. This calls for individualized approaches to the therapy of deafness in childhood.
Affiliation(s)
- Andrej Kral: Institute of AudioNeuroTechnology (VIANNA) and Department of Experimental Otology, Cluster of Excellence Hearing4all, Hannover Medical School, Hannover, Germany (Head of Department and Institute: Prof. Dr. A. Kral); Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, Australia

11. Iqbal ZJ, Shahin AJ, Bortfeld H, Backer KC. The McGurk Illusion: A Default Mechanism of the Auditory System. Brain Sci 2023; 13:510. PMID: 36979322. PMCID: PMC10046462. DOI: 10.3390/brainsci13030510.
Abstract
Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, auditory system, or both default to specific phonemes—those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory-/ba/ paired with visual-/ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals’ transcription of missing phonemes often defaulted to ‘/d/t/th/’, the same phonemes often experienced during the McGurk illusion. Importantly, individuals’ default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.
Affiliation(s)
- Zunaira J. Iqbal: Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
- Antoine J. Shahin: Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA; Health Sciences Research Institute, University of California, Merced, CA 95343, USA
- Heather Bortfeld: Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA; Health Sciences Research Institute, University of California, Merced, CA 95343, USA; Department of Psychological Sciences, University of California, Merced, CA 95353, USA
- Kristina C. Backer: Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA; Health Sciences Research Institute, University of California, Merced, CA 95343, USA

12. Butera IM, Stevenson RA, Gifford RH, Wallace MT. Visually Biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions. Trends Hear 2023; 27:23312165221076681. PMID: 37377212. PMCID: PMC10334005. DOI: 10.1177/23312165221076681.
Abstract
The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date to measure the McGurk effect in this population and the first to test the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result that was concordant with results from the SIFI, where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
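As a toy illustration of the fusion measure described above: raw McGurk fusion is simply the proportion of incongruent trials answered with a fused percept, and one plausible unisensory-based correction scales that proportion by identification accuracy on auditory-only and visual-only trials. The correction shown here is an assumption for illustration, not necessarily the formula the authors applied:

```python
import numpy as np

FUSED = {"da", "tha"}  # fused percepts for auditory /ba/ + visual /ga/

def fusion_rate(responses):
    """Proportion of incongruent McGurk trials reported as a fused percept."""
    return float(np.mean([r in FUSED for r in responses]))

def corrected_fusion(raw, audio_acc, visual_acc):
    """Hypothetical correction: discount fusion that could reflect poor
    unisensory identification rather than genuine AV integration."""
    return raw * audio_acc * visual_acc

responses = ["da", "ba", "tha", "da", "ga"]   # toy responses on 5 trials
raw = fusion_rate(responses)                  # 0.6
print(raw, corrected_fusion(raw, audio_acc=0.95, visual_acc=0.80))
```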
Affiliation(s)
- Iliza M. Butera: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A. Stevenson: Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- René H. Gifford: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T. Wallace: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA

13. von Eiff CI, Frühholz S, Korth D, Guntinas-Lichius O, Schweinberger SR. Crossmodal benefits to vocal emotion perception in cochlear implant users. iScience 2022; 25:105711. PMID: 36578321. PMCID: PMC9791346. DOI: 10.1016/j.isci.2022.105711.
Abstract
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), but this disregards the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and VER was correlated with quality of life. Importantly, CI users showed larger benefits to VER from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate AV stimuli during CI rehabilitation and suggest perspectives of caricaturing for both perceptual trainings and sound processor technology.
Affiliation(s)
- Celina Isabelle von Eiff: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Sascha Frühholz: Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland; Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Daniela Korth: Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
- Stefan Robert Schweinberger: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany

14. Corina DP, Coffey-Corina S, Pierotti E, Bormann B, LaMarr T, Lawyer L, Backer KC, Miller LM. Electrophysiological Examination of Ambient Speech Processing in Children With Cochlear Implants. J Speech Lang Hear Res 2022; 65:3502-3517. PMID: 36037517. PMCID: PMC9913291. DOI: 10.1044/2022_jslhr-22-00004.
Abstract
Purpose: This research examined the expression of cortical auditory evoked potentials in a cohort of children who received cochlear implants (CIs) for treatment of congenital deafness (n = 28) and typically hearing controls (n = 28).
Method: We make use of a novel electroencephalography paradigm that permits the assessment of auditory responses to ambiently presented speech and evaluates the contributions of concurrent visual stimulation on this activity.
Results: Our findings show group differences in the expression of auditory sensory and perceptual event-related potential components occurring in 80- to 200-ms and 200- to 300-ms time windows, with reductions in amplitude and a greater latency difference for CI-using children. Relative to typically hearing children, current source density analysis showed muted responses to concurrent visual stimulation in CI-using children, suggesting less cortical specialization and/or reduced responsiveness to auditory information that limits the detection of the interaction between sensory systems.
Conclusion: These findings indicate that even in the face of early interventions, CI-using children may exhibit disruptions in the development of auditory and multisensory processing.
Affiliation(s)
- David P. Corina: Department of Linguistics, University of California, Davis; Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Elizabeth Pierotti: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Brett Bormann: Center for Mind and Brain, University of California, Davis; Neurobiology, Physiology and Behavior, University of California, Davis
- Todd LaMarr: Center for Mind and Brain, University of California, Davis
- Laurel Lawyer: Center for Mind and Brain, University of California, Davis
- Lee M. Miller: Center for Mind and Brain, University of California, Davis; Neurobiology, Physiology and Behavior, University of California, Davis; Department of Otolaryngology/Head and Neck Surgery, University of California, Davis

15. Cross-Modal Reorganization From Both Visual and Somatosensory Modalities in Cochlear Implanted Children and Its Relationship to Speech Perception. Otol Neurotol 2022; 43:e872-e879. DOI: 10.1097/mao.0000000000003619.
Abstract
Hypothesis: We hypothesized that children with cochlear implants (CIs) who demonstrate cross-modal reorganization by vision also demonstrate cross-modal reorganization by somatosensation, and that these processes are interrelated and impact speech perception.
Background: Cross-modal reorganization, which occurs when a deprived sensory modality's cortical resources are recruited by other intact modalities, has been proposed as a source of variability underlying speech perception in deaf children with CIs. Visual and somatosensory cross-modal reorganization of auditory cortex have been documented separately in CI children, but reorganization in these modalities has not been documented within the same subjects. Our goal was to examine the relationship between cross-modal reorganization from both visual and somatosensory modalities within a single group of CI children.
Methods: We analyzed high-density electroencephalogram responses to visual and somatosensory stimuli and current density reconstruction of brain activity sources. Speech perception in noise testing was performed. Current density reconstruction patterns were analyzed within the entire subject group and across groups of CI children exhibiting good versus poor speech perception.
Results: Positive correlations between visual and somatosensory cross-modal reorganization suggested that neuroplasticity in different sensory systems may be interrelated. Furthermore, CI children with good speech perception did not show recruitment of frontal or auditory cortices during visual processing, unlike CI children with poor speech perception.
Conclusion: Our results reflect changes in cortical resource allocation in pediatric CI users. Cross-modal recruitment of auditory and frontal cortices by vision, and cross-modal reorganization of auditory cortex by somatosensation, may underlie variability in speech and language outcomes in CI children.

16. Li H, Song L, Wang P, Weiss PH, Fink GR, Zhou X, Chen Q. Impaired body-centered sensorimotor transformations in congenitally deaf people. Brain Commun 2022; 4:fcac148. PMID: 35774184. PMCID: PMC9240416. DOI: 10.1093/braincomms/fcac148.
Abstract
Congenital deafness modifies an individual’s daily interaction with the environment and alters the fundamental perception of the external world. How congenital deafness shapes the interface between the internal and external worlds remains poorly understood. To interact efficiently with the external world, visuospatial representations of external target objects need to be effectively transformed into sensorimotor representations with reference to the body. Here, we tested the hypothesis that egocentric body-centred sensorimotor transformation is impaired in congenital deafness. Consistent with this hypothesis, we found that congenital deafness induced impairments in egocentric judgements, associating the external objects with the internal body. These impairments were due to deficient body-centred sensorimotor transformation per se, rather than the reduced fidelity of the visuospatial representations of the egocentric positions. At the neural level, we first replicated the previously well-documented critical involvement of the frontoparietal network in egocentric processing, in both congenitally deaf participants and hearing controls. However, both the strength of neural activity and the intra-network connectivity within the frontoparietal network alone could not account for egocentric performance variance. Instead, the inter-network connectivity between the task-positive frontoparietal network and the task-negative default-mode network was significantly correlated with egocentric performance: the more cross-talking between them, the worse the egocentric judgement. Accordingly, the impaired egocentric performance in the deaf group was related to increased inter-network connectivity between the frontoparietal network and the default-mode network and decreased intra-network connectivity within the default-mode network. The altered neural network dynamics in congenital deafness were observed for both evoked neural activity during egocentric processing and intrinsic neural activity during rest. Our findings thus not only demonstrate the optimal network configurations between the task-positive and -negative neural networks underlying coherent body-centred sensorimotor transformations but also unravel a critical cause (i.e. impaired body-centred sensorimotor transformation) of a variety of hitherto unexplained difficulties in sensory-guided movements the deaf population experiences in their daily life.
Affiliation(s)
- Hui Li: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Li Song: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Pengfei Wang: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China
- Peter H. Weiss: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany; Department of Neurology, University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Gereon R. Fink: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany; Department of Neurology, University Hospital Cologne, University of Cologne, 50937 Cologne, Germany
- Xiaolin Zhou: Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, 200062 Shanghai, China
- Qi Chen: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Strasse, 52428 Jülich, Germany; Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, China

17. Bruns P, Li L, Guerreiro MJ, Shareef I, Rajendran SS, Pitchaimuthu K, Kekunnaya R, Röder B. Audiovisual spatial recalibration but not integration is shaped by early sensory experience. iScience 2022; 25:104439. PMID: 35874923. PMCID: PMC9301879. DOI: 10.1016/j.isci.2022.104439.
Affiliation(s)
- Patrick Bruns: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
- Lux Li: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Department of Epidemiology and Biostatistics, Schulich School of Medicine & Dentistry, Western University, London, ON N6G 2M1, Canada
- Maria J.S. Guerreiro: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, University of Oldenburg, 26111 Oldenburg, Germany
- Idris Shareef: Jasti V Ramanamma Children's Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Siddhart S. Rajendran: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Jasti V Ramanamma Children's Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Kabilan Pitchaimuthu: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany; Jasti V Ramanamma Children's Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Ramesh Kekunnaya: Jasti V Ramanamma Children's Eye Care Centre, LV Prasad Eye Institute, Hyderabad, Telangana 500034, India
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany

18. Radecke JO, Schierholz I, Kral A, Lenarz T, Murray MM, Sandmann P. Distinct multisensory perceptual processes guide enhanced auditory recognition memory in older cochlear implant users. Neuroimage Clin 2022; 33:102942. PMID: 35033811. PMCID: PMC8762088. DOI: 10.1016/j.nicl.2022.102942.
Highlights
- Congruent audio-visual encoding enhances later auditory processing in the elderly.
- CI users benefit from additional congruent visual information, similar to controls.
- CI users show distinct neurophysiological processes, compared to controls.
- CI users show an earlier modulation of event-related topographies, compared to controls.

Abstract
In naturalistic situations, sounds are often perceived in conjunction with matching visual impressions. For example, we see and hear the neighbor's dog barking in the garden. Still, there is a good chance that we recognize the neighbor's dog even when we only hear it barking, but do not see it behind the fence. Previous studies with normal-hearing (NH) listeners have shown that the audio-visual presentation of a perceptual object (like an animal) increases the probability of recognizing this object later on, even if the repeated presentation of this object occurs in a purely auditory condition. In patients with a cochlear implant (CI), however, the electrical hearing of sounds is impoverished, and the ability to recognize perceptual objects in auditory conditions is significantly limited. It is currently not well understood whether CI users, like NH listeners, show a multisensory facilitation for auditory recognition. The present study used event-related potentials (ERPs) and a continuous recognition paradigm with auditory and audio-visual stimuli to test the prediction that CI users show a benefit from audio-visual perception. Indeed, the congruent audio-visual context resulted in an improved recognition ability of objects in an auditory-only condition, both in the NH listeners and the CI users. The ERPs revealed a group-specific pattern of voltage topographies and correlations between these ERP maps and the auditory recognition ability, indicating a different processing of congruent audio-visual stimuli in CI users when compared to NH listeners. Taken together, our results point to distinct cortical processing of naturalistic audio-visual objects in CI users and NH listeners, which however allows both groups to improve the recognition ability of these objects in a purely auditory context. Our findings are of relevance for future clinical research since audio-visual perception might also improve auditory rehabilitation after cochlear implantation.
Affiliation(s)
- Jan-Ole Radecke: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Germany; Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, Germany
- Irina Schierholz: Department of Otolaryngology, Hannover Medical School, Hannover, Germany; Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
- Andrej Kral: Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, Germany
- Thomas Lenarz: Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Micah M Murray: The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pascale Sandmann: Department of Otorhinolaryngology, University of Cologne, Cologne, Germany

19. Li J, Mayr R, Zhao F. Speech production in Mandarin-speaking children with cochlear implants: a systematic review. Int J Audiol 2021; 61:711-719. PMID: 34620034. DOI: 10.1080/14992027.2021.1978567.
Abstract
Objective: This study aimed to systematically review and critically appraise the literature describing the phonetic characteristics and accuracy of the consonants, vowels, and tones produced by Mandarin-speaking children with cochlear implants (CIs).
Design: The protocol for this review was designed in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The EBSCOhost, PubMed, Scopus, PsycINFO, and ProQuest Central databases were searched for relevant articles that met the inclusion criteria.
Study sample: A total of 18 journal papers were included in this review.
Results: The results revealed that Mandarin-speaking children with CIs perform consistently more poorly in their production of consonants, in particular fricatives, have a smaller and less well-defined vowel space, and exhibit greater difficulties in tone realisation, notably T2 and T3, when compared to their normal-hearing (NH) peers. The results from acoustic and accuracy analyses are negatively correlated with age at implantation, but largely positively correlated with hearing age.
Conclusions: The findings of this review highlight the factors that influence consonant, vowel, and tone production in Mandarin-speaking children with CIs, thereby providing critical information for clinicians and researchers working with this population.
Affiliation(s)
- Jiaying Li: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, United Kingdom
- Robert Mayr: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, United Kingdom
- Fei Zhao: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, United Kingdom

20. Carlyon RP, Goehring T. Cochlear Implant Research and Development in the Twenty-first Century: A Critical Update. J Assoc Res Otolaryngol 2021; 22:481-508. PMID: 34432222. PMCID: PMC8476711. DOI: 10.1007/s10162-021-00811-5.
Abstract
Cochlear implants (CIs) are the world's most successful sensory prosthesis and have been the subject of intense research and development in recent decades. We critically review the progress in CI research, and its success in improving patient outcomes, from the turn of the century to the present day. The review focuses on the processing, stimulation, and audiological methods that have been used to try to improve speech perception by human CI listeners, and on fundamental new insights into the response of the auditory system to electrical stimulation. The introduction of directional microphones and of new noise-reduction and pre-processing algorithms has produced robust and sometimes substantial improvements. Novel speech-processing algorithms, the use of current-focusing methods, and individualised (patient-by-patient) deactivation of subsets of electrodes have produced more modest improvements. We argue that incremental advances have been made and will continue to be made, that collectively these may substantially improve patient outcomes, but that the modest size of each individual advance will require greater attention to experimental design and statistical power. We also briefly discuss the potential and limitations of promising technologies that are currently being developed in animal models, and suggest strategies for researchers to collectively maximise the potential of CIs to improve hearing in a wide range of listening situations.
Collapse
Affiliation(s)
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK.
- Tobias Goehring
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK
21
Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. [PMID: 34531717 PMCID: PMC8439542 DOI: 10.3389/fnins.2021.723877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 08/11/2021] [Indexed: 01/07/2023] Open
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
22
Motor Circuit and Superior Temporal Sulcus Activities Linked to Individual Differences in Multisensory Speech Perception. Brain Topogr 2021; 34:779-792. [PMID: 34480635 DOI: 10.1007/s10548-021-00869-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 08/24/2021] [Indexed: 10/20/2022]
Abstract
Integrating multimodal information into a unified percept is a fundamental human capacity. The McGurk effect is a remarkable multisensory illusion in which incongruent auditory and visual syllables give rise to a percept that differs from either input. However, not all listeners perceive the McGurk illusion to the same degree. The neural basis for individual differences in multisensory integration and syllabic perception remains largely unclear. To probe the possible involvement of specific neural circuits in individual differences in multisensory speech perception, we first implemented a behavioural experiment to examine McGurk susceptibility. Then, functional magnetic resonance imaging was performed in 63 participants to measure brain activity in response to non-McGurk audiovisual syllables. We revealed significant individual variability in McGurk illusion perception. Moreover, we found significant differential activations of the auditory and visual regions and the left superior temporal sulcus (STS), as well as multiple motor areas, between strong and weak McGurk perceivers. Importantly, individual engagement of the STS and motor areas could specifically predict behavioural McGurk susceptibility, unlike the sensory regions. These findings suggest that distinct multimodal integration in the STS, together with coordinated phonemic modulatory processes in motor circuits, may serve as a neural substrate for interindividual differences in multisensory speech perception.
23
van de Rijt LPH, van Opstal AJ, van Wanrooij MM. Multisensory Integration-Attention Trade-Off in Cochlear-Implanted Deaf Individuals. Front Neurosci 2021; 15:683804. [PMID: 34393707 PMCID: PMC8358073 DOI: 10.3389/fnins.2021.683804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 06/21/2021] [Indexed: 11/13/2022] Open
Abstract
The cochlear implant (CI) allows profoundly deaf individuals to partially recover hearing. Still, due to the coarse acoustic information provided by the implant, CI users have considerable difficulties in recognizing speech, especially in noisy environments. CI users therefore rely heavily on visual cues to augment speech recognition, more so than normal-hearing individuals. However, it is unknown how attention to one (focused) or both (divided) modalities plays a role in multisensory speech recognition. Here we show that unisensory speech listening and reading were negatively impacted in divided-attention tasks for CI users—but not for normal-hearing individuals. Our psychophysical experiments revealed that, as expected, listening thresholds were consistently better for the normal-hearing, while lipreading thresholds were largely similar for the two groups. Moreover, audiovisual speech recognition for normal-hearing individuals could be described well by probabilistic summation of auditory and visual speech recognition, while CI users were better integrators than expected from statistical facilitation alone. Our results suggest that this benefit in integration comes at a cost. Unisensory speech recognition is degraded for CI users when attention needs to be divided across modalities. We conjecture that CI users exhibit an integration-attention trade-off. They focus solely on a single modality during focused-attention tasks, but need to divide their limited attentional resources in situations with uncertainty about the upcoming stimulus modality. We argue that in order to determine the benefit of a CI for speech recognition, situational factors need to be discounted by presenting speech in realistic or complex audiovisual environments.
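The probabilistic-summation benchmark mentioned here has a simple closed form: if the auditory and visual channels are statistically independent, the predicted audiovisual recognition score is P_av = 1 - (1 - P_a)(1 - P_v). A minimal sketch with invented numbers, not the study's data:

    def probability_summation(p_a, p_v):
        """AV score predicted if auditory and visual recognition are independent."""
        return 1.0 - (1.0 - p_a) * (1.0 - p_v)

    p_a, p_v = 0.40, 0.30          # hypothetical unisensory scores
    p_av_observed = 0.70           # hypothetical audiovisual score

    p_av_predicted = probability_summation(p_a, p_v)  # 0.58
    # Observed > predicted indicates integration beyond statistical facilitation,
    # the pattern the authors report for CI users.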
Affiliation(s)
- Luuk P H van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
24
Prince P, Paul BT, Chen J, Le T, Lin V, Dimitrijevic A. Neural correlates of visual stimulus encoding and verbal working memory differ between cochlear implant users and normal-hearing controls. Eur J Neurosci 2021; 54:5016-5037. [PMID: 34146363 PMCID: PMC8457219 DOI: 10.1111/ejn.15365] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/29/2022]
Abstract
A common concern for individuals with severe‐to‐profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross‐modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age‐matched normal‐hearing (NH) controls. While we recorded the high‐density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task where sets of letters and numbers were presented visually and then recalled at a later time. Results suggested that CI users had comparable behavioural working memory performance compared with NH. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual‐evoked activity in auditory and visual cortices, larger modulations of neural oscillations and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross‐modal and intramodal plasticity.
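For readers unfamiliar with connectivity metrics of this kind, one simple estimator of coupling between a frontal and a temporal channel is magnitude-squared coherence; the sketch below uses SciPy with synthetic signals and placeholder parameters, and is not the paper's analysis pipeline.

    import numpy as np
    from scipy.signal import coherence

    fs = 250                                   # sampling rate (Hz), placeholder
    rng = np.random.default_rng(0)
    frontal = rng.standard_normal(10 * fs)     # stand-in for a frontal EEG channel
    temporal = 0.6 * frontal + rng.standard_normal(10 * fs)  # correlated temporal channel

    f, cxy = coherence(frontal, temporal, fs=fs, nperseg=2 * fs)
    theta = (f >= 4) & (f <= 8)
    print(f"mean theta-band frontotemporal coherence: {cxy[theta].mean():.2f}")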
Affiliation(s)
- Priyanka Prince
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Brandon T Paul
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Department of Psychology, Ryerson University, Toronto, Ontario, Canada
- Joseph Chen
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Trung Le
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Vincent Lin
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
25
Finkl T, Hahne A, Friederici AD, Gerber J, Mürbe D, Anwander A. Language Without Speech: Segregating Distinct Circuits in the Human Brain. Cereb Cortex 2021; 30:812-823. [PMID: 31373629 DOI: 10.1093/cercor/bhz128] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Revised: 05/08/2019] [Accepted: 05/20/2019] [Indexed: 01/09/2023] Open
Abstract
Language is a fundamental part of human cognition. The question of whether language is processed independently of speech, however, is still heavily discussed. The absence of speech in deaf signers offers the opportunity to disentangle language from speech in the human brain. Using probabilistic tractography, we compared brain structural connectivity of adult deaf signers who had learned sign language early in life to that of matched hearing controls. Quantitative comparison of the connectivity profiles revealed that the core language tracts did not differ between signers and controls, confirming that language is independent of speech. In contrast, pathways involved in the production and perception of speech displayed lower connectivity in deaf signers compared to hearing controls. These differences were located in tracts towards the left pre-supplementary motor area and the thalamus when seeding in Broca's area, and in ipsilateral parietal areas and the precuneus with seeds in left posterior temporal regions. Furthermore, the interhemispheric connectivity between the auditory cortices was lower in the deaf than in the hearing group, underlining the importance of the transcallosal connection for early auditory processes. The present results provide evidence for a functional segregation of the neural pathways for language and speech.
Affiliation(s)
- Theresa Finkl
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Anja Hahne
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Johannes Gerber
- Neuroradiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe
- Department of Audiology and Phoniatrics, Charité-Universitätsmedizin, Berlin, Germany
- Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
26
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. [PMID: 34177440 PMCID: PMC8219940 DOI: 10.3389/fnins.2021.581414] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 04/29/2021] [Indexed: 12/12/2022] Open
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
27
Abstract
Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & Macdonald (Nature 264, 746-748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ while watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses. In the current study, we varied task (forced-choice vs. open-ended), stimulus set (including /d/ exemplars vs. not), and data-collection environment (lab vs. Mechanical Turk) to investigate the robustness of the McGurk effect. Across experiments using the same stimuli, we found fusion responses ranging from 10% to 60%, showing large variability in the likelihood of experiencing the McGurk effect across factors that are unrelated to the perceptual information provided by the stimuli. We therefore argue that, rather than being a robust perceptual illusion, the McGurk effect exists only for some individuals under specific task conditions. Significance: This series of studies re-evaluates the classic McGurk effect, which shows the relevance of visual cues to speech perception. We highlight the importance of taking into account subject variables and task differences, and challenge future researchers to think carefully about the perceptual basis of the McGurk effect, how it is defined, and what it can tell us about audiovisual integration in speech.
28
Cartocci G, Giorgi A, Inguscio BMS, Scorpecci A, Giannantonio S, De Lucia A, Garofalo S, Grassia R, Leone CA, Longo P, Freni F, Malerba P, Babiloni F. Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study. Front Neurosci 2021; 15:608156. [PMID: 33767607 PMCID: PMC7985439 DOI: 10.3389/fnins.2021.608156] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Accepted: 02/01/2021] [Indexed: 12/21/2022] Open
Abstract
In deaf children, huge emphasis has been placed on language; however, the decoding and production of emotional cues are of pivotal importance for communication. Concerning the neurophysiological correlates of emotional processing, gamma band activity appears to be a useful marker for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following questions were investigated: (i) whether the processing of emotional auditory stimuli differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To address these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better performance in NH than in UCI children in the recognition of emotional states. The UCI group showed a higher gamma activity lateralization index (LI) (relatively higher right-hemisphere activity) than the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. Such observations could be explained by a deficit in UCI children in engaging the left hemisphere for more demanding emotional tasks, or alternatively by a higher degree of conscious elaboration in UCI than in NH children. Additionally, for the UCI group there was no difference in gamma activity between the CI side and the contralateral side, but higher gamma activity was found in the right than in the left hemisphere. Therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was found between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting a sensitive period for CI surgery for the development of optimal emotion recognition skills.
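The lateralization index (LI) in studies of this kind is commonly computed from band power over homologous left/right sites as (right - left) / (right + left), so positive values indicate relatively greater right-hemisphere activity. A hedged sketch; the paper's exact montage and normalization are not reproduced here.

    def lateralization_index(power_right, power_left):
        """(R - L) / (R + L): +1 fully right-lateralized, -1 fully left-lateralized."""
        return (power_right - power_left) / (power_right + power_left)

    # Hypothetical gamma-band power (arbitrary units) over right/left sites.
    li_uci = lateralization_index(12.0, 8.0)   # 0.20: right-shifted, as in the UCI group
    li_nh = lateralization_index(9.5, 9.0)     # 0.03: roughly balanced
    print(li_uci, li_nh)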
Affiliation(s)
- Giulia Cartocci
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Andrea Giorgi
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Bianca M S Inguscio
- BrainSigns Srl, Rome, Italy; Cochlear Implant Unit, Department of Sensory Organs, Sapienza University of Rome, Rome, Italy
- Alessandro Scorpecci
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Sara Giannantonio
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Antonietta De Lucia
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Sabina Garofalo
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Patrizia Longo
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Fabio Babiloni
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy; Department of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, China
29
Dorn K, Cauvet E, Weinert S. A cross-linguistic study of multisensory perceptual narrowing in German and Swedish infants during the first year of life. Infant Child Dev 2021. [DOI: 10.1002/icd.2217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Katharina Dorn
- Department of Developmental Psychology, Otto-Friedrich University Bamberg, Germany
- Elodie Cauvet
- Department of Women's and Children's Health, Karolinska Institute of Neurodevelopmental Disorders (KIND), Stockholm, Sweden
- Sabine Weinert
- Department of Developmental Psychology, Otto-Friedrich University Bamberg, Germany
30
Stiles NRB, Patel VR, Weiland JD. Multisensory perception in Argus II retinal prosthesis patients: Leveraging auditory-visual mappings to enhance prosthesis outcomes. Vision Res 2021; 182:58-68. [PMID: 33607599 DOI: 10.1016/j.visres.2021.01.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 01/17/2021] [Accepted: 01/18/2021] [Indexed: 11/18/2022]
Abstract
Crossmodal mappings associate features (such as spatial location) between audition and vision, thereby aiding sensory binding and perceptual accuracy. Previously, it has been unclear whether patients with artificial vision will develop crossmodal mappings despite the low spatial and temporal resolution of their visual perception (particularly in light of the remodeling of the retina and visual cortex that takes place during decades of vision loss). To address this question, we studied crossmodal mappings psychophysically in Retinitis Pigmentosa patients with partial visual restoration by means of Argus II retinal prostheses, which incorporate an electrode array implanted on the retinal surface that stimulates still-viable ganglion cells with a video stream from a head-mounted camera. We found that Argus II patients (N = 10) exhibit significant crossmodal mappings between auditory location and visual location, and between auditory pitch and visual elevation, equivalent to those of age-matched sighted controls (N = 10). Furthermore, Argus II patients (N = 6) were able to use crossmodal mappings to locate a visual target more quickly with auditory cueing than without. Overall, restored artificial vision was shown to interact with audition via crossmodal mappings, which implies that the reorganization during blindness and the limitations of artificial vision did not prevent the relearning of crossmodal mappings. In particular, cueing based on crossmodal mappings was shown to improve visual search with a retinal prosthesis. This result represents a key first step toward leveraging crossmodal interactions for improved patient visual functionality.
Affiliation(s)
- Noelle R B Stiles
- Department of Ophthalmology, University of Southern California, 1450 San Pablo Street, Los Angeles, CA 90033, USA; Department of Biomedical Engineering, University of Michigan, 2800 Plymouth Road, Ann Arbor, MI 48109, USA.
- Vivek R Patel
- Department of Ophthalmology, University of Southern California, 1450 San Pablo Street, Los Angeles, CA 90033, USA
- James D Weiland
- Department of Biomedical Engineering, University of Michigan, 2800 Plymouth Road, Ann Arbor, MI 48109, USA; Department of Ophthalmology and Visual Sciences, University of Michigan, 1000 Wall Street, Ann Arbor, MI 48109, USA
31
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020; 116:494-507. [DOI: 10.1016/j.neubiorev.2020.06.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 06/07/2020] [Accepted: 06/25/2020] [Indexed: 02/06/2023]
32
He Y, Sun SY, Roy A, Caspi A, Montezuma SR. Improved mobility performance with an artificial vision therapy system using a thermal sensor. J Neural Eng 2020; 17:045011. [PMID: 32650330 DOI: 10.1088/1741-2552/aba4fb] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
OBJECTIVE To evaluate the benefit of integrating thermal imaging into an artificial vision therapy system, the Argus II retinal prosthesis, in simplifying a complex scene and improving mobility performance in the presence of other persons. APPROACH Four Argus II retinal implant users were evaluated on two tasks: to locate and approach target persons in a booth, and to navigate a hallway while avoiding people. They completed the tasks using both the original Argus II system (the 'Argus II camera') and a thermal-integrated Argus II system (the 'thermal camera'). The safety and efficiency of their navigation were evaluated by their walking speed, navigation errors, and the number of collisions. MAIN RESULTS Navigation performance was significantly superior when using the thermal camera compared to using the Argus II camera, including 75% smaller angle of deviation (p < 0.001), 48% smaller error of distance (p < 0.05), and 30% fewer collisions (p < 0.05). The thermal camera also brought the additional benefit of allowing the participants to perform the task in the dark as efficiently as in the light. More importantly, these benefits did not come at a cost of reduced walking speed. SIGNIFICANCE Using the thermal camera in the Argus II system, compared to a visible-light camera, could improve the wearers' navigation performance by helping them better approach or avoid other persons. Adding the thermal camera to future artificial vision therapy systems may complement the visible-light camera and improve the users' mobility safety and efficiency, enhancing their quality of life.
Affiliation(s)
- Yingchen He
- Department of Ophthalmology and Visual Neurosciences, University of Minnesota, Minneapolis, MN, United States of America (author to whom any correspondence should be addressed)
33
Systematic Analysis of Environmental Chemicals That Dysregulate Critical Period Plasticity-Related Gene Expression Reveals Common Pathways That Mimic Immune Response to Pathogen. Neural Plast 2020; 2020:1673897. [PMID: 32454811 PMCID: PMC7222500 DOI: 10.1155/2020/1673897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Accepted: 02/04/2020] [Indexed: 11/22/2022] Open
Abstract
The tens of thousands of industrial and synthetic chemicals released into the environment have an unknown but potentially significant capacity to interfere with neurodevelopment. Consequently, there is an urgent need for systematic approaches that can identify disruptive chemicals. Little is known about the impact of environmental chemicals on critical periods of developmental neuroplasticity, in large part, due to the challenge of screening thousands of chemicals. Using an integrative bioinformatics approach, we systematically scanned 2001 environmental chemicals and identified 50 chemicals that consistently dysregulate two transcriptional signatures of critical period plasticity. These chemicals included pesticides (e.g., pyridaben), antimicrobials (e.g., bacitracin), metals (e.g., mercury), anesthetics (e.g., halothane), and other chemicals and mixtures (e.g., vehicle emissions). Application of a chemogenomic enrichment analysis and hierarchical clustering across these diverse chemicals identified two clusters of chemicals with one that mimicked an immune response to pathogen, implicating inflammatory pathways and microglia as a common chemically induced neuropathological process. Thus, we established an integrative bioinformatics approach to systematically scan thousands of environmental chemicals for their ability to dysregulate molecular signatures relevant to critical periods of development.
34
Torppa R, Huotilainen M. Why and how music can be used to rehabilitate and develop speech and language skills in hearing-impaired children. Hear Res 2019; 380:108-122. [DOI: 10.1016/j.heares.2019.06.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/24/2018] [Revised: 06/11/2019] [Accepted: 06/14/2019] [Indexed: 10/26/2022]
35
Fletcher MD, Hadeedi A, Goehring T, Mills SR. Electro-haptic enhancement of speech-in-noise performance in cochlear implant users. Sci Rep 2019; 9:11428. [PMID: 31388053 PMCID: PMC6684551 DOI: 10.1038/s41598-019-47718-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Accepted: 07/17/2019] [Indexed: 11/21/2022] Open
Abstract
Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information ("electro-haptic stimulation"; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using: (1) a tactile signal derived using an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise with and without vibro-tactile stimulation of the wrist in CI users, before and after a short training regime. No effect of EHS was found before training, but after training EHS improved the number of words correctly identified by an average of 8.3 percentage points, with some users improving by more than 20 percentage points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.
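For intuition only: a real-time-capable tactile signal of the broad kind described here can be built by extracting the speech amplitude envelope with a causal filter and using it to modulate a carrier in the frequency range the skin senses well. This is an assumption-laden stand-in, not the authors' algorithm; the cutoff and carrier values are placeholders.

    import numpy as np
    from scipy.signal import butter, lfilter

    def speech_envelope(audio, fs, cutoff_hz=23.0):
        """Causal (real-time-capable) amplitude envelope: rectify, then low-pass."""
        b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
        return lfilter(b, a, np.abs(audio))

    def to_vibration(envelope, fs, carrier_hz=170.0):
        """Amplitude-modulate a tactile carrier within the skin's sensitive range."""
        t = np.arange(len(envelope)) / fs
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    fs = 16000
    audio = np.random.default_rng(0).standard_normal(fs)  # stand-in for 1 s of speech
    vibration = to_vibration(speech_envelope(audio, fs), fs)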
Affiliation(s)
- Mark D Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Amatullah Hadeedi
- Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
- Tobias Goehring
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom
- Sean R Mills
- Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom
36
Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition. Ear Hear 2019; 41:268-277. [PMID: 31283529 DOI: 10.1097/aud.0000000000000755] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s). DESIGN Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eyetracking monitored participants' gaze to different parts of a talker's face during SIN perception. RESULTS As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus eyes of the face also depended on the gender of the talker. CONCLUSIONS Collectively, results suggest listeners (1) depend heavily on the auditory over visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker with signal degradations, which negatively affects speech perception.
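Matching acoustic and visual degradations "in physical signal-to-noise ratio" rests on the standard power ratio SNR_dB = 10 * log10(P_signal / P_noise). A minimal sketch of scaling a noise signal to hit a target SNR (illustrative, not the study's stimulus code):

    import numpy as np

    def scale_noise_to_snr(signal, noise, target_snr_db):
        """Rescale `noise` so that 10*log10(P_signal / P_noise) hits the target."""
        p_signal = np.mean(signal ** 2)
        p_noise = np.mean(noise ** 2)
        target_ratio = 10 ** (target_snr_db / 10)
        return noise * np.sqrt(p_signal / (p_noise * target_ratio))

    rng = np.random.default_rng(1)
    speech = rng.standard_normal(16000)
    noise = scale_noise_to_snr(speech, rng.standard_normal(16000), target_snr_db=0.0)
    mixture = speech + noise  # a 0 dB SNR mixture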
37
Kral A, Dorman MF, Wilson BS. Neuronal Development of Hearing and Language: Cochlear Implants and Critical Periods. Annu Rev Neurosci 2019; 42:47-65. [DOI: 10.1146/annurev-neuro-080317-061513] [Citation(s) in RCA: 71] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical University, 30625 Hannover, Germany
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales 2109, Australia
- Michael F. Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona 85287, USA
- Blake S. Wilson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Pratt School of Engineering, Duke University, Durham, North Carolina 27708, USA
38
Bayard C, Machart L, Strauß A, Gerber S, Aubanel V, Schwartz JL. Cued Speech Enhances Speech-in-Noise Perception. J Deaf Stud Deaf Educ 2019; 24:223-233. [PMID: 30809665 DOI: 10.1093/deafed/enz003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 01/28/2019] [Accepted: 01/31/2019] [Indexed: 06/09/2023]
Abstract
Speech perception in noise remains challenging for deaf/hard-of-hearing (D/HH) people, even when fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects who had mastered Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with that of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained at signal-to-noise ratios (SNRs) 11 dB higher in D/HH participants than in TH ones. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and the Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
Affiliation(s)
- Antje Strauß
- Zukunftskolleg, FB Sprachwissenschaft, University of Konstanz
39
Bidelman GM, Sigley L, Lewis GA. Acoustic noise and vision differentially warp the auditory categorization of speech. J Acoust Soc Am 2019; 146:60. [PMID: 31370660 PMCID: PMC6786888 DOI: 10.1121/1.5114822] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 06/05/2019] [Accepted: 06/07/2019] [Indexed: 06/10/2023]
Abstract
Speech perception requires grouping acoustic information into meaningful linguistic-phonetic units via categorical perception (CP). Beyond shrinking observers' perceptual space, CP might aid degraded speech perception if categories are more resistant to noise than surface acoustic features. Combining audiovisual (AV) cues also enhances speech recognition, particularly in noisy environments. This study investigated the degree to which visual cues from a talker (i.e., mouth movements) aid speech categorization amidst noise interference by measuring participants' identification of clear and noisy speech (0 dB signal-to-noise ratio) presented in auditory-only or combined AV modalities (i.e., A, A+noise, AV, AV+noise conditions). Auditory noise expectedly weakened (i.e., shallower identification slopes) and slowed speech categorization. Interestingly, additional viseme cues largely counteracted noise-related decrements in performance and stabilized classification speeds in both clear and noise conditions suggesting more precise acoustic-phonetic representations with multisensory information. Results are parsimoniously described under a signal detection theory framework and by a reduction (visual cues) and increase (noise) in the precision of perceptual object representation, which were not due to lapses of attention or guessing. Collectively, findings show that (i) mapping sounds to categories aids speech perception in "cocktail party" environments; (ii) visual cues help lattice formation of auditory-phonetic categories to enhance and refine speech identification.
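The "shallower identification slopes" refer to the slope parameter of a psychometric function fitted to identification responses along a stimulus continuum. A sketch of such a fit with SciPy, using synthetic data and placeholder values:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        """Psychometric function: proportion of one category response at step x."""
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    steps = np.arange(1, 8)  # 7-step continuum
    p_clear = np.array([0.02, 0.05, 0.10, 0.50, 0.90, 0.96, 0.99])
    p_noise = np.array([0.10, 0.18, 0.30, 0.50, 0.70, 0.82, 0.90])

    popt_clear, _ = curve_fit(logistic, steps, p_clear, p0=[4.0, 1.0])
    popt_noise, _ = curve_fit(logistic, steps, p_noise, p0=[4.0, 1.0])
    print(popt_clear[1] > popt_noise[1])  # True: noise yields a shallower slope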
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Lauren Sigley
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Gwyneth A Lewis
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
40
Cartocci G, Maglione AG, Vecchiato G, Modica E, Rossi D, Malerba P, Marsella P, Scorpecci A, Giannantonio S, Mosca F, Leone CA, Grassia R, Babiloni F. Frontal brain asymmetries as effective parameters to assess the quality of audiovisual stimuli perception in adult and young cochlear implant users. Acta Otorhinolaryngol Ital 2019; 38:346-360. [PMID: 30197426 PMCID: PMC6146571 DOI: 10.14639/0392-100x-1407] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2016] [Accepted: 08/01/2017] [Indexed: 11/23/2022]
Abstract
How is music perceived by cochlear implant (CI) users? This question arises as "the next step" given the impressive performance obtained by these patients in language perception. Furthermore, how can music perception be evaluated beyond self-report ratings, in order to obtain measurable data? To address this question, estimation of the frontal electroencephalographic (EEG) alpha activity imbalance, acquired through a 19-channel EEG cap, appears to be a suitable instrument for measuring the approach/withdrawal (AW index) reaction to external stimuli. Specifically, a greater AW value indicates an increased propensity to approach the stimulus, and a lower one a tendency to withdraw from it. Additionally, because children typically acquire deafness prelingually and adults postlingually, the two groups would probably differ in music perception. The aim of the present study was to investigate children and adult CI users, in unilateral (UCI) and bilateral (BCI) implantation conditions, during three experimental conditions of music exposure (normal, distorted and mute). Additionally, a study of functional connectivity patterns within cerebral networks was performed to investigate functioning patterns in the different experimental populations. As a general result, patterns in BCI patients and control (CTRL) subjects were congruent, characterised by the lowest values for the distorted condition (vs. the normal and mute conditions) in the AW index and in the connectivity analysis. Additionally, the normal and distorted conditions differed significantly in CI and CTRL adults, and in CTRL children, but not in CI children. These results suggest a higher capacity for discrimination and approach motivation towards normal music in CTRL and BCI subjects, but not in UCI patients. Therefore, with respect to music perception, CTRL and BCI participants appear more similar to each other than to UCI subjects, as estimated by measurable and not self-reported parameters.
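The approach/withdrawal (AW) index in this literature is typically derived from the frontal alpha power imbalance; one common form is the difference of log alpha power over right versus left frontal sites. Because alpha power varies inversely with cortical activation, larger values are read as relatively greater left frontal activation, i.e. approach. A hedged sketch; the channel pair and scaling are assumptions, not the paper's exact computation.

    import numpy as np

    def aw_index(alpha_power_right, alpha_power_left):
        """Frontal alpha asymmetry, e.g. ln(right-site alpha) - ln(left-site alpha)."""
        return np.log(alpha_power_right) - np.log(alpha_power_left)

    print(aw_index(6.2, 4.8))  # > 0: approach-leaning response to the stimulus
    print(aw_index(4.1, 5.9))  # < 0: withdrawal-leaning response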
Affiliation(s)
- G Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Italy; these authors contributed equally to the present article
- A G Maglione
- BrainSigns Srl, Rome, Italy; these authors contributed equally to the present article
- G Vecchiato
- Department of Molecular Medicine, Sapienza University of Rome, Italy
- E Modica
- Department of Anatomical, Histological, Forensic & Orthopedic Sciences, Sapienza University of Rome, Italy
- D Rossi
- Department of Anatomical, Histological, Forensic & Orthopedic Sciences, Sapienza University of Rome, Italy
- P Malerba
- Cochlear Italia Srl, Bologna, Italy
- P Marsella
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- A Scorpecci
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- S Giannantonio
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- F Mosca
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- C A Leone
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- R Grassia
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- F Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Italy; BrainSigns Srl, Rome, Italy
41
Hirst RJ, Kicks EC, Allen HA, Cragg L. Cross-modal interference-control is reduced in childhood but maintained in aging: A cohort study of stimulus- and response-interference in cross-modal and unimodal Stroop tasks. J Exp Psychol Hum Percept Perform 2019; 45:553-572. [PMID: 30945905 PMCID: PMC6484713 DOI: 10.1037/xhp0000608] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Interference-control is the ability to exclude distractions and focus on a specific task or stimulus. However, it is currently unclear whether the same interference-control mechanisms underlie the ability to ignore unimodal and cross-modal distractions. In two experiments we assessed whether unimodal and cross-modal interference follow similar trajectories across development and aging and occur at similar processing levels. In Experiment 1, 42 children (6-11 years), 31 younger adults (18-25 years) and 32 older adults (60-84 years) identified color rectangles with either written (unimodal) or spoken (cross-modal) distractor words. Stimuli could be congruent, incongruent but mapped to the same response (stimulus-incongruent), or incongruent and mapped to different responses (response-incongruent), thus separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference was worst in childhood and old age; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal but not cross-modal response-interference also reduced accuracy. In Experiment 2 we compared the effect of audition on vision and vice versa in 52 children (6-11 years), 30 young adults (22-33 years) and 30 older adults (60-84 years). As in Experiment 1, older adults maintained the ability to ignore cross-modal distraction arising from either modality, and neither type of cross-modal distraction limited accuracy in adults. However, cross-modal distraction still reduced accuracy in children, and children were more slowed by stimulus-interference than adults. We conclude that unimodal and cross-modal interference follow different life-span trajectories and that differences in stimulus- and response-interference may increase cross-modal distractibility in childhood.
Affiliation(s)
- Ella C Kicks
- School of Psychology and Neuroscience, University of St. Andrews
- Lucy Cragg
- School of Psychology, University of Nottingham
42
Kressner AA, May T, Dau T. Effect of Noise Reduction Gain Errors on Simulated Cochlear Implant Speech Intelligibility. Trends Hear 2019; 23:2331216519825930. [PMID: 30755108 PMCID: PMC6378641 DOI: 10.1177/2331216519825930] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
It has been suggested that the most important factor for obtaining high speech intelligibility in noise with cochlear implant (CI) recipients is to preserve the low-frequency amplitude modulations of speech across time and frequency by, for example, minimizing the amount of noise in the gaps between speech segments. In contrast, it has also been argued that the transient parts of the speech signal, such as speech onsets, provide the most important information for speech intelligibility. The present study investigated the relative impact of these two factors on the potential benefit of noise reduction for CI recipients by systematically introducing noise estimation errors within speech segments, speech gaps, and the transitions between them. The introduction of these noise estimation errors directly induces errors in the noise reduction gains within each of these regions. Speech intelligibility in both stationary and modulated noise was then measured using a CI simulation tested on normal-hearing listeners. The results suggest that minimizing noise in the speech gaps can improve intelligibility, at least in modulated noise. However, significantly larger improvements were obtained when both the noise in the gaps was minimized and the speech transients were preserved. These results imply that the ability to identify the boundaries between speech segments and speech gaps may be one of the most important factors for a noise reduction algorithm because knowing the boundaries makes it possible to minimize the noise in the gaps as well as enhance the low-frequency amplitude modulations of the speech.
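The gains at issue are time-frequency attenuation factors; the textbook Wiener form is G = SNR / (SNR + 1), and gain errors can be simulated by perturbing G only in the gaps between speech segments. A simplified sketch of the idea (the study's vocoder simulation is not reproduced, and the error model below is an assumption):

    import numpy as np

    def wiener_gain(snr_linear):
        """Textbook Wiener gain per time-frequency unit: SNR / (SNR + 1)."""
        return snr_linear / (snr_linear + 1.0)

    def apply_gain_with_gap_errors(noisy_tf, snr_tf, gap_mask, error_db=6.0):
        """Attenuate a noisy time-frequency representation, but overestimate the
        gain in speech gaps (gap_mask == True), leaving extra noise behind there."""
        gain = wiener_gain(snr_tf)
        boosted = np.minimum(1.0, gain * 10 ** (error_db / 20))
        return noisy_tf * np.where(gap_mask, boosted, gain)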
Affiliation(s)
- Abigail A Kressner
- Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
- Tobias May
- Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
- Torsten Dau
- Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
43
Abbott NT, Shahin AJ. Cross-modal phonetic encoding facilitates the McGurk illusion and phonemic restoration. J Neurophysiol 2018; 120:2988-3000. [PMID: 30303762 DOI: 10.1152/jn.00262.2018] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In spoken language, audiovisual (AV) perception occurs when the visual modality influences encoding of acoustic features (e.g., phonetic representations) at the auditory cortex. We examined how visual speech (mouth movements) transforms phonetic representations, indexed by changes to the N1 auditory evoked potential (AEP). EEG was acquired while human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables, /ba/ and /wa/, presented in auditory-only or AV congruent or incongruent contexts or in a context in which the consonants were replaced by white noise (noise replaced). Subjects reported whether they heard "ba" or "wa." We hypothesized that the auditory N1 amplitude during illusory perception (caused by incongruent AV input, as in the McGurk illusion, or white noise-replaced consonants in CV utterances) should shift to reflect the auditory N1 characteristics of the phonemes conveyed visually (by mouth movements) as opposed to acoustically. Indeed, the N1 AEP became larger and occurred earlier when listeners experienced illusory "ba" (video /ba/, audio /wa/, heard as "ba") and vice versa when they experienced illusory "wa" (video /wa/, audio /ba/, heard as "wa"), mirroring the N1 AEP characteristics for /ba/ and /wa/ observed in natural acoustic situations (e.g., auditory-only setting). This visually mediated N1 behavior was also observed for noise-replaced CVs. Taken together, the findings suggest that information relayed by the visual modality modifies phonetic representations at the auditory cortex and that similar neural mechanisms support the McGurk illusion and visually mediated phonemic restoration. NEW & NOTEWORTHY Using a variant of the McGurk illusion experimental design (using the syllables /ba/ and /wa/), we demonstrate that lipreading influences phonetic encoding at the auditory cortex. We show that the N1 auditory evoked potential morphology shifts to resemble the N1 morphology of the syllable conveyed visually. We also show similar N1 shifts when the consonants are replaced by white noise, suggesting that the McGurk illusion and the visually mediated phonemic restoration rely on common mechanisms.
Collapse
Affiliation(s)
- Noelle T Abbott
- Center for Mind and Brain, University of California, Davis, California; San Diego State University-University of California, San Diego Joint Doctoral Program in Language and Communicative Disorders, San Diego, California
- Antoine J Shahin
- Center for Mind and Brain, University of California, Davis, California; Department of Cognitive and Information Sciences, University of California, Merced, California
44
McDaniel J, Camarata S, Yoder P. Comparing Auditory-Only and Audiovisual Word Learning for Children With Hearing Loss. J Deaf Stud Deaf Educ 2018; 23:382-398. [PMID: 29767759 PMCID: PMC6146754 DOI: 10.1093/deafed/eny016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 04/16/2018] [Accepted: 05/04/2018] [Indexed: 06/08/2023]
Abstract
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
45
Hirst RJ, Stacey JE, Cragg L, Stacey PC, Allen HA. The threshold for the McGurk effect in audio-visual noise decreases with development. Sci Rep 2018; 8:12372. [PMID: 30120399] [PMCID: PMC6098036] [DOI: 10.1038/s41598-018-30798-8]
Abstract
Across development, vision increasingly influences audio-visual perception. This is evidenced in illusions such as the McGurk effect, in which a seen mouth movement changes the perceived sound. The current paper assessed the effects of manipulating the clarity of the heard and seen signal on the McGurk effect in children aged 3-6 (n = 29), 7-9 (n = 32) and 10-12 (n = 29) years, and in adults aged 20-35 years (n = 32). Auditory noise increased, and visual blur decreased, the likelihood of vision changing auditory perception. Based upon a proposed developmental shift from auditory to visual dominance, we predicted that younger children would be less susceptible to McGurk responses, and that adults would continue to be influenced by vision at higher levels of visual noise and with less auditory noise. Susceptibility to the McGurk effect was higher in adults compared with 3-6-year-olds and 7-9-year-olds, but not 10-12-year-olds. Younger children required more auditory noise, and less visual noise, than adults to induce McGurk responses (i.e., adults and older children were more easily influenced by vision). Reduced susceptibility in childhood supports the theory that sensory dominance shifts across development and reaches adult-like levels by 10 years of age.
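The noise-threshold logic of this design can be made concrete with a psychometric fit. The sketch below fits a logistic function to the proportion of McGurk (visually driven) responses as a function of auditory noise level; the data values and the choice of a logistic form are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """P(McGurk response) as a function of auditory noise level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Made-up data: auditory noise level vs. proportion of visually driven
# ("McGurk") responses for one age group.
noise_level = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])
p_mcgurk = np.array([0.05, 0.10, 0.30, 0.60, 0.80, 0.90])

(threshold, slope), _ = curve_fit(logistic, noise_level, p_mcgurk, p0=[10.0, 0.5])
# A lower threshold means less auditory noise is needed before vision
# captures the percept; the developmental shift described above would
# appear as thresholds decreasing from childhood to adulthood.
print(f"McGurk threshold: {threshold:.1f}, slope: {slope:.2f}")
```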
Affiliation(s)
- Lucy Cragg: University of Nottingham, Nottingham, UK
46
Butera IM, Stevenson RA, Mangus BD, Woynaroski TG, Gifford RH, Wallace MT. Audiovisual Temporal Processing in Postlingually Deafened Adults with Cochlear Implants. Sci Rep 2018; 8:11345. [PMID: 30054512] [PMCID: PMC6063927] [DOI: 10.1038/s41598-018-29598-x]
Abstract
For many cochlear implant (CI) users, visual cues are vitally important for interpreting the impoverished auditory speech information that an implant conveys. Although the temporal relationship between auditory and visual stimuli is crucial for how this information is integrated, audiovisual temporal processing in CI users is poorly understood. In this study, we tested unisensory (auditory alone, visual alone) and multisensory (audiovisual) temporal processing in postlingually deafened CI users (n = 48) and normal-hearing controls (n = 54) using simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. We varied the onset timing between the auditory and visual components of either a syllable/viseme or a simple flash/beep pairing, and participants indicated either which stimulus appeared first (TOJ) or whether the pair occurred simultaneously (SJ). Results indicate that temporal binding windows (the interval within which stimuli are likely to be perceptually 'bound') are not significantly different between groups for either speech or non-speech stimuli. However, the point of subjective simultaneity for speech was less visually leading in CI users, who, interestingly, also had better visual-only TOJ thresholds. Further signal detection analysis suggests that this SJ shift may be due to greater visual bias within the CI group, perhaps reflecting heightened attentional allocation to visual cues.
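One common way to estimate the temporal binding window (TBW) and point of subjective simultaneity (PSS) from simultaneity-judgment data is to fit a Gaussian to the proportion of 'simultaneous' responses across stimulus onset asynchronies. The sketch below shows such a fit on made-up data; the functional form and the criterion level are assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, pss, sigma, amp):
    """Proportion of 'simultaneous' responses as a Gaussian over SOA (ms)."""
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

# Made-up SJ data; negative SOA = auditory leading.
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sim = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

(pss, sigma, amp), _ = curve_fit(sj_curve, soas, p_sim, p0=[0.0, 150.0, 1.0])

# One common TBW definition: the SOA range where the fitted curve
# exceeds a criterion (here 0.75), solved analytically from the Gaussian.
criterion = 0.75
half_width = sigma * np.sqrt(2.0 * np.log(amp / criterion))
print(f"PSS: {pss:.0f} ms, TBW: {2.0 * half_width:.0f} ms")
```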
Affiliation(s)
- Iliza M Butera: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson: Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Brannon D Mangus: Murfreesboro Medical Clinic and Surgicenter, Murfreesboro, TN, USA
- Tiffany G Woynaroski: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T Wallace: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
47
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064] [DOI: 10.1097/aud.0000000000000435]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception in general, and for speech intelligibility specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributed to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to those of individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains from audiovisual integration, suggesting both a sensitive period in development for the brain networks that subserve these integrative functions and a role for length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that the underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess the mechanisms of multisensory integration in individuals with hearing loss, both with and without CIs.
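One widely used way to quantify the perceptual gain attributable to multisensory integration discussed here is visual enhancement normalized by the headroom above auditory-only performance. The snippet below is a generic sketch of that metric, not a formula taken from this review.

```python
def visual_enhancement(av, a):
    """Normalized audiovisual gain: (AV - A) / (1 - A).

    av, a : proportion correct (0-1) in the audiovisual and auditory-only
    conditions. Dividing by (1 - A) expresses the gain as a fraction of
    the room left for improvement, so listeners with very different
    auditory-only baselines can be compared on the same scale.
    """
    if a >= 1.0:
        return 0.0  # no headroom left; gain is undefined, treat as zero
    return (av - a) / (1.0 - a)

# Illustrative: a CI user scoring 45% auditory-only and 80% audiovisual
print(visual_enhancement(0.80, 0.45))  # ~0.64
```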
Affiliation(s)
- Ryan A Stevenson: 1 Department of Psychology, University of Western Ontario, London, Ontario, Canada; 2 Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; 3 Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland; 4 Vanderbilt Brain Institute, Nashville, Tennessee; 5 Vanderbilt Kennedy Center, Nashville, Tennessee; 6 Department of Psychology, Vanderbilt University, Nashville, Tennessee; 7 Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; 8 Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
48
Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users. Ear Hear 2018; 38:236-243. [PMID: 27764001] [DOI: 10.1097/aud.0000000000000379]
Abstract
OBJECTIVES: Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity.
DESIGN: Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent-samples t tests.
RESULTS: Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated significantly better visual temporal acuity than nonproficient CI users.
CONCLUSIONS: These findings provide the first behavioral evidence that visual temporal acuity is related to post-implantation CI proficiency as indexed by auditory-only speech perception performance. These preliminary data bring to light the possible future role of visual temporal acuity in predicting CI outcomes before implantation, as well as the possible utility of visual training methods in improving CI outcomes.
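The correlational logic of this design can be sketched in a few lines. The study reports partial Pearson and Spearman correlations; the snippet below shows plain (non-partial) versions on made-up data for n = 10 users, purely for illustration.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Made-up data for n = 10 CI users: visual temporal order judgment
# threshold in ms (lower = more acute) and word recognition in % correct.
toj_ms = np.array([35.0, 48.0, 52.0, 60.0, 71.0, 80.0, 95.0, 110.0, 130.0, 150.0])
words_pct = np.array([82.0, 78.0, 75.0, 70.0, 66.0, 58.0, 50.0, 44.0, 38.0, 30.0])

rho, p_rho = spearmanr(toj_ms, words_pct)
r, p_r = pearsonr(toj_ms, words_pct)
# A negative correlation would mirror the reported pattern: more acute
# (smaller) visual TOJ thresholds accompany better speech perception.
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}); Pearson r = {r:.2f} (p = {p_r:.3f})")
```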
49
McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS). Psychon Bull Rev 2018; 24:863-872. [PMID: 27562763] [DOI: 10.3758/s13423-016-1148-9]
Abstract
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
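A heavily simplified sketch in the spirit of the cited probabilistic model is given below: fusion probability is modeled as a cumulative normal over the gap between a perceiver's disparity threshold and the stimulus disparity, which separates stimulus properties from perceiver properties. The parameterization is an assumption for illustration, not the published model's exact likelihood.

```python
from scipy.stats import norm

def p_fusion(disparity, threshold, noise):
    """Schematic probability of an illusory (fused) McGurk percept.

    Fusion is assumed when the encoded audiovisual disparity falls below
    the perceiver's threshold; encoding noise smears that comparison,
    yielding a cumulative-normal response curve. disparity is a stimulus
    property; threshold and noise are perceiver properties, which is
    what allows stimulus and perceiver effects to be separated.
    """
    return norm.cdf((threshold - disparity) / noise)

# Illustrative: one perceiver, a low- vs. a high-disparity stimulus
print(p_fusion(disparity=0.3, threshold=0.6, noise=0.2))  # strong fusion
print(p_fusion(disparity=0.9, threshold=0.6, noise=0.2))  # weak fusion
```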
50
Wang Y, Zhou W, Cheng Y, Bian X. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children. Front Psychol 2017; 8:2281. [PMID: 29312104] [PMCID: PMC5743909] [DOI: 10.3389/fpsyg.2017.02281]
Abstract
This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 years were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered pattern of eye contact with others and difficulty matching visual and voice cues in emotion perception. The negative consequences of these gaze patterns should be addressed through early rehabilitation for hearing-impaired children with assistive devices.
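The gaze-pattern result rests on comparing fixation time across face regions. A minimal sketch of such an area-of-interest (AOI) analysis is shown below, with coordinates, AOI boundaries, and data all assumed for illustration rather than taken from the study.

```python
import numpy as np

def face_half_proportions(fix_y, fix_dur, face_top, face_bottom, split_y):
    """Proportion of on-face fixation time on the upper vs. lower face.

    fix_y    : fixation y-coordinates in pixels (origin at top of screen)
    fix_dur  : fixation durations in ms
    face_top, face_bottom : vertical extent of the face AOI
    split_y  : y-coordinate dividing the upper (eyes) from the lower
               (mouth) region of the face
    """
    on_face = (fix_y >= face_top) & (fix_y <= face_bottom)
    upper = on_face & (fix_y < split_y)
    lower = on_face & (fix_y >= split_y)
    total = fix_dur[on_face].sum()
    return fix_dur[upper].sum() / total, fix_dur[lower].sum() / total

# Made-up fixation record for one trial
y = np.array([120.0, 140.0, 300.0, 320.0, 150.0, 310.0])
dur = np.array([200.0, 250.0, 300.0, 180.0, 220.0, 260.0])
print(face_half_proportions(y, dur, face_top=80, face_bottom=400, split_y=240))
```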
Affiliation(s)
- Yifang Wang: School of Psychology, Capital Normal University, Beijing, China
- Wei Zhou: School of Psychology, Capital Normal University, Beijing, China
- Xiaoying Bian: School of Psychology, Capital Normal University, Beijing, China