1
Yu Q, Li H, Li S, Tang P. Prosodic and Visual Cues Facilitate Irony Comprehension by Mandarin-Speaking Children With Cochlear Implants. J Speech Lang Hear Res 2024;67:2172-2190. [PMID: 38820233] [DOI: 10.1044/2024_jslhr-23-00701]
Abstract
PURPOSE This study investigated irony comprehension by Mandarin-speaking children with cochlear implants, focusing on how prosodic and visual cues contribute to their comprehension and whether second-order Theory of Mind is required for using these cues. METHOD We tested 52 Mandarin-speaking children with cochlear implants (aged 3-7 years) and 52 age- and gender-matched children with normal hearing. All children completed a Theory of Mind test and a story comprehension test. Ironic stories were presented in three conditions, each providing different cues: (a) context only, (b) context and prosody, and (c) context, prosody, and visual cues. Accuracy of story understanding was compared across the three conditions to examine the role of prosodic and visual cues. RESULTS The results showed that, compared to the context-only condition, the additional prosodic and visual cues both improved the accuracy of irony comprehension for children with cochlear implants, similar to their normal-hearing peers. Furthermore, such improvements were observed for all children, regardless of whether they passed the second-order Theory of Mind test. CONCLUSIONS This study is the first to demonstrate the benefits of prosodic and visual cues in irony comprehension, without reliance on second-order Theory of Mind, for Mandarin-speaking children with cochlear implants. The findings suggest that prosodic and visual cues could be incorporated into intervention strategies to promote irony comprehension.
Affiliation(s)
- Qianxi Yu
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Honglan Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Shanpeng Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Ping Tang
- School of Foreign Studies, Nanjing University of Science and Technology, China
2
Fletcher MD, Perry SW, Thoidis I, Verschuur CA, Goehring T. Improved tactile speech robustness to background noise with a dual-path recurrent neural network noise-reduction method. Sci Rep 2024;14:7357. [PMID: 38548750] [PMCID: PMC10978864] [DOI: 10.1038/s41598-024-57312-7]
Abstract
Many people with hearing loss struggle to understand speech in noisy environments, making noise robustness critical for hearing-assistive devices. Recently developed haptic hearing aids, which convert audio to vibration, can improve speech-in-noise performance for cochlear implant (CI) users and assist those unable to access hearing-assistive devices. They are typically body-worn rather than head-mounted, allowing additional space for batteries and microprocessors, and so can deploy more sophisticated noise-reduction techniques. The current study assessed whether a real-time-feasible dual-path recurrent neural network (DPRNN) can improve tactile speech-in-noise performance. Audio was converted to vibration on the wrist using a vocoder method, either with or without noise reduction. Performance was tested for speech in a multi-talker noise (recorded at a party) with a 2.5-dB signal-to-noise ratio. An objective assessment showed the DPRNN improved the scale-invariant signal-to-distortion ratio by 8.6 dB and substantially outperformed traditional noise-reduction (log-MMSE). A behavioural assessment in 16 participants showed the DPRNN improved tactile-only sentence identification in noise by 8.2%. This suggests that advanced techniques like the DPRNN could substantially improve outcomes with haptic hearing aids. Low-cost haptic devices could soon be an important supplement to hearing-assistive devices such as CIs or offer an alternative for people who cannot access CI technology.
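The scale-invariant signal-to-distortion ratio (SI-SDR) used as the objective metric above has a standard closed form: zero-mean both signals, project the estimate onto the reference to find the optimal scaling, and take the energy ratio of the scaled target to the residual in decibels. The sketch below follows that common definition from the source-separation literature; it is an illustrative implementation under that assumption, not the study's own code, and the function name and test signals are made up for the example.

```python
import numpy as np

def si_sdr(reference, estimate):
    """Scale-invariant signal-to-distortion ratio in dB.

    Both signals are zero-meaned, the estimate is projected onto the
    reference (least-squares optimal scaling), and the energy ratio of
    the scaled target to the residual is expressed in decibels.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling factor from projecting the estimate onto the reference.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * np.log10(np.sum(target**2) / np.sum(residual**2))
```

By this definition the metric is unchanged when the estimate is rescaled, and an 8.6-dB improvement like the one reported for the DPRNN corresponds to roughly a sevenfold reduction in residual distortion energy relative to the scaled target.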
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Iordanis Thoidis
- School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Tobias Goehring
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
3
McDaniel J, Krimm H, Schuele CM. SLPs' perceptions of language learning myths about children who are DHH. J Deaf Stud Deaf Educ 2024;29:245-257. [PMID: 37742092] [PMCID: PMC10950421] [DOI: 10.1093/deafed/enad043]
Abstract
This article reports on speech-language pathologists' (SLPs') knowledge related to myths about spoken language learning of children who are deaf and hard of hearing (DHH). The broader study was designed as a step toward narrowing the research-practice gap and providing effective, evidence-based language services to children. In the broader study, SLPs (n = 106) reported their agreement/disagreement with myth statements and true statements (n = 52) about 7 clinical topics related to speech and language development. For the current report, participant responses to 7 statements within the DHH topic were analyzed. Participants exhibited a relative strength in bilingualism knowledge for spoken languages and a relative weakness in audiovisual integration knowledge. Much individual variation was observed. Participants' responses were more likely to align with current evidence about bilingualism if the participants had less experience as an SLP. The findings provide guidance on prioritizing topics for speech-language pathology preservice and professional development.
Affiliation(s)
- Jena McDaniel
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States
- Hannah Krimm
- Department of Communication Sciences and Special Education, University of Georgia, Athens, United States
- C Melanie Schuele
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States
4
Sabatier E, Leybaert J, Chetail F. Orthographic Learning in French-Speaking Deaf and Hard of Hearing Children. J Speech Lang Hear Res 2024;67:870-885. [PMID: 38394239] [DOI: 10.1044/2023_jslhr-23-00324]
Abstract
PURPOSE Children are assumed to acquire orthographic representations during autonomous reading by decoding new written words. The present study investigates how deaf and hard of hearing (DHH) children build new orthographic representations compared to typically hearing (TH) children. METHOD Twenty-nine DHH children, from 7.8 to 13.5 years old, with moderate-to-profound hearing loss, matched for reading level and chronological age to TH controls, were exposed to 10 pseudowords (novel words) in written stories. They then performed a spelling task and an orthographic recognition task on these new words. RESULTS In the spelling task, we found no difference in accuracy, but a difference in errors emerged between the two groups: phonologically plausible errors were less common in DHH children than in TH children. In the recognition task, DHH children were better than TH children at recognizing target pseudowords. Phonological strategies seemed to be used less by DHH children than by TH children, who very often chose phonological distractors. CONCLUSIONS Both groups created sufficiently detailed orthographic representations to complete the tasks, which supports the self-teaching hypothesis. DHH children used phonological information in both tasks but may have relied more on orthographic cues than TH children to build up orthographic representations. The combination of a spelling task and a recognition task, together with an analysis of the nature of errors, provides a methodological basis for further understanding the underlying cognitive processes.
Affiliation(s)
- Elodie Sabatier
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
- Jacqueline Leybaert
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
- Fabienne Chetail
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
5
Cychosz M, Edwards JR, Munson B, Romeo R, Kosie J, Newman RS. The everyday speech environments of preschoolers with and without cochlear implants. J Child Lang 2024:1-22. [PMID: 38362892] [PMCID: PMC11327381] [DOI: 10.1017/s0305000924000023]
Abstract
Children who receive cochlear implants develop spoken language on a protracted timescale. The home environment facilitates speech-language development, yet relatively little is known about how the environment differs between children with cochlear implants and typical hearing. We matched eighteen preschoolers with implants (31-65 months) to two groups of children with typical hearing: by chronological age and by hearing age. Each child completed a long-form, naturalistic audio recording of their home environment (approximately 16 hours per child; >730 hours of observation) to measure adult speech input, child vocal productivity, and caregiver-child interaction. Results showed that children with cochlear implants and typical hearing were exposed to and engaged in similar amounts of spoken language with caregivers. However, the home environment did not reflect developmental stages as closely for children with implants, or predict their speech outcomes as strongly. Home-based speech-language interventions should focus on the unique input-outcome relationships for this group of children with hearing loss.
6
Koupka G, Okalidou A, Nicolaidis K, Constantinidis J, Kyriafinis G, Menexes G. Voice Onset Time of Greek Stops Productions by Greek Children with Cochlear Implants and Normal Hearing. Folia Phoniatr Logop 2023;76:109-126. [PMID: 37497950] [DOI: 10.1159/000533133]
Abstract
INTRODUCTION Research on voice onset time (VOT) production of stops in children with cochlear implants (CI) versus normal hearing (NH) has reported conflicting results. Effects of age and place of articulation on VOT have not been examined for children with CI. The purpose of this study was to examine VOT production by Greek-speaking children with CI in comparison to NH controls, with a focus on the effects of age, type of stimuli, and place of articulation. METHODS Participants were 24 children with CI aged from 2;8 to 13;3 years and 24 age- and gender-matched children with NH. Words were elicited via a picture-naming task, and nonwords were elicited via a fast mapping procedure. RESULTS For voiced stops, children with CI showed longer VOT than children with NH, whereas VOT for voiceless stops was similar to that of NH peers. Also, in both voiced and voiceless stops, VOT differed as a function of age and place of articulation across groups. Differences as a function of stimulus type were only noted for voiced stops across groups. CONCLUSIONS For the voiced stop consonants, which demand more articulatory effort, VOT production in children with CI was longer than in children with NH. For the voiceless stop consonants, VOT production in children with CI is acquired at a young age.
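As background to these results, VOT is the interval from the stop release burst to the onset of vocal-fold voicing: by convention it is negative when voicing leads the release (prevoicing, as in Greek voiced stops) and positive when it lags (as in voiceless stops). A minimal sketch of that measurement convention, with an illustrative function name and times (not the study's analysis code):

```python
def vot_ms(burst_time_s, voicing_onset_s):
    """Voice onset time in milliseconds.

    Negative values mean voicing starts before the release burst
    (prevoicing); positive values mean voicing lags the release.
    """
    return (voicing_onset_s - burst_time_s) * 1000.0
```

In practice the burst and voicing onset are marked in acoustic analysis software such as Praat; the VOT value itself is just this signed difference.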
Affiliation(s)
- Georgia Koupka
- Educational and Social Policy, University of Macedonia, Thessaloniki, Greece
- Areti Okalidou
- Educational and Social Policy, University of Macedonia, Thessaloniki, Greece
- Katerina Nicolaidis
- Theoretical and Applied Linguistics, School of English, Aristotle University, Thessaloniki, Greece
- Jannis Constantinidis
- 1st Otorhinolaryngology Clinic of AHEPA Hospital, Thessaloniki, Greece
- Georgios Kyriafinis
- 1st Otorhinolaryngology Clinic of AHEPA Hospital, Thessaloniki, Greece
- George Menexes
- Faculty of Agriculture, Forestry and Natural Environment, Aristotle University, Thessaloniki, Greece
7
Humphries T, Mathur G, Napoli DJ, Padden C, Rathmann C. Deaf Children Need Rich Language Input from the Start: Support in Advising Parents. Children (Basel) 2022;9:1609. [PMID: 36360337] [PMCID: PMC9688581] [DOI: 10.3390/children9111609]
Abstract
Bilingual bimodalism is a great benefit to deaf children at home and in schooling. Deaf signing children perform better overall than non-signing deaf children, regardless of whether they use a cochlear implant. Raising a deaf child in a speech-only environment can carry cognitive and psycho-social risks that may have lifelong adverse effects. For children born deaf, or who become deaf in early childhood, we recommend comprehensible multimodal language exposure and engagement in joint activity with parents and friends to assure age-appropriate first-language acquisition. Accessible visual language input should begin as close to birth as possible. Hearing parents will need timely and extensive support; thus, we propose that, upon the birth of a deaf child and through the preschool years, among other things, the family needs an adult deaf presence in the home for several hours every day to be a linguistic model, to guide the family in taking sign language lessons, to show the family how to make spoken language accessible to their deaf child, and to be an encouraging liaison to deaf communities. While such a support program will be complicated and challenging to implement, it is far less costly than the harm of linguistic deprivation.
Affiliation(s)
- Tom Humphries
- Department of Communication, University of California at San Diego, La Jolla, CA 92093, USA
- Gaurav Mathur
- Department of Linguistics, Gallaudet University, Washington, DC 20002, USA
- Donna Jo Napoli
- Department of Linguistics, Swarthmore College, Swarthmore, PA 19081, USA
- Carol Padden
- Division of Social Sciences, Department of Communication and Dean, University of California at San Diego, La Jolla, CA 92093, USA
- Christian Rathmann
- Department of Deaf Studies and Sign Language Interpreting, Humboldt-Universität zu Berlin, 10019 Berlin, Germany
8
Corina DP, Coffey-Corina S, Pierotti E, Bormann B, LaMarr T, Lawyer L, Backer KC, Miller LM. Electrophysiological Examination of Ambient Speech Processing in Children With Cochlear Implants. J Speech Lang Hear Res 2022;65:3502-3517. [PMID: 36037517] [PMCID: PMC9913291] [DOI: 10.1044/2022_jslhr-22-00004]
Abstract
PURPOSE This research examined the expression of cortical auditory evoked potentials in a cohort of children who received cochlear implants (CIs) for treatment of congenital deafness (n = 28) and typically hearing controls (n = 28). METHOD We make use of a novel electroencephalography paradigm that permits the assessment of auditory responses to ambiently presented speech and evaluates the contributions of concurrent visual stimulation on this activity. RESULTS Our findings show group differences in the expression of auditory sensory and perceptual event-related potential components occurring in 80- to 200-ms and 200- to 300-ms time windows, with reductions in amplitude and a greater latency difference for CI-using children. Relative to typically hearing children, current source density analysis showed muted responses to concurrent visual stimulation in CI-using children, suggesting less cortical specialization and/or reduced responsiveness to auditory information that limits the detection of the interaction between sensory systems. CONCLUSION These findings indicate that even in the face of early interventions, CI-using children may exhibit disruptions in the development of auditory and multisensory processing.
Affiliation(s)
- David P. Corina
- Department of Linguistics, University of California, Davis
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Elizabeth Pierotti
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Brett Bormann
- Center for Mind and Brain, University of California, Davis
- Neurobiology, Physiology and Behavior, University of California, Davis
- Todd LaMarr
- Center for Mind and Brain, University of California, Davis
- Laurel Lawyer
- Center for Mind and Brain, University of California, Davis
- Lee M. Miller
- Center for Mind and Brain, University of California, Davis
- Neurobiology, Physiology and Behavior, University of California, Davis
- Department of Otolaryngology/Head and Neck Surgery, University of California, Davis
9
Cross-Modal Reorganization From Both Visual and Somatosensory Modalities in Cochlear Implanted Children and Its Relationship to Speech Perception. Otol Neurotol 2022;43:e872-e879. [PMID: 35970165] [DOI: 10.1097/mao.0000000000003619]
Abstract
HYPOTHESIS We hypothesized that children with cochlear implants (CIs) who demonstrate cross-modal reorganization by vision also demonstrate cross-modal reorganization by somatosensation and that these processes are interrelated and impact speech perception. BACKGROUND Cross-modal reorganization, which occurs when a deprived sensory modality's cortical resources are recruited by other intact modalities, has been proposed as a source of variability underlying speech perception in deaf children with CIs. Visual and somatosensory cross-modal reorganization of auditory cortex have been documented separately in CI children, but reorganization in these modalities has not been documented within the same subjects. Our goal was to examine the relationship between cross-modal reorganization from both visual and somatosensory modalities within a single group of CI children. METHODS We analyzed high-density electroencephalogram responses to visual and somatosensory stimuli and current density reconstruction of brain activity sources. Speech perception in noise testing was performed. Current density reconstruction patterns were analyzed within the entire subject group and across groups of CI children exhibiting good versus poor speech perception. RESULTS Positive correlations between visual and somatosensory cross-modal reorganization suggested that neuroplasticity in different sensory systems may be interrelated. Furthermore, CI children with good speech perception did not show recruitment of frontal or auditory cortices during visual processing, unlike CI children with poor speech perception. CONCLUSION Our results reflect changes in cortical resource allocation in pediatric CI users. Cross-modal recruitment of auditory and frontal cortices by vision, and cross-modal reorganization of auditory cortex by somatosensation, may underlie variability in speech and language outcomes in CI children.
10
Zhou X, Feng M, Hu Y, Zhang C, Zhang Q, Luo X, Yuan W. The Effects of Cortical Reorganization and Applications of Functional Near-Infrared Spectroscopy in Deaf People and Cochlear Implant Users. Brain Sci 2022;12:1150. [PMID: 36138885] [PMCID: PMC9496692] [DOI: 10.3390/brainsci12091150]
Abstract
A cochlear implant (CI) is currently the only FDA-approved biomedical device that can restore hearing for the majority of patients with severe-to-profound sensorineural hearing loss (SNHL). While prelingually and postlingually deaf individuals benefit substantially from CIs, outcomes after implantation vary greatly. Numerous studies have attempted to identify the variables that affect CI outcomes, including the personal characteristics of CI candidates, environmental variables, and device-related variables. Yet because all of these variables only roughly predict auditory performance with a CI, up to 80% of the variance remains unexplained. Brain structure/function differences after hearing deprivation, that is, cortical reorganization, have gradually attracted the attention of neuroscientists. The cross-modal reorganization of the auditory cortex following deafness is thought to be a key factor in the success of CI. Because the neural mechanisms by which this reorganization impacts CI learning and rehabilitation have not been revealed, its adaptive and maladaptive consequences for CI outcomes have recently been the subject of debate. This review describes the evidence for the different roles of cross-modal reorganization in CI performance and attempts to explore the possible reasons. Additionally, understanding the core influencing mechanism requires taking into account the cortical changes from deafness to hearing restoration, but methodological issues have restricted longitudinal research on cortical function in CI users. Functional near-infrared spectroscopy (fNIRS) has been increasingly used for the study of brain function and language assessment in CI users because of its unique advantages and is considered to have great potential. Here, we review studies on auditory cortex reorganization in deaf patients and CI recipients and then illustrate the feasibility of fNIRS as a neuroimaging tool for predicting and assessing speech performance in CI recipients.
Affiliation(s)
- Xiaoqing Zhou
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Menglong Feng
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Yaqin Hu
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chanyuan Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Qingling Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Xiaoqin Luo
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Wei Yuan
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Correspondence: ; Tel.: +86-23-63535180
11
Nicholson N, Rhoades EA, Glade RE. Analysis of Health Disparities in the Screening and Diagnosis of Hearing Loss: Early Hearing Detection and Intervention Hearing Screening Follow-Up Survey. Am J Audiol 2022;31:764-788. [PMID: 35613624] [DOI: 10.1044/2022_aja-21-00014]
Abstract
PURPOSE The purpose of this study was to (a) provide introductory literature regarding cultural constructs, health disparities, and social determinants of health (SDoH); (b) summarize the literature regarding the Centers for Disease Control and Prevention (CDC) Early Hearing Detection and Intervention (EHDI) Hearing Screening Follow-Up Survey (HSFS) data; (c) explore the CDC EHDI HSFS data regarding the contribution of maternal demographics to loss-to-follow-up/loss-to-documentation (LTF/D) between hearing screening and audiologic diagnosis for 2016, 2017, and 2018; and (d) examine these health disparities within the context of potential ethnoracial biases. METHOD This is a comprehensive narrative literature review of cultural constructs, hearing health disparities, and SDoH as they relate to the CDC EHDI HSFS data. We explore the maternal demographic data reported on the CDC EHDI website and report disparities for maternal age, education, ethnicity, and race for 2016, 2017, and 2018. We focus on LTF/D for screening and diagnosis within the context of racial and cultural bias. RESULTS The literature review demonstrates the increase in quality of the CDC EHDI HSFS data over the past 2 decades. LTF/D rates for hearing screening and audiologic diagnostic testing have improved from higher than 60% to current rates of less than 30%. Comparisons of diagnostic completion rates reported on the CDC website for the EHDI HSFS 2016, 2017, and 2018 data show trends for maternal age, education, and race, but not for ethnicity. Trends were defined as changes of more than 10% in variables averaged over the 3-year period (2016-2018). CONCLUSIONS Although there have been significant improvements in LTF/D over the past 2 decades, there continue to be opportunities for further improvement. Beyond neonatal screening, delays continue to be reported in the diagnosis of young children with hearing loss.
Notwithstanding the extraordinarily diverse families within the United States, the imperative is to minimize such delays so that all children with hearing loss can, at the very least, have auditory accessibility to spoken language by 3 months of age. Conscious awareness is essential before developing a potentially effective plan of action that might remediate the problem.
Affiliation(s)
- Rachel E. Glade
- Communication Science and Disorders, University of Arkansas, Fayetteville
12
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022;187:127-143. [PMID: 35964967] [DOI: 10.1016/b978-0-12-823493-8.00026-2]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Collapse
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
| | - Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
13
Alzaher M, Vannson N, Deguine O, Marx M, Barone P, Strelnikov K. Brain plasticity and hearing disorders. Rev Neurol (Paris) 2021; 177:1121-1132. [PMID: 34657730 DOI: 10.1016/j.neurol.2021.09.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 09/06/2021] [Accepted: 09/10/2021] [Indexed: 11/30/2022]
Abstract
Permanently changed sensory stimulation can modify functional connectivity patterns in the healthy brain and in pathology. In pathology, these adaptive modifications of the brain are referred to as compensation, and the resulting configurations of functional connectivity are called compensatory plasticity. The variability and extent of auditory deficits caused by impairments of the hearing system determine the associated brain reorganization and rehabilitation. In this review, we consider cross-modal and intra-modal brain plasticity related to bilateral and unilateral hearing loss and their restoration using cochlear implantation. Cross-modal brain plasticity may have both beneficial and detrimental effects in hearing disorders. It is beneficial when it improves a patient's adaptation to the visuo-auditory environment. However, the occupation of the auditory cortex by visual functions may hinder the restoration of hearing with cochlear implants. Regarding intra-modal plasticity, the loss of interhemispheric asymmetry in asymmetric hearing loss is deleterious for auditory spatial localization. Research on brain plasticity in hearing disorders can advance our understanding of brain plasticity and improve patient rehabilitation through prognostic, evidence-based approaches from cognitive neuroscience combined with objective post-rehabilitation neuroimaging biomarkers of this plasticity.
Affiliation(s)
- M Alzaher
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- N Vannson
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- O Deguine
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- M Marx
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- P Barone
- Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- K Strelnikov
- Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
14
Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. [PMID: 34531717 PMCID: PMC8439542 DOI: 10.3389/fnins.2021.723877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 08/11/2021] [Indexed: 01/07/2023] Open
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. 
Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
15
Lalonde K, McCreery RW. Audiovisual Enhancement of Speech Perception in Noise by School-Age Children Who Are Hard of Hearing. Ear Hear 2021; 41:705-719. [PMID: 32032226 PMCID: PMC7822589 DOI: 10.1097/aud.0000000000000830] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this study was to examine age- and hearing-related differences in school-age children's benefit from visual speech cues. The study addressed three questions: (1) Do age and hearing loss affect the degree of audiovisual (AV) speech enhancement in school-age children? (2) Are there age- and hearing-related differences in the mechanisms underlying AV speech enhancement in school-age children? (3) What cognitive and linguistic variables predict individual differences in AV benefit among school-age children? DESIGN Forty-eight children between 6 and 13 years of age (19 with mild to severe sensorineural hearing loss; 29 with normal hearing) and 14 adults with normal hearing completed measures of auditory and AV syllable detection and/or sentence recognition in a two-talker masker and a spectrally matched noise masker. Children also completed standardized behavioral measures of receptive vocabulary, visuospatial working memory, and executive attention. Mixed linear modeling was used to examine effects of modality, listener group, and masker on sentence recognition accuracy and syllable detection thresholds. Pearson correlations were used to examine the relationship between individual differences in children's AV enhancement (AV minus auditory-only) and age, vocabulary, working memory, executive attention, and degree of hearing loss. RESULTS Significant AV enhancement was observed across all tasks, masker types, and listener groups. AV enhancement of sentence recognition was similar across maskers, but children with normal hearing exhibited less AV enhancement of sentence recognition than adults with normal hearing and children with hearing loss. AV enhancement of syllable detection was greater in the two-talker masker than in the noise masker, but did not vary significantly across listener groups. Degree of hearing loss correlated positively with individual differences in AV benefit on the sentence recognition task in noise, but not on the detection task.
None of the cognitive and linguistic variables correlated with individual differences in AV enhancement of syllable detection or sentence recognition. CONCLUSIONS Although AV benefit to syllable detection results from the use of visual speech to increase temporal expectancy, AV benefit to sentence recognition requires that an observer extracts phonetic information from the visual speech signal. The findings from this study suggest that all listener groups were equally good at using temporal cues in visual speech to detect auditory speech, but that adults with normal hearing and children with hearing loss were better than children with normal hearing at extracting phonetic information from the visual signal and/or using visual speech information to access phonetic/lexical representations in long-term memory. These results suggest that standard, auditory-only clinical speech recognition measures likely underestimate real-world speech recognition skills of children with mild to severe hearing loss.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
- Ryan W. McCreery
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
16
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. [PMID: 34177440 PMCID: PMC8219940 DOI: 10.3389/fnins.2021.581414] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 04/29/2021] [Indexed: 12/12/2022] Open
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
17
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446 PMCID: PMC8078811 DOI: 10.1371/journal.pone.0250281] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/01/2021] [Indexed: 11/30/2022] Open
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program, developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory versus unisensory as well as perceptual versus descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD-rendered objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual versus descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences.
Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
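As a concrete illustration of the visual-to-auditory mapping idea behind SSDs such as EyeMusic, the toy sketch below scans an image column by column, mapping row position to pitch and pixel brightness to loudness. This is a hypothetical simplification for illustration only: the published EyeMusic algorithm uses musical pitches and instrument timbres, and the frequency range, scan rate, and function name here are assumptions, not the device's actual parameters.

```python
import numpy as np

def image_to_audio(img, fs=16000, col_dur=0.05, f_lo=220.0, f_hi=1760.0):
    """Toy visual-to-auditory sensory-substitution mapping (illustrative only):
    the image is scanned left to right, each row maps to a pitch (higher rows
    give higher pitches), and pixel brightness sets that pitch's loudness."""
    n_rows, n_cols = img.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)  # top row = highest pitch
    t = np.arange(int(fs * col_dur)) / fs     # time axis for one column
    cols = []
    for c in range(n_cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)       # one sine per row
        cols.append((img[:, c, None] * tones).sum(axis=0))   # brightness-weighted mix
    audio = np.concatenate(cols)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio  # normalize to avoid clipping
```

A bright pixel in the top-left corner of the image would thus be heard as a loud high tone at the start of the sound sweep.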
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
18
The Development of a Paediatric Phoneme Discrimination Test for Arabic Phonemic Contrasts. Audiol Res 2021; 11:150-166. [PMID: 33917153 PMCID: PMC8167783 DOI: 10.3390/audiolres11020014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 02/12/2021] [Accepted: 03/30/2021] [Indexed: 11/16/2022] Open
Abstract
Objective: The aim of this project was to develop the Arabic CAPT (A-CAPT), a Standard Arabic version of the CHEAR auditory perception test (CAPT) that assesses consonant perception ability in children. Method: This closed-set test was evaluated with normal-hearing children aged 5 to 11 years. Development and validation of the speech materials were accomplished in two experimental phases. Twenty-six children participated in phase I, where the test materials were piloted to ensure that the selected words were age appropriate and that the form of Arabic used was familiar to the children. Sixteen children participated in phase II, where test-retest reliability, age effects, and critical differences were measured. A computerized implementation was used to present stimuli and collect responses. Children selected one of four response options displayed on a screen for each trial. Results: Two lists of 32 words were developed with two levels of difficulty, easy and hard. Assessment of test-retest reliability for the final version of the lists showed strong agreement. A within-subject ANOVA showed no significant difference between test and retest sessions. Performance improved with increasing age. Critical difference values were similar to those of the British English version of the CAPT. Conclusions: The A-CAPT is an appropriate speech perception test for assessing Arabic-speaking children as young as 5 years old. This test can reliably assess consonant perception ability and monitor changes over time or after an intervention.
19
Hall ML, Dills S. The Limits of "Communication Mode" as a Construct. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2020; 25:383-397. [PMID: 32432678 DOI: 10.1093/deafed/enaa009] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 02/06/2020] [Accepted: 02/19/2020] [Indexed: 06/11/2023]
Abstract
Questions about communication mode (a.k.a. "communication options" or "communication opportunities") remain among the most controversial issues in the many fields concerned with the development and well-being of children (and adults) who are d/Deaf or hard of hearing. In this manuscript, we argue that a large part of the reason this debate persists lies in limitations of the construct itself. We focus on what we term "the crucial question": namely, what kind of experience with linguistic input during infancy and toddlerhood is most likely to result in mastery of at least one language (spoken or signed) by school entry. We argue that the construct of communication mode, as currently construed, actively prevents the discovery of compelling answers to that question. To substantiate our argument, we present a review of a relevant subset of the recent empirical literature and document the prevalence of our concerns. We conclude by articulating the desiderata of an alternative construct that, if appropriately measured, would have the potential to yield answers to what we identify as "the crucial question."
20
Effects of Age at Auditory Brainstem Implantation: Impact on Auditory Perception, Language Development, Speech Intelligibility. Otol Neurotol 2020; 41:11-20. [PMID: 31789803 DOI: 10.1097/mao.0000000000002455] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To study the effect of age at auditory brainstem implant (ABI) surgery on auditory perception, language, and speech intelligibility. STUDY DESIGN Retrospective single cohort design. SETTING Tertiary referral center. PATIENTS Thirty pediatric ABI users with no significant developmental issues were included. Participants were divided into two groups according to age at surgery (Early Group: < 3 yr old [n = 15]; Late Group: ≥ 3 yr old [n = 15]). Groups were matched by duration of ABI use, and participants were evaluated after 5 years (±1 yr) of experience with their device. The mean age at ABI surgery was 22.27 ± 6.5 months in the Early Group and 45.53 ± 7.9 months in the Late Group. INTERVENTION(S) Retrosigmoid craniotomy and ABI placement. MAIN OUTCOME MEASURE(S) Auditory perception skills were evaluated using the Meaningful Auditory Integration Scale and the Categories of Auditory Performance from the Children's Auditory Perception Test Battery. We used a closed-set pattern perception subtest, a closed-set word identification subtest, and an open-set sentence recognition subtest. Language performance was assessed with the Test of Early Language Development and the Speech Intelligibility Rating, administered in a quiet room. RESULTS The Early Group's auditory perception performance was better than the Late Group's after 5 years of ABI use in children with no additional needs (U = 12, p < 0.001). Speech intelligibility was the most challenging skill to develop in both groups. Multiple regression analysis showed that auditory perception category can be estimated from speech intelligibility scores, pattern perception scores, receptive language scores, and age at ABI surgery in ABI users with no additional handicaps. CONCLUSIONS ABI is a viable option to provide auditory sensations for children with cochlear anomalies.
ABI surgery under age 3 is associated with improved auditory perception and language development compared with older users.
21
Kondaurova MV, Fagan MK, Zheng Q. Vocal imitation between mothers and their children with cochlear implants. INFANCY 2020; 25:827-850. [DOI: 10.1111/infa.12363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 06/12/2020] [Accepted: 07/13/2020] [Indexed: 01/11/2023]
Affiliation(s)
- Maria V. Kondaurova
- Department of Psychological & Brain Sciences, University of Louisville, Louisville, KY, USA
- Mary K. Fagan
- Department of Communication Sciences and Disorders, Chapman University, Orange, CA, USA
- Qi Zheng
- Department of Bioinformatics & Biostatistics, University of Louisville, Louisville, KY, USA
22
Chen CH, Monroy C, Houston DM, Yu C. Using head-mounted eye-trackers to study sensory-motor dynamics of coordinated attention. PROGRESS IN BRAIN RESEARCH 2020; 254:71-88. [PMID: 32859294 DOI: 10.1016/bs.pbr.2020.06.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
In this chapter, we introduce recent research using head-mounted eye-trackers to record sensory-motor behaviors at a high resolution and examine parent-child interactions at a micro-level. We focus on one important research topic in early social and cognitive development: how young children and their parents coordinate their visual attention in social interactions. We start by introducing head-mounted eye-tracking and recent studies conducted using this method. We then present two sets of novel analysis techniques that examine how manual actions of parents and children with and without hearing loss contribute to their attention coordination. In the first set of analyses, we investigated different pathways parents and children used to coordinate their visual attention in toy play. After that, we used Sankey diagrams to represent the temporal dynamics of parents' and children's manual actions prior to and during coordinated attention. These two sets of analyses allowed us to explore how participants' sensory-motor behaviors contribute to the establishment and maintenance of coordinated attention. More generally, head-mounted eye-tracking allows us to ask new questions and conduct new analyses that were not previously possible. With this new sensing technology, the results here highlight the importance of understanding early social interaction from a multimodal, embodied view.
Affiliation(s)
- Chi-Hsin Chen
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, Columbus, OH, United States.
- Claire Monroy
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, Columbus, OH, United States
- Derek M Houston
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, Columbus, OH, United States; Nationwide Children's Hospital, Columbus, OH, United States
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States
23
Chen CH, Castellanos I, Yu C, Houston DM. What leads to coordinated attention in parent-toddler interactions? Children's hearing status matters. Dev Sci 2020; 23:e12919. [PMID: 31680414 PMCID: PMC7160036 DOI: 10.1111/desc.12919] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Revised: 10/25/2019] [Accepted: 10/28/2019] [Indexed: 11/30/2022]
Abstract
Coordinated attention between children and their parents plays an important role in their social, language, and cognitive development. The current study used head-mounted eye-trackers to investigate the effects of children's prelingual hearing loss on how they achieve coordinated attention with their hearing parents during free-flowing object play. We found that toddlers with hearing loss (age: 24-37 months) had similar overall gaze patterns (e.g., gaze length and proportion of face looking) as their normal-hearing peers. In addition, children's hearing status did not affect how likely parents and children attended to the same object at the same time during play. However, when following parents' attention, children with hearing loss used both parents' gaze directions and hand actions as cues, whereas children with normal hearing mainly relied on parents' hand actions. The diversity of pathways leading to coordinated attention suggests the flexibility and robustness of developing systems in using multiple pathways to achieve the same functional end.
Affiliation(s)
- Chi-hsin Chen
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Irina Castellanos
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Nationwide Children’s Hospital, 700 Children’s Dr, Columbus, Ohio 43205
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10 Street, Bloomington, Indiana 47405
- Derek M. Houston
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Nationwide Children’s Hospital, 700 Children’s Dr, Columbus, Ohio 43205
24
Bayard C, Machart L, Strauß A, Gerber S, Aubanel V, Schwartz JL. Cued Speech Enhances Speech-in-Noise Perception. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2019; 24:223-233. [PMID: 30809665 DOI: 10.1093/deafed/enz003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 01/28/2019] [Accepted: 01/31/2019] [Indexed: 06/09/2023]
Abstract
Speech perception in noise remains challenging for Deaf/Hard of Hearing (D/HH) people, even when fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects mastering Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with that of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. D/HH participants required signal-to-noise ratios (SNRs) 11 dB higher than TH participants to obtain similar audiovisual scores. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and the Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
Affiliation(s)
- Antje Strauß
- Zukunftskolleg, FB Sprachwissenschaft, University of Konstanz
25
Mastrantuono E, Saldaña D, Rodríguez-Ortiz IR. Inferencing in Deaf Adolescents during Sign-Supported Speech Comprehension. DISCOURSE PROCESSES 2019. [DOI: 10.1080/0163853x.2018.1490133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Eliana Mastrantuono
- Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Sevilla, Spain
- David Saldaña
- Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Sevilla, Spain
26
Ritter C, Vongpaisal T. Multimodal and Spectral Degradation Effects on Speech and Emotion Recognition in Adult Listeners. Trends Hear 2019; 22:2331216518804966. [PMID: 30378469 PMCID: PMC6236866 DOI: 10.1177/2331216518804966] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
For cochlear implant (CI) users, degraded spectral input hampers the
understanding of prosodic vocal emotion, especially in difficult listening
conditions. Using a vocoder simulation of CI hearing, we examined the extent to
which informative multimodal cues in a talker’s spoken expressions improve
normal hearing (NH) adults’ speech and emotion perception under different levels
of spectral degradation (two, three, four, and eight spectral bands).
Participants repeated the words verbatim and identified emotions (among four
alternative options: happy, sad, angry, and neutral) in meaningful sentences
that are semantically congruent with the expression of the intended emotion.
Sentences were presented in their natural speech form and in speech sampled
through a noise-band vocoder in sound (auditory-only) and video
(auditory–visual) recordings of a female talker. Visual information had a more
pronounced benefit in enhancing speech recognition in the lower spectral band
conditions. Spectral degradation, however, did not interfere with emotion
recognition performance when dynamic visual cues in a talker's expression were
provided: participants scored at ceiling levels across all spectral band
conditions. Our use of familiar sentences containing congruent semantic and
prosodic information has high ecological validity, which likely optimized
listener performance under simulated CI hearing and may better predict CI users'
outcomes in everyday listening contexts.
Affiliation(s)
- Chantel Ritter
- Department of Psychology, MacEwan University, Alberta, Canada
- Tara Vongpaisal
- Department of Psychology, MacEwan University, Alberta, Canada
27
Support for parents of deaf children: Common questions and informed, evidence-based answers. Int J Pediatr Otorhinolaryngol 2019; 118:134-142. [PMID: 30623850 DOI: 10.1016/j.ijporl.2018.12.036] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Revised: 11/15/2018] [Accepted: 12/27/2018] [Indexed: 11/20/2022]
Abstract
To assist medical and hearing-science professionals in supporting parents of deaf children, we have identified common questions that parents may have and provide evidence-based answers. In doing so, a compassionate and positive narrative about deafness and deaf children is offered, one that relies on recent research evidence regarding the critical nature of early exposure to a fully accessible visual language, which in the United States is American Sign Language (ASL). This evidence includes the role of sign language in language acquisition, cognitive development, and literacy. In order for parents to provide a nurturing and anxiety-free environment for early childhood development, signing at home is important even if their child also has the additional nurturing and care of a signing community. It is not just the early years of a child's life that matter for language acquisition; it's the early months, the early weeks, even the early days. Deaf children cannot wait for accessible language input. The whole family must learn simultaneously as the deaf child learns. Even moderate fluency on the part of the family benefits the child enormously. And learning the sign language together can be one of the strongest bonding experiences that the family and deaf child have.
28
McDaniel J, Camarata S, Yoder P. Comparing Auditory-Only and Audiovisual Word Learning for Children With Hearing Loss. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2018; 23:382-398. [PMID: 29767759 PMCID: PMC6146754 DOI: 10.1093/deafed/eny016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 04/16/2018] [Accepted: 05/04/2018] [Indexed: 06/08/2023]
Abstract
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
29
Butera IM, Stevenson RA, Mangus BD, Woynaroski TG, Gifford RH, Wallace MT. Audiovisual Temporal Processing in Postlingually Deafened Adults with Cochlear Implants. Sci Rep 2018; 8:11345. [PMID: 30054512 PMCID: PMC6063927 DOI: 10.1038/s41598-018-29598-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2018] [Accepted: 07/09/2018] [Indexed: 11/17/2022] Open
Abstract
For many cochlear implant (CI) users, visual cues are vitally important for interpreting the impoverished auditory speech information that an implant conveys. Although the temporal relationship between auditory and visual stimuli is crucial for how this information is integrated, audiovisual temporal processing in CI users is poorly understood. In this study, we tested unisensory (auditory alone, visual alone) and multisensory (audiovisual) temporal processing in postlingually deafened CI users (n = 48) and normal-hearing controls (n = 54) using simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. We varied the timing onsets between the auditory and visual components of either a syllable/viseme or a simple flash/beep pairing, and participants indicated either which stimulus appeared first (TOJ) or whether the pair occurred simultaneously (SJ). Results indicate that temporal binding windows (the interval within which stimuli are likely to be perceptually 'bound') are not significantly different between groups for either speech or non-speech stimuli. However, the point of subjective simultaneity for speech was less visually leading in CI users, who, interestingly, also had improved visual-only TOJ thresholds. Further signal detection analysis suggests that this SJ shift may be due to greater visual bias within the CI group, perhaps reflecting heightened attentional allocation to visual cues.
Affiliation(s)
- Iliza M Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Brannon D Mangus
- Murfreesboro Medical Clinic and Surgicenter, Murfreesboro, TN, USA
- Tiffany G Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
30
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resulting from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as a role for length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing.
Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
31
Language and Sensory Neural Plasticity in the Superior Temporal Cortex of the Deaf. Neural Plast 2018; 2018:9456891. [PMID: 29853853 PMCID: PMC5954881 DOI: 10.1155/2018/9456891] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2018] [Accepted: 03/26/2018] [Indexed: 11/18/2022] Open
Abstract
Visual stimuli are known to activate the auditory cortex of deaf people, presenting evidence of cross-modal plasticity. However, the mechanisms underlying such plasticity are poorly understood. In this functional MRI study, we presented two types of visual stimuli, language stimuli (words, sign language, and lip-reading) and a general stimulus (checkerboard), to investigate neural reorganization in the superior temporal cortex (STC) of deaf subjects and hearing controls. We found that, in the deaf subjects only, all visual stimuli activated the STC. The cross-modal activation induced by the checkerboard was mainly due to a sensory component via a feed-forward pathway from the thalamus and primary visual cortex, positively correlated with duration of deafness, indicating a consequence of pure sensory deprivation. In contrast, the STC activity evoked by language stimuli was functionally connected to both the visual cortex and the frontotemporal areas, which were highly correlated with the learning of sign language, suggesting a strong language component via a possible feedback modulation. While the sensory component exhibited specificity to features of a visual stimulus (e.g., selective to the form of words, bodies, or faces) and the language (semantic) component appeared to recruit a common frontotemporal neural network, the two components converged in the STC and caused plasticity with different multivoxel activity patterns. In summary, the present study showed plausible neural pathways for auditory reorganization and correlations of activations of the reorganized cortical areas with developmental factors, providing unique evidence towards the understanding of neural circuits involved in cross-modal plasticity.
32
Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head. Ear Hear 2018; 39:503-516. [DOI: 10.1097/aud.0000000000000502] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
33
Yamamoto R, Naito Y, Tona R, Moroto S, Tamaya R, Fujiwara K, Shinohara S, Takebayashi S, Kikuchi M, Michida T. Audio-visual speech perception in prelingually deafened Japanese children following sequential bilateral cochlear implantation. Int J Pediatr Otorhinolaryngol 2017; 102:160-168. [PMID: 29106867 DOI: 10.1016/j.ijporl.2017.09.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 09/15/2017] [Accepted: 09/18/2017] [Indexed: 11/28/2022]
Abstract
OBJECTIVES An effect of audio-visual (AV) integration is observed when the auditory and visual stimuli are incongruent (the McGurk effect). In general, AV integration is especially helpful for subjects wearing hearing aids or cochlear implants (CIs). However, the influence of AV integration on spoken word recognition in individuals with bilateral CIs (Bi-CIs) has not been fully investigated so far. In this study, we investigated AV integration in children with Bi-CIs. METHODS The study sample included thirty-one prelingually deafened children who underwent sequential bilateral cochlear implantation. We assessed their responses to congruent and incongruent AV stimuli with three CI-listening modes: only the 1st CI, only the 2nd CI, and Bi-CIs. The responses were assessed in the whole group as well as in two sub-groups: a proficient group (syllable intelligibility ≥80% with the 1st CI) and a non-proficient group (syllable intelligibility <80% with the 1st CI). RESULTS We found evidence of the McGurk effect in each of the three CI-listening modes. AV integration responses were observed in a subset of incongruent AV stimuli, and the patterns observed with the 1st CI and with Bi-CIs were similar. In the proficient group, the responses with the 2nd CI were not significantly different from those with the 1st CI, whereas in the non-proficient group the responses with the 2nd CI were driven by visual stimuli more than those with the 1st CI. CONCLUSION Our results suggested that prelingually deafened Japanese children who underwent sequential bilateral cochlear implantation exhibit AV integration abilities in both monaural and binaural listening. We also observed a stronger influence of visual stimuli on speech perception with the 2nd CI in the non-proficient group, suggesting that Bi-CI listeners with poorer speech recognition rely more on visual information than proficient subjects to compensate for poorer auditory input.
Nevertheless, the poorer-quality auditory input from the 2nd CI did not interfere with AV integration during binaural listening (with Bi-CIs). Overall, the findings of this study might inform future research to identify the best strategies for using AV integration effectively in speech training for prelingually deafened children.
Affiliation(s)
- Ryosuke Yamamoto
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Yasushi Naito
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Risa Tona
- Department of Otolaryngology, Head and Neck Surgery, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Saburo Moroto
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Rinko Tamaya
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Keizo Fujiwara
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Shogo Shinohara
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Shinji Takebayashi
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Masahiro Kikuchi
- Department of Otolaryngology, Head and Neck Surgery, Kobe City Medical Center General Hospital, Kobe, Japan
- Tetsuhiko Michida
- Department of Otolaryngology, Institute of Biomedical Research and Innovation, Kobe, Japan
34
Pimperton H, Ralph-Lewis A, MacSweeney M. Speechreading in Deaf Adults with Cochlear Implants: Evidence for Perceptual Compensation. Front Psychol 2017; 8:106. [PMID: 28223951 PMCID: PMC5294775 DOI: 10.3389/fpsyg.2017.00106] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2016] [Accepted: 01/16/2017] [Indexed: 11/13/2022] Open
Abstract
Previous research has provided evidence for a speechreading advantage in congenitally deaf adults compared to hearing adults. A 'perceptual compensation' account of this finding proposes that prolonged early onset deafness leads to a greater reliance on visual, as opposed to auditory, information when perceiving speech, which in turn results in superior visual speech perception skills in deaf adults. In the current study we tested whether previous demonstrations of a speechreading advantage for profoundly congenitally deaf adults with hearing aids, or no amplification, were also apparent in adults with the same deafness profile but who have experienced greater access to the auditory elements of speech via a cochlear implant (CI). We also tested the prediction that, in line with the perceptual compensation account, receiving a CI at a later age is associated with superior speechreading skills due to later implanted individuals having experienced greater dependence on visual speech information. We designed a speechreading task in which participants viewed silent videos of 123 single words spoken by a model and were required to indicate which word they thought had been said via a free text response. We compared congenitally deaf adults who had received CIs in childhood or adolescence (N = 15) with a comparison group of hearing adults (N = 15) matched on age and education level. The adults with CI showed significantly better scores on the speechreading task than the hearing comparison group. Furthermore, within the group of adults with CI, there was a significant positive correlation between age at implantation and speechreading performance; earlier implantation was associated with lower speechreading scores.
These results are consistent with the hypothesis of perceptual compensation in the domain of speech perception, indicating that more prolonged dependence on visual speech information in speech perception may lead to improvements in the perception of visual speech. In addition, our study provides metrics of the 'speechreadability' of 123 words produced in British English: one derived from hearing adults (N = 61) and one from deaf adults with CI (N = 15). Evidence for the validity of these 'speechreadability' metrics comes from correlations with visual lexical competition data.
Affiliation(s)
- Hannah Pimperton
- Institute of Cognitive Neuroscience, University College London, London, UK
- Amelia Ralph-Lewis
- Institute of Cognitive Neuroscience, University College London, London, UK
- Mairéad MacSweeney
- Institute of Cognitive Neuroscience, University College London, London, UK; Deafness, Cognition and Language Centre, University College London, London, UK
35
Pavani F, Venturini M, Baruffaldi F, Artesini L, Bonfioli F, Frau GN, van Zoest W. Spatial and non-spatial multisensory cueing in unilateral cochlear implant users. Hear Res 2017; 344:24-37. [DOI: 10.1016/j.heares.2016.10.025] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Revised: 10/21/2016] [Accepted: 10/27/2016] [Indexed: 11/30/2022]
36
Oryadi-Zanjani MM, Vahab M, Rahimi Z, Mayahi A. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss. Int J Pediatr Otorhinolaryngol 2017; 93:167-171. [PMID: 28109491 DOI: 10.1016/j.ijporl.2016.12.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/16/2016] [Revised: 12/11/2016] [Accepted: 12/12/2016] [Indexed: 11/30/2022]
Abstract
OBJECTIVES It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech, and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. Thus, the aim of this study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. METHODS The research was administered as a cross-sectional study. The sample comprised 92 Persian-speaking 5-to-7-year-old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. RESULTS The scores on the sentence repetition task differed significantly between the V-only, A-only, and AV presentations in all three groups; the highest to lowest scores belonged, respectively, to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P = 0.000).
CONCLUSIONS According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss.
Affiliation(s)
- Mohammad Majid Oryadi-Zanjani
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran; Rehabilitation Sciences Research Center, Shiraz University of Medical Sciences, Shiraz, Iran.
- Maryam Vahab
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran; Rehabilitation Sciences Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Zahra Rahimi
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Anis Mayahi
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
37
Anderson CA, Lazard DS, Hartley DEH. Plasticity in bilateral superior temporal cortex: Effects of deafness and cochlear implantation on auditory and visual speech processing. Hear Res 2017; 343:138-149. [PMID: 27473501 DOI: 10.1016/j.heares.2016.07.013] [Citation(s) in RCA: 51] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Revised: 07/20/2016] [Accepted: 07/25/2016] [Indexed: 12/01/2022]
Abstract
While many individuals can benefit substantially from cochlear implantation, the ability to perceive and understand auditory speech with a cochlear implant (CI) remains highly variable amongst adult recipients. Importantly, auditory performance with a CI cannot be reliably predicted based solely on routinely obtained information regarding clinical characteristics of the CI candidate. This review argues that central factors, notably cortical function and plasticity, should also be considered as important contributors to the observed individual variability in CI outcome. Superior temporal cortex (STC), including auditory association areas, plays a crucial role in the processing of auditory and visual speech information. The current review considers evidence of cortical plasticity within bilateral STC, and how these effects may explain variability in CI outcome. Furthermore, evidence of audio-visual interactions in temporal and occipital cortices is examined, and relation to CI outcome is discussed. To date, longitudinal examination of changes in cortical function and plasticity over the period of rehabilitation with a CI has been restricted by methodological challenges. The application of functional near-infrared spectroscopy (fNIRS) in studying cortical function in CI users is becoming increasingly recognised as a potential solution to these problems. Here we suggest that fNIRS offers a powerful neuroimaging tool to elucidate the relationship between audio-visual interactions, cortical plasticity during deafness and following cochlear implantation, and individual variability in auditory performance with a CI.
Affiliation(s)
- Carly A Anderson
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, Ropewalk House, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom.
- Diane S Lazard
- Institut Arthur Vernes, ENT Surgery, Paris, 75006, France; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham, NG7 2UH, United Kingdom
- Douglas E H Hartley
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, Ropewalk House, 113 The Ropewalk, Nottingham, NG1 5DU, United Kingdom; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham, NG7 2UH, United Kingdom; Medical Research Council (MRC) Institute of Hearing Research, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
38
Glick H, Sharma A. Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications. Hear Res 2017; 343:191-201. [PMID: 27613397 PMCID: PMC6590524 DOI: 10.1016/j.heares.2016.08.012] [Citation(s) in RCA: 101] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Revised: 08/16/2016] [Accepted: 08/19/2016] [Indexed: 10/21/2022]
Abstract
This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon in which deprivation in one sensory modality (e.g., the auditory modality, as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g., visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent and average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention. Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing-impaired populations across the lifespan.
Affiliation(s)
- Hannah Glick
- Department of Speech, Language, & Hearing Science; Institute of Cognitive Science, University of Colorado at Boulder, 2501 Kittredge Loop Road, 409 UCB, Boulder, CO 80309, USA
- Anu Sharma
- Department of Speech, Language, & Hearing Science; Institute of Cognitive Science, University of Colorado at Boulder, 2501 Kittredge Loop Road, 409 UCB, Boulder, CO 80309, USA
39
Blom H, Marschark M, Machmer E. Simultaneous communication supports learning in noise by cochlear implant users. Cochlear Implants Int 2016; 18:49-56. [PMID: 28010675 DOI: 10.1080/14670100.2016.1265188] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
OBJECTIVES This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant (CI) users in noisy contexts. METHODS Forty-eight college students who were active CI users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 standardized open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. RESULTS When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. DISCUSSION Students who are CI users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those CI users who know sign language, the redundancy of speech and signs can potentially offset the reduced fidelity of spoken language in noise. CONCLUSION Accompanying spoken language with signs can benefit learners who are CI users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined.
Affiliation(s)
- Helen Blom
- Behavioural Science Institute, Radboud University and Royal Dutch Kentalis, P.O. Box 9104, Nijmegen 6500 HE, The Netherlands
- Marc Marschark
- Center for Education Research Partnerships, National Technical Institute for the Deaf, Rochester Institute of Technology, 52 Lomb Memorial Drive, Rochester, NY 14623, USA; School of Psychology, University of Aberdeen, Regent Walk, Aberdeen AB24 3FX, Scotland
- Elizabeth Machmer
- Center for Education Research Partnerships, National Technical Institute for the Deaf, Rochester Institute of Technology, 52 Lomb Memorial Drive, Rochester, NY 14623, USA

40
Atypical audiovisual word processing in school-age children with a history of specific language impairment: an event-related potential study. J Neurodev Disord 2016; 8:33. [PMID: 27597881 PMCID: PMC5011345 DOI: 10.1186/s11689-016-9168-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/24/2016] [Accepted: 08/17/2016] [Indexed: 11/12/2022] Open
Abstract
Background Visual speech cues influence different aspects of language acquisition. However, whether developmental language disorders may be associated with atypical processing of visual speech is unknown. In this study, we used behavioral and ERP measures to determine whether children with a history of specific language impairment (H-SLI) differ from their age-matched typically developing (TD) peers in the ability to match auditory words with corresponding silent visual articulations. Methods Nineteen 7–13-year-old H-SLI children and 19 age-matched TD children participated in the study. Children first heard a word and then saw a speaker silently articulating a word. In half of trials, the articulated word matched the auditory word (congruent trials), while in the other half, it did not (incongruent trials). Children specified whether the auditory and the articulated words matched. We examined ERPs elicited by the onset of visual stimuli (visual P1, N1, and P2) as well as ERPs elicited by the articulatory movements themselves—namely, N400 to incongruent articulations and late positive complex (LPC) to congruent articulations. We also examined whether ERP measures of visual speech processing could predict (1) children's linguistic skills and (2) the use of visual speech cues when listening to speech-in-noise (SIN). Results H-SLI children were less accurate in matching auditory words with visual articulations. They had a significantly reduced P1 to the talker's face and a smaller N400 to incongruent articulations. In contrast, congruent articulations elicited LPCs of similar amplitude in both groups of children. The P1 and N400 amplitudes were significantly correlated with accuracy enhancement on the SIN task when seeing the talker's face. Conclusions H-SLI children have poorly defined correspondences between speech sounds and the visually observed articulatory movements that produce them.
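The reported link between ERP amplitudes and audiovisual benefit on the speech-in-noise task amounts to a simple brain-behavior regression across children. A minimal sketch of that kind of analysis on entirely simulated values (the sample size of 19 matches the study, but the amplitude range, slope, and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: does N400 amplitude predict the audiovisual
# accuracy gain on a speech-in-noise (SIN) task? All values are simulated.
n400_amp = rng.uniform(1.0, 6.0, 19)                          # microvolts, 19 children
sin_benefit = 2.0 + 1.5 * n400_amp + rng.normal(0.0, 1.5, 19) # % accuracy gain

# Least-squares fit: sin_benefit ~ slope * n400_amp + intercept
slope, intercept = np.polyfit(n400_amp, sin_benefit, 1)
r = np.corrcoef(n400_amp, sin_benefit)[0, 1]                  # Pearson r
```

With data built this way, the fitted slope is positive and the correlation is strong; in the real analysis the same two numbers (slope and r, with a p-value) are what license the claim that larger N400s go with greater benefit from seeing the talker's face.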
41
Poliva O. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language. Front Neurosci 2016; 10:307. [PMID: 27445676 PMCID: PMC4928493 DOI: 10.3389/fnins.2016.00307] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Accepted: 06/17/2016] [Indexed: 11/24/2022] Open
Abstract
The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences).
42
Giezen MR, Escudero P, Baker AE. Rapid learning of minimally different words in five- to six-year-old children: effects of acoustic salience and hearing impairment. JOURNAL OF CHILD LANGUAGE 2016; 43:310-337. [PMID: 25994361 DOI: 10.1017/s0305000915000197] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs. The NH children also showed some difficulties, however, particularly in the picture-matching task. Vowel minimal pairs were learned more successfully than consonant minimal pairs, particularly in the object-matching task. These results suggest that the ability to encode phonetic detail in novel words is not fully developed at age six and is affected by task demands and acoustic salience. CI children experience persistent difficulties with accurately mapping sound contrasts to novel meanings, but seem to benefit from the relative acoustic salience of vowel sounds.
Affiliation(s)
- Marcel R Giezen
- Laboratory for Language and Cognitive Neuroscience, San Diego State University
- Anne E Baker
- University of Amsterdam, Department of Linguistics

43
Campbell J, Sharma A. Visual Cross-Modal Re-Organization in Children with Cochlear Implants. PLoS One 2016; 11:e0147793. [PMID: 26807850 PMCID: PMC4726603 DOI: 10.1371/journal.pone.0147793] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2015] [Accepted: 01/09/2016] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine if visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance. METHODS Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal hearing children and 14 cochlear-implanted children, aged 5-15 years, in response to apparent motion and form change. Comparisons of VEP amplitude and latency, as well as source localization results, were conducted between the groups in order to view evidence of visual cross-modal re-organization. Finally, speech perception in background noise performance was correlated to the visual response in the implanted children. RESULTS Distinct VEP morphological patterns were observed in both the normal hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latency, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. The VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants. CONCLUSION Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.
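The reported negative relationship between VEP N1 latency and speech perception in noise is, at bottom, a Pearson correlation across children. A minimal sketch with simulated data (14 hypothetical implanted children, matching the study's sample size; the latency range, slope, and noise level are invented, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated values only: 14 hypothetical children with cochlear implants.
n1_latency = rng.uniform(80.0, 150.0, 14)     # VEP N1 latency (ms)

# Assume earlier (smaller) N1 latency goes with better speech-in-noise
# scores, plus measurement noise -- the direction reported in the abstract.
sin_score = 100.0 - 0.4 * n1_latency + rng.normal(0.0, 3.0, 14)

r = np.corrcoef(n1_latency, sin_score)[0, 1]  # Pearson r, negative here
```

Because the simulated scores fall as latency rises, r comes out clearly negative, mirroring the direction of the study's finding; a real analysis would additionally report a significance test for r with n = 14.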
Affiliation(s)
- Julia Campbell
- Brain and Behavior Laboratory, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Institute of Cognitive Science, University of Colorado at Boulder, 344 UCB, Boulder, Colorado, 80309, United States of America
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Anu Sharma
- Brain and Behavior Laboratory, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Institute of Cognitive Science, University of Colorado at Boulder, 344 UCB, Boulder, Colorado, 80309, United States of America
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America

44
Shaw KE, Bortfeld H. Sources of Confusion in Infant Audiovisual Speech Perception Research. Front Psychol 2015; 6:1844. [PMID: 26696919 PMCID: PMC4678229 DOI: 10.3389/fpsyg.2015.01844] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2015] [Accepted: 11/13/2015] [Indexed: 12/01/2022] Open
Abstract
Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners (infants) are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding.
Affiliation(s)
- Kathleen E. Shaw
- Department of Psychology, University of Connecticut, Storrs, CT, USA
- Heather Bortfeld
- Psychological Sciences, University of California, Merced, Merced, CA, USA
- Haskins Laboratories, New Haven, CT, USA

45
Oryadi-Zanjani MM, Vahab M, Bazrafkan M, Haghjoo A. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss. Int J Pediatr Otorhinolaryngol 2015; 79:2424-7. [PMID: 26607564 DOI: 10.1016/j.ijporl.2015.11.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Revised: 11/01/2015] [Accepted: 11/03/2015] [Indexed: 10/22/2022]
Abstract
OBJECTIVES The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. DESIGN This research was administered as a cross-sectional study. The sample comprised 60 Persian-speaking children aged 5-7 years. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two presentation conditions: auditory-only and audiovisual. The test was a closed set of 30 words presented orally by a speech-language pathologist. RESULTS Audiovisual word perception scores were significantly higher than auditory-only scores in the children with normal hearing (P<0.01) and those with cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between word perception scores in the auditory-only and audiovisual presentation conditions (P>0.05). CONCLUSIONS Audiovisual spoken word recognition can be applied as a clinical criterion for assessing children with severe-to-profound hearing loss in order to determine whether a cochlear implant or hearing aid has been effective for them; i.e., if a child with hearing impairment using a CI or HA obtains higher scores in audiovisual spoken word recognition than in the auditory-only condition, his or her auditory skills have developed appropriately thanks to an effective CI or HA, one of the main factors in auditory habilitation.
Affiliation(s)
- Mohammad Majid Oryadi-Zanjani
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Maryam Vahab
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Mozhdeh Bazrafkan
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Asghar Haghjoo
- Department of Speech Therapy, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran

46
Strelnikov K, Rouger J, Lagleyre S, Fraysse B, Démonet JF, Déguine O, Barone P. Increased audiovisual integration in cochlear-implanted deaf patients: independent components analysis of longitudinal positron emission tomography data. Eur J Neurosci 2015; 41:677-85. [PMID: 25728184 DOI: 10.1111/ejn.12827] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Revised: 12/05/2014] [Accepted: 12/07/2014] [Indexed: 01/12/2023]
Abstract
It has been demonstrated in earlier studies that patients with a cochlear implant have increased abilities for audio-visual integration because the crude information transmitted by the cochlear implant requires the persistent use of the complementary speech information from the visual channel. The brain network for these abilities needs to be clarified. We used an independent components analysis (ICA) of the activation (H2(15)O) positron emission tomography data to explore occipito-temporal brain activity in post-lingually deaf patients with unilaterally implanted cochlear implants at several months post-implantation (T1), shortly after implantation (T0) and in normal hearing controls. In between-group analysis, patients at T1 had greater blood flow in the left middle temporal cortex as compared with T0 and normal hearing controls. In within-group analysis, patients at T0 had a task-related ICA component in the visual cortex, and patients at T1 had one task-related ICA component in the left middle temporal cortex and the other in the visual cortex. The time courses of temporal and visual activities during the positron emission tomography examination at T1 were highly correlated, meaning that synchronized integrative activity occurred. The greater involvement of the visual cortex and its close coupling with the temporal cortex at T1 confirm the importance of audio-visual integration in more experienced cochlear implant subjects at the cortical level.
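The ICA step in this study decomposes imaging time series into statistically independent components whose time courses can then be compared across regions. A minimal numpy-only sketch of symmetric FastICA on two simulated source time courses (standing in loosely for "temporal" and "visual" activity; the signals and the mixing matrix are invented, and real PET ICA operates on far higher-dimensional data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)

# Two simulated, non-Gaussian source time courses and their linear mixtures.
s1 = np.sin(2.0 * t)                 # smooth oscillation
s2 = np.sign(np.sin(3.0 * t))        # square wave
S = np.c_[s1, s2]
X = S @ np.array([[1.0, 0.5], [0.5, 1.0]]).T   # observed mixed "recordings"

# Center and whiten the mixtures (decorrelate, unit variance).
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ (E @ np.diag(d ** -0.5) @ E.T)

# Symmetric FastICA fixed-point iteration with a tanh nonlinearity:
#   W+ = E[g(WZ) Z^T] - E[g'(WZ)] W, then orthonormalize the rows.
W = rng.standard_normal((2, 2))
for _ in range(200):
    Y = Z @ W.T
    g, g_prime = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    W = (g.T @ Z) / len(Z) - np.diag(g_prime.mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                       # symmetric decorrelation

components = Z @ W.T                 # recovered sources (up to sign/order)
```

The recovered components match the original sources up to sign and permutation; in the study, the analogous component time courses from temporal and visual cortex are what get correlated to argue for synchronized integrative activity.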
Affiliation(s)
- K. Strelnikov
- Université Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
- Cerveau & Cognition, CNRS UMR 5549, Faculté de Médecine de Purpan, Pavillon Baudot, CHU Purpan, BP 25202, 31052 Toulouse, France
- Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- J. Rouger
- Université Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
- Cerveau & Cognition, CNRS UMR 5549, Faculté de Médecine de Purpan, Pavillon Baudot, CHU Purpan, BP 25202, 31052 Toulouse, France
- S. Lagleyre
- Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- B. Fraysse
- Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- J.-F. Démonet
- Clinical Neuroscience Department, Leenaards Memory Centre, University Hospital of Lausanne CHUV, University of Lausanne, Lausanne, Switzerland
- O. Déguine
- Université Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
- Cerveau & Cognition, CNRS UMR 5549, Faculté de Médecine de Purpan, Pavillon Baudot, CHU Purpan, BP 25202, 31052 Toulouse, France
- Service d'Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- P. Barone
- Université Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
- Cerveau & Cognition, CNRS UMR 5549, Faculté de Médecine de Purpan, Pavillon Baudot, CHU Purpan, BP 25202, 31052 Toulouse, France

47
Marschark M, Spencer LJ, Durkin A, Borgna G, Convertino C, Machmer E, Kronenberger WG, Trani A. Understanding Language, Hearing Status, and Visual-Spatial Skills. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2015; 20:310-330. [PMID: 26141071 PMCID: PMC4836709 DOI: 10.1093/deafed/env025] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2015] [Revised: 06/09/2015] [Accepted: 06/12/2015] [Indexed: 05/29/2023]
Abstract
It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups.
48
Pons F, Bosch L, Lewkowicz DJ. Bilingualism modulates infants' selective attention to the mouth of a talking face. Psychol Sci 2015; 26:490-8. [PMID: 25767208 PMCID: PMC4398611 DOI: 10.1177/0956797614568320] [Citation(s) in RCA: 106] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2014] [Accepted: 12/23/2014] [Indexed: 11/16/2022] Open
Abstract
Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants.
Affiliation(s)
- Ferran Pons
- Department of Basic Psychology, Universitat de Barcelona, Institute for Brain, Cognition and Behavior (IR3C), Barcelona, Spain
- Laura Bosch
- Department of Basic Psychology, Universitat de Barcelona, Institute for Brain, Cognition and Behavior (IR3C), Barcelona, Spain
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University

49
Campbell R, MacSweeney M, Woll B. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success. Front Hum Neurosci 2014; 8:834. [PMID: 25368567 PMCID: PMC4201085 DOI: 10.3389/fnhum.2014.00834] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2014] [Accepted: 09/30/2014] [Indexed: 11/13/2022] Open
Abstract
Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of a good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration.
Affiliation(s)
- Ruth Campbell
- Deafness Cognition and Language Research Centre, University College London, London, UK
- Mairéad MacSweeney
- Deafness Cognition and Language Research Centre, University College London, London, UK
- Institute of Cognitive Neuroscience, University College London, London, UK
- Bencie Woll
- Deafness Cognition and Language Research Centre, University College London, London, UK

50
Heimler B, Weisz N, Collignon O. Revisiting the adaptive and maladaptive effects of crossmodal plasticity. Neuroscience 2014; 283:44-63. [PMID: 25139761 DOI: 10.1016/j.neuroscience.2014.08.003] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2014] [Revised: 08/01/2014] [Accepted: 08/06/2014] [Indexed: 11/15/2022]
Abstract
One of the most striking demonstrations of experience-dependent plasticity comes from studies of sensory-deprived individuals (e.g., blind or deaf), showing that brain regions deprived of their natural inputs change their sensory tuning to support the processing of inputs coming from the spared senses. These mechanisms of crossmodal plasticity have been traditionally conceptualized as having a double-edged sword effect on behavior. On one side, crossmodal plasticity is conceived as adaptive for the development of enhanced behavioral skills in the remaining senses of early-deaf or blind individuals. On the other side, crossmodal plasticity raises crucial challenges for sensory restoration and is typically conceived as maladaptive since its presence may prevent optimal recovery in sensory-re-afferented individuals. In the present review we stress that this dichotomic vision is oversimplified and we emphasize that the notions of the unavoidable adaptive/maladaptive effects of crossmodal reorganization for sensory compensation/restoration may actually be misleading. For this purpose we critically review the findings from the blind and deaf literatures, highlighting the complementary nature of these two fields of research. The integrated framework we propose here has the potential to impact on the way rehabilitation programs for sensory recovery are carried out, with the promising prospect of eventually improving their final outcomes.
Affiliation(s)
- B Heimler
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- N Weisz
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- O Collignon
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy