1
Alemi R, Wolfe J, Neumann S, Manning J, Towler W, Koirala N, Gracco VL, Deroche M. Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS. Brain Res Bull 2023; 205:110817. PMID: 37989460; DOI: 10.1016/j.brainresbull.2023.110817.
Abstract
Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions, in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition to exhibit good language and literacy.
Affiliation(s)
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Jace Wolfe
- Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Will Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Mickael Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
2
Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test. Brain Sci 2022; 12:591. PMID: 35624979; PMCID: PMC9139407; DOI: 10.3390/brainsci12050591.
Abstract
Multisensory integration is the capacity to merge information from different sensory modalities in order to improve the salience of a signal. Audiovisual integration is one of the most common forms of multisensory integration, as vision and hearing are the two senses humans use most frequently. However, the literature on the effect of age-related hearing loss (presbycusis) on audiovisual integration abilities is almost nonexistent, despite the growing prevalence of presbycusis in the population. In that context, this study assessed the relationship between presbycusis and audiovisual integration using tests of saccade and vergence eye movements to visual vs. audiovisual targets, with a pure tone as the auditory signal. Tests were run with the REMOBI and AIDEAL technologies coupled with the Pupil Core eye tracker. Hearing abilities, eye movement characteristics (latency, peak velocity, average velocity, amplitude) for saccade and vergence eye movements, and the Stroop Victoria test were measured in 69 elderly and 30 young participants. The results indicated (i) a dual pattern of aging effects on audiovisual integration for convergence (a decrease in the aged group relative to the young one, but an increase with age within the elderly group) and (ii) an improvement of audiovisual integration for saccades in people with presbycusis associated with lower selective-attention scores on the Stroop test, regardless of age. These results bring new insight into a largely unexplored topic: audio-visuomotor integration in normal aging and in presbycusis. They highlight the potential interest of using eye movement targets in 3D space and pure-tone sounds to objectively evaluate audio-visuomotor integration capacities.
3
Lasfargues-Delannoy A, Strelnikov K, Deguine O, Marx M, Barone P. Supra-normal skills in processing of visuo-auditory prosodic information by cochlear-implanted deaf patients. Hear Res 2021; 410:108330. PMID: 34492444; DOI: 10.1016/j.heares.2021.108330.
Abstract
Cochlear-implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension, fusing speech-reading with their deficient auditory perception. However, little is known about how CI patients perceive prosodic information related to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process the paralinguistic prosodic information of multimodal speech, and which visual strategies they employ. A psychophysical assessment was developed in which CI patients and hearing controls (NH) had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared to controls. This study confirmed that prosodic processing is multisensory, but it also revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared to hearing controls, irrespective of age. CI patients had a visuo-auditory gain more than three times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory condition through a specific oculomotor exploration of the face: they fixated the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern divided equally between the eyes and mouth. To conclude, our study demonstrated that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that they develop a specific adaptive strategy that participates directly in speech content comprehension.
Affiliation(s)
- Anne Lasfargues-Delannoy
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Kuzma Strelnikov
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, France
- Olivier Deguine
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Mathieu Marx
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, France
- Pascal Barone
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France
4
Tahmasebi S, Gajȩcki T, Nogueira W. Design and Evaluation of a Real-Time Audio Source Separation Algorithm to Remix Music for Cochlear Implant Users. Front Neurosci 2020; 14:434. PMID: 32508564; PMCID: PMC7248365; DOI: 10.3389/fnins.2020.00434.
Abstract
A cochlear implant (CI) is a surgically implanted electronic device that partially restores hearing to people suffering from profound hearing loss. Although CI users, in general, obtain very good reception of continuous speech in the absence of background noise, they face severe limitations in music perception and appreciation. The main reasons for these limitations are channel interactions created by the broad spread of electrical fields in the cochlea and the low number of electrodes stimulating it. Moreover, CIs have severe limitations when it comes to transmitting the temporal fine structure of acoustic signals, and hence these devices elicit poor pitch and timbre perception. For these reasons, several signal processing algorithms have been proposed to make music more accessible to CI users, trying to reduce the complexity of music signals or remixing them to enhance certain components, such as the lead singing voice. In this work, a deep neural network that performs real-time audio source separation to remix music for CI users is presented. The implementation is based on a multi-layer perceptron (MLP) and was evaluated using objective instrumental measurements to ensure clean source estimation. Furthermore, experiments were conducted in 10 normal-hearing (NH) and 13 CI users to investigate how the vocals-to-instruments ratio (VIR) set by the tested listeners was affected in realistic environments with and without visual information. The objective instrumental results meet the benchmark reported in previous studies, introducing only distortions that are shown not to be perceived by CI users. Moreover, the implemented model was optimized to perform real-time source separation. The experimental results show that CI users prefer vocals enhanced by 8 dB with respect to the instruments, independent of acoustic sound scenario and visual information. In contrast, NH listeners did not prefer a VIR different from 0 dB.
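The preferred 8 dB vocals-to-instruments ratio corresponds to a simple linear amplitude gain applied at remix time. As a minimal sketch of that final remixing step (the synthetic stems below are illustrative; this is not the paper's MLP separation front end):

```python
import numpy as np

def remix(vocals: np.ndarray, instruments: np.ndarray, vir_db: float) -> np.ndarray:
    """Remix already-separated stems with a vocals-to-instruments ratio in dB."""
    gain = 10.0 ** (vir_db / 20.0)  # dB to linear amplitude gain for the vocal stem
    return gain * vocals + instruments

# Illustrative one-second mono stems at 16 kHz (placeholder sine tones):
sr = 16000
t = np.arange(sr) / sr
vocals = 0.1 * np.sin(2 * np.pi * 220 * t)
instruments = 0.1 * np.sin(2 * np.pi * 110 * t)
mix = remix(vocals, instruments, vir_db=8.0)  # vocals boosted by 8 dB
```

An 8 dB VIR thus scales the vocal stem by roughly a factor of 2.5 in amplitude before summing, which is the kind of user-controlled remix the tested listeners adjusted.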
Affiliation(s)
- Sina Tahmasebi
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence "Hearing4all", Hanover, Germany
- Tom Gajȩcki
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence "Hearing4all", Hanover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence "Hearing4all", Hanover, Germany
5
Auditory Selective Attention Hindered by Visual Stimulus in Prelingually Deaf Children With Cochlear Implants. Otol Neurotol 2019; 40:e542-e547. PMID: 31083094; DOI: 10.1097/mao.0000000000002169.
Abstract
OBJECTIVE: To compare the influence of visual distractors on auditory selective attention between prelingually deaf children with a cochlear implant (CI) and children with normal hearing. DESIGN: Twenty-two patients with a cochlear implant (10 males and 12 females, aged 6.64 ± 0.99 yr) and 16 normal-hearing children (6 males and 10 females, aged 6.09 ± 0.51 yr) were recruited. Half of the auditory stimuli were presented together with visual stimuli, and participants were required to complete an auditory identification task. Reaction times and discriminability (d') were recorded and calculated for both groups. RESULTS: The normal-hearing group had shorter mean reaction times than the CI group in detecting auditory targets. With visual distraction, the d' of the normal-hearing group was significantly better than that of the CI group (t = 2.649, p = 0.012), while no statistically significant difference was found between the two groups without visual distraction (t = 0.693, p = 0.493). CONCLUSION: Enhanced processing of visual stimuli interferes with auditory perception in CI users by occupying capacity-limited attention.
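The discriminability index d' reported above is the standard signal-detection measure, z(hit rate) - z(false-alarm rate). As a quick reference, a minimal sketch with hypothetical hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection discriminability: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration only:
print(round(d_prime(0.80, 0.20), 3))  # higher d' = better discrimination
```

With equal hit and false-alarm rates d' is zero (no sensitivity), which is why the between-group comparison of d' isolates perceptual discriminability from response bias.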
6
Cartocci G, Maglione AG, Vecchiato G, Modica E, Rossi D, Malerba P, Marsella P, Scorpecci A, Giannantonio S, Mosca F, Leone CA, Grassia R, Babiloni F. Frontal brain asymmetries as effective parameters to assess the quality of audiovisual stimuli perception in adult and young cochlear implant users. Acta Otorhinolaryngol Ital 2019; 38:346-360. PMID: 30197426; PMCID: PMC6146571; DOI: 10.14639/0392-100x-1407.
Abstract
How is music perceived by cochlear implant (CI) users? This question arises as “the next step” given the impressive performance obtained by these patients in language perception. Furthermore, how can music perception be evaluated beyond self-report ratings, in order to obtain measurable data? To address this question, estimation of the frontal electroencephalographic (EEG) alpha activity imbalance, acquired through a 19-channel EEG cap, appears to be a suitable instrument to measure the approach/withdrawal (AW index) reaction to external stimuli. Specifically, a greater AW value indicates an increased propensity to approach the stimulus, and conversely a lower one a tendency to withdraw from it. Additionally, because children and adults typically acquire deafness prelingually and postlingually, respectively, the two groups would probably differ in music perception. The aim of the present study was to investigate child and adult CI users, in unilateral (UCI) and bilateral (BCI) implantation conditions, during three experimental conditions of music exposure (normal, distorted and mute). Additionally, a study of functional connectivity patterns within cerebral networks was performed to investigate functioning patterns in the different experimental populations. As a general result, congruency was seen between the patterns of BCI patients and control (CTRL) subjects, characterised by the lowest values for the distorted condition (vs. normal and mute conditions) in the AW index and in the connectivity analysis. Additionally, the normal and distorted conditions differed significantly in CI and CTRL adults, and in CTRL children, but not in CI children. These results suggest a higher capacity for discrimination and approach motivation towards normal music in CTRL and BCI subjects, but not in UCI patients. Therefore, in music perception CTRL and BCI participants appear more similar to each other than to UCI subjects, as estimated by measurable rather than self-reported parameters.
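Frontal alpha-asymmetry indices such as the AW index are commonly computed as a log-ratio of alpha-band power between homologous right and left frontal sites. A minimal sketch under that common convention (the function name, channel pairing and power values are illustrative assumptions, not taken from the study):

```python
import math

def aw_index(alpha_power_right: float, alpha_power_left: float) -> float:
    """Common frontal asymmetry convention: ln(right alpha) - ln(left alpha).
    Because alpha power is inversely related to cortical activation, a larger
    value indicates relatively greater left-frontal activation (approach),
    and a smaller value relatively greater right-frontal activation (withdrawal)."""
    return math.log(alpha_power_right) - math.log(alpha_power_left)

# Illustrative alpha-band powers from a homologous pair (e.g., F4 vs. F3):
print(aw_index(2.0, 1.0) > 0)  # relatively greater left-frontal activation
```

In practice such an index would be computed per stimulus condition (normal, distorted, mute) and compared across groups, as in the AW analysis described above.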
Affiliation(s)
- G Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Italy. These authors equally contributed to the present article
- A G Maglione
- BrainSigns Srl, Rome, Italy. These authors equally contributed to the present article
- G Vecchiato
- Department of Molecular Medicine, Sapienza University of Rome, Italy
- E Modica
- Department of Anatomical, Histological, Forensic & Orthopedic Sciences, Sapienza University of Rome, Italy
- D Rossi
- Department of Anatomical, Histological, Forensic & Orthopedic Sciences, Sapienza University of Rome, Italy
- P Malerba
- Cochlear Italia Srl, Bologna, Italy
- P Marsella
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- A Scorpecci
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- S Giannantonio
- Department of Otorhinolaryngology, Audiology and Otology Unit, "Bambino Gesù" Pediatric Hospital, Rome, Italy
- F Mosca
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- C A Leone
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- R Grassia
- ENT Department, Azienda Ospedaliera Dei Colli Monaldi, Naples, Italy
- F Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Italy; BrainSigns Srl, Rome, Italy
7
Wang L, Tsao Y, Chen F. Congruent Visual Stimulation Facilitates Auditory Frequency Change Detection: An ERP Study. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:2446-2449. PMID: 30440902; DOI: 10.1109/embc.2018.8512835.
Abstract
Exploring effective methods to improve frequency change detection for normal-hearing listeners and hearing-impaired patients is important for enhancing their auditory perception, particularly in noise. This work studied the effect of congruent visual stimulation on facilitating auditory frequency change detection. Specifically, an event-related potential (ERP) experiment was designed to investigate the functional mechanism underlying audiovisual integration. Subjects were stimulated in three modalities, i.e., auditory-only, visual-only, and audiovisual. ERP components (e.g., N1 and P2) were compared among the three modalities. Results showed that congruent visual stimulation significantly improved the perceptual ability to detect auditory frequency changes. Compared with the two unimodal conditions, the audiovisual modality yielded larger N1 and P2 amplitudes. This work provides neurophysiological evidence that auditory frequency change detection can be facilitated by congruent visual stimulation.
8
Stropahl M, Chen LC, Debener S. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations. Hear Res 2017; 343:128-137. DOI: 10.1016/j.heares.2016.07.005.
9
Fengler I, Nava E, Röder B. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments. Front Integr Neurosci 2015; 9:31. PMID: 25954166; PMCID: PMC4406062; DOI: 10.3389/fnint.2015.00031.
Abstract
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at a 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated (i.e., tactile) to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which appears to persist over longer durations.
Affiliation(s)
- Ineke Fengler
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, Institute for Psychology, University of Hamburg, Hamburg, Germany
- Elena Nava
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, Institute for Psychology, University of Hamburg, Hamburg, Germany; Department of Psychology, University of Milan-Bicocca, Milan, Italy; NeuroMI Milan Center for Neuroscience, Milan, Italy
- Brigitte Röder
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, Institute for Psychology, University of Hamburg, Hamburg, Germany
10
Age-related hearing loss increases cross-modal distractibility. Hear Res 2014; 316:28-36. DOI: 10.1016/j.heares.2014.07.005.
11
Landry SP, Guillemot JP, Champoux F. Audiotactile interaction can change over time in cochlear implant users. Front Hum Neurosci 2014; 8:316. PMID: 24904359; PMCID: PMC4033126; DOI: 10.3389/fnhum.2014.00316.
Abstract
Recent results suggest that audiotactile interactions are disturbed in cochlear implant (CI) users. However, further exploration of the factors responsible for such abnormal sensory processing is still required. Considering the temporal nature of a previously used multisensory task, it remains unclear whether the aberrant results were caused by the specificity of the interaction studied or whether they reflect an overall abnormal interaction. Moreover, although duration of experience with a CI has often been linked with the recovery of auditory functions, its impact on multisensory performance remains uncertain. In the present study, we used the parchment-skin illusion, a robust illustration of sound-biased perception of touch based on changes in auditory frequencies, to investigate the specificities of audiotactile interactions in CI users. Whereas individuals with relatively little experience with the CI performed similarly to the control group, experienced CI users showed a significantly greater illusory percept. The overall results suggest that despite being able to ignore auditory distractors in a temporal audiotactile task, CI users come to be greatly influenced by auditory input in a spectral audiotactile task. Considered alongside the existing body of research, these results confirm that normal sensory interaction processing can be compromised in CI users.
Affiliation(s)
- Simon P Landry
- Centre de Recherche en Neuropsychologie Expérimentale et Cognition, Université de Montréal, Montréal, QC, Canada; Département de Kinanthropologie, Université du Québec à Montréal, Montréal, QC, Canada
- Jean-Paul Guillemot
- Centre de Recherche en Neuropsychologie Expérimentale et Cognition, Université de Montréal, Montréal, QC, Canada; Département de Kinanthropologie, Université du Québec à Montréal, Montréal, QC, Canada
- François Champoux
- Centre de Recherche en Neuropsychologie Expérimentale et Cognition, Université de Montréal, Montréal, QC, Canada; Institut Raymond-Dewar, Centre de Recherche Interdisciplinaire en Réadaptation du Montréal Métropolitain, Montréal, QC, Canada; École d'Orthophonie et d'Audiologie, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada
12
Bayard C, Colin C, Leybaert J. How is the McGurk effect modulated by Cued Speech in deaf and hearing adults? Front Psychol 2014; 5:416. PMID: 24904451; PMCID: PMC4032946; DOI: 10.3389/fpsyg.2014.00416.
Abstract
Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. In order to disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people clearly and completely understand speech visually (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous; perceivers therefore have to combine both types of information in order to form one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial or manual) CS users primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-reading /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/) and other (when the response was something other than /pa/, /ka/ or /ta/). Data were collected from hearing-impaired individuals who were experts in CS (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were experts in CS (N = 14) and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.
Affiliation(s)
- Clémence Bayard
- Center for Research in Cognition and Neurosciences, Université Libre de Bruxelles, Brussels, Belgium