1.
Higgins NC, Monjaras AG, Yerkes BD, Little DF, Nave-Blodgett JE, Elhilali M, Snyder JS. Resetting of Auditory and Visual Segregation Occurs After Transient Stimuli of the Same Modality. Front Psychol 2021; 12:720131. [PMID: 34621219; PMCID: PMC8490814; DOI: 10.3389/fpsyg.2021.720131]
Abstract
In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount, and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed. Both were designed to evoke alternating perception of an integrated or segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality from the bistable stimulus (visual distractors during auditory streaming or auditory distractors during visual streaming), the probability of percept switching was not significantly different than when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Given the modality specificity of the distractor-induced resetting, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors had no peripheral overlap with the bistable stimuli indicates that the perceptual reset is due to interference at a locus in which stimuli of different frequencies and spatial locations are integrated.
Affiliation(s)
- Nathan C Higgins
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- Ambar G Monjaras
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- Breanne D Yerkes
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- David F Little
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joel S Snyder
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
2.
Abstract
OBJECTIVE To assess the benefits of bimodal listening (i.e., addition of a contralateral hearing aid) for cochlear implant (CI) users on real-world tasks involving high-talker-variability speech materials, environmental sounds, and self-reported quality of life (quality of hearing) in listeners' own best-aided conditions. STUDY DESIGN Cross-sectional study between groups. SETTING Outpatient hearing clinic. PATIENTS Fifty experienced adult CI users divided into groups based on normal daily listening conditions (i.e., best-aided conditions): unilateral CI (CI), unilateral CI with contralateral hearing aid (bimodal listening; CIHA), or bilateral CI (CICI). INTERVENTION Task-specific measures of speech recognition with low (Harvard Standard Sentences) and high (Perceptually Robust English Sentence Test Open-set corpus) talker variability, environmental sound recognition (Familiar Environmental Sounds Test-Identification), and hearing-related quality of life (Nijmegen Cochlear Implant Questionnaire). MAIN OUTCOME MEASURES Test group differences among CI, CIHA, and CICI conditions. RESULTS No group effect was observed for speech recognition with low or high talker variability, or for hearing-related quality of life. Bimodal listeners demonstrated a benefit in environmental sound recognition compared with unilateral CI listeners, with a trend toward greater benefit than the bilateral CI group. There was also a trend toward benefit on high-talker-variability speech recognition. CONCLUSIONS Findings provide evidence that bimodal listeners demonstrate stronger environmental sound recognition than unilateral CI listeners, and support the idea that bimodal listening confers advantages beyond speech recognition measures, advantages that are at risk of being lost when bilateral implantation is considered.
3.
Patro C, Mendel LL. Semantic influences on the perception of degraded speech by individuals with cochlear implants. J Acoust Soc Am 2020; 147:1778. [PMID: 32237796; DOI: 10.1121/10.0000934]
Abstract
This study investigated whether speech intelligibility in cochlear implant (CI) users is affected by semantic context. Three groups participated in two experiments: two groups of listeners with normal hearing (NH) listened to either full-spectrum speech or vocoded speech, and one CI group listened to full-spectrum speech. Experiment 1 measured participants' sentence recognition as a function of target-to-masker ratio (four-talker babble masker), and experiment 2 measured perception of interrupted speech as a function of duty cycle (long/short uninterrupted speech segments). Listeners were presented with both semantically congruent and incongruent targets. Results from the two experiments suggested that NH listeners benefitted more from the semantic cues as the listening conditions became more challenging (lower signal-to-noise ratios and interrupted speech with longer silent intervals). However, the CI group received minimal benefit from context, and therefore performed poorly in such conditions. In contrast, in the less challenging conditions, CI users benefitted greatly from the semantic context, and NH listeners did not rely on such cues. The results also confirmed that this differential use of semantic cues appears to originate from the spectro-temporal degradations experienced by CI users, which could be a contributing factor in their poor performance in suboptimal environments.
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55414, USA
- Lisa Lucks Mendel
- School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
4.
Effects of Additional Low-Pass-Filtered Speech on Listening Effort for Noise-Band-Vocoded Speech in Quiet and in Noise. Ear Hear 2019; 40:3-17. [PMID: 29757801; PMCID: PMC6319586; DOI: 10.1097/aud.0000000000000587]
Abstract
Objectives: Residual acoustic hearing in electric–acoustic stimulation (EAS) can benefit cochlear implant (CI) users in increased sound quality, speech intelligibility, and improved tolerance to noise. The goal of this study was to investigate whether the low-pass–filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band–vocoded speech. Design: Listening effort was investigated using a dual-task paradigm as a behavioral measure, and the NASA Task Load indeX as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented in three experiments at three fixed intelligibility levels: at near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in the performance on the secondary, visual response time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass–filtered speech (to simulate EAS) or without (to simulate CI). Results: In Experiment 1, in quiet with intelligibility near-ceiling, additional low-pass–filtered speech reduced listening effort compared with binaural vocoder, in line with our expectations, although not compared with monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass–filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. It is interesting that this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, even a reduction in listening effort on top of the increased tolerance to noise was observed. The NASA Task Load indeX did not capture these differences. 
Conclusions: The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band–vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role.
5.
Patro C, Mendel LL. Gated Word Recognition by Postlingually Deafened Adults With Cochlear Implants: Influence of Semantic Context. J Speech Lang Hear Res 2018; 61:145-158. [PMID: 29242894; DOI: 10.1044/2017_jslhr-h-17-0141]
Abstract
PURPOSE The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. METHOD Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli and individuals with NH listened to full-spectrum or vocoder processed speech. IPs were determined for both groups who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. RESULTS The results indicated that spectrotemporal degradations impacted IPs for gated words adversely, and CI users as well as participants with NH listening to vocoded speech had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups regardless of the spectral composition of the target speech (full spectrum or vocoded). Finally, we showed that CI users (and users with NH with vocoded speech) can overcome such word processing difficulties with the help of semantic context and perform as well as listeners with NH. CONCLUSION Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation on the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
Affiliation(s)
- Lisa Lucks Mendel
- School of Communication Sciences & Disorders, University of Memphis, TN
6.
Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech. Ear Hear 2018; 37:582-92. [PMID: 27007220; DOI: 10.1097/aud.0000000000000298]
Abstract
OBJECTIVES Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. DESIGN Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. RESULTS Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. 
For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7% points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. CONCLUSIONS Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
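The temporal interruption described in this entry (square-wave gating at 5 Hz with a 50% duty cycle) recurs in several of the studies listed here. A minimal sketch of such gating, with parameter names of our own choosing rather than the authors' actual stimulus code, could look like this:

```python
import numpy as np

def square_wave_gate(signal, fs, rate_hz=5.0, duty=0.5):
    """Interrupt a signal by square-wave gating: within each gating
    cycle, the first `duty` fraction of samples is kept and the rest
    is silenced. Illustrative sketch, not any study's stimulus code."""
    t = np.arange(len(signal)) / fs
    phase = (t * rate_hz) % 1.0          # position within each gating cycle
    gate = (phase < duty).astype(float)  # 1 = speech kept, 0 = silent gap
    return signal * gate

fs = 16000
x = np.ones(fs)  # 1 s of a dummy "speech" signal
y = square_wave_gate(x, fs, rate_hz=5.0, duty=0.5)
# At 5 Hz with a 50% duty cycle, half of the samples survive the gate.
```

In experiments on phonemic restoration, the silent gaps produced by the zero portions of the gate are optionally refilled with speech-shaped or white noise.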
7.
Stilp C, Donaldson G, Oh S, Kong YY. Influences of noise-interruption and information-bearing acoustic changes on understanding simulated electric-acoustic speech. J Acoust Soc Am 2016; 140:3971. [PMID: 27908030; PMCID: PMC6909990; DOI: 10.1121/1.4967445]
Abstract
In simulations of electrical-acoustic stimulation (EAS), vocoded speech intelligibility is aided by preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSECI), but how listeners utilize these informational changes to understand EAS speech is unclear. Here, normal-hearing participants heard noise-vocoded sentences with three to six spectral channels in two conditions: vocoder-only (80-8000 Hz) and simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSECI or low-CSECI acoustic changes were replaced with speech-shaped noise. As expected, performance improved with the preservation of low-frequency fine-structure cues (EAS). This improvement decreased for continuous EAS sentences as more spectral channels were added, but increased as more channels were added to noise-interrupted EAS sentences. Performance was impaired more when high-CSECI intervals were replaced by noise than when low-CSECI intervals were replaced, but this pattern did not differ across listening modes. Utilizing information-bearing acoustic changes to understand speech is predicted to generalize to cochlear implant users who receive EAS inputs.
Affiliation(s)
- Christian Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
- Gail Donaldson
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Soohee Oh
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
8.
Patro C, Mendel LL. Role of contextual cues on the perception of spectrally reduced interrupted speech. J Acoust Soc Am 2016; 140:1336. [PMID: 27586760; DOI: 10.1121/1.4961450]
Abstract
Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal-hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs, making it difficult to utilize the contextual evidence effectively. To address these issues, 20 normal-hearing adults listened to speech that was spectrally reduced, and speech that was both spectrally reduced and interrupted, in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions, but became beneficial as the spectral resolution improved. These results suggest that top-down processing facilitates speech perception up to a point, and fails to facilitate speech understanding when the speech signals are significantly degraded.
Affiliation(s)
- Chhayakanta Patro
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee, 38152, USA
- Lisa Lucks Mendel
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee, 38152, USA
9.
Vermeulen A, Verschuur C. Robustness against distortion of fundamental frequency cues in simulated electro-acoustic hearing. J Acoust Soc Am 2016; 140:229. [PMID: 27475149; DOI: 10.1121/1.4954752]
Abstract
Speech recognition by cochlear implant users can be improved by adding an audible low-frequency acoustic signal to electrical hearing; the resulting improvement is deemed "electro-acoustic stimulation (EAS) benefit." However, a crucial low-frequency cue, fundamental frequency (F0), can be distorted by the impaired auditory system. In order to understand how F0 distortions may affect EAS benefit, normal-hearing listeners were presented monaurally with vocoded speech (frequencies >250 Hz) and an acoustic signal (frequencies <250 Hz) with differing manipulations of the F0 signal, specifically: a pure tone with the correct mean F0 but with smaller variations around this mean; a narrow band of white noise centered around F0, at varying bandwidths; or a pure tone down-shifted in frequency by 50 Hz but keeping the overall frequency modulations. Speech-recognition thresholds improved when tones with reduced frequency modulation were presented, and improved significantly for noise bands maintaining F0 information. A down-shifted tone, or a tone indicating only voicing, showed no EAS benefit. These results confirm that the presence of the target's F0 is beneficial for EAS hearing in a noisy environment, and they indicate that the benefit is robust to F0 distortion, as long as the mean F0 and the frequency modulations of F0 are preserved.
Affiliation(s)
- Arthur Vermeulen
- Hearing and Balance Centre, Institute of Sound and Vibration Research, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom
- Carl Verschuur
- University of Southampton, Auditory Implant Service, Highfield, Southampton SO17 1BJ, United Kingdom
10.
The Intelligibility of Interrupted Speech: Cochlear Implant Users and Normal Hearing Listeners. J Assoc Res Otolaryngol 2016; 17:475-91. [PMID: 27090115; PMCID: PMC5023536; DOI: 10.1007/s10162-016-0565-9]
Abstract
Compared with normal-hearing listeners, cochlear implant (CI) users display a loss of intelligibility of speech interrupted by silence or noise, possibly due to a reduced ability to integrate and restore speech glimpses across silence or noise intervals. The present study was conducted to establish the extent of the deficit typical CI users have in understanding interrupted high-context sentences as a function of a range of interruption rates (1.5 to 24 Hz) and duty cycles (50 and 75%). Further, factors such as the reduced signal quality of CI transmission and advanced age, as well as the potentially lower speech intelligibility of CI users even in the absence of any interruption manipulation, were explored by presenting young, as well as age-matched, normal-hearing (NH) listeners with full-spectrum and vocoded speech (eight-channel noise-band vocoded, and matched for baseline speech intelligibility). While the actual CI users had more difficulty than the eight-channel noise-band vocoded listeners in understanding interrupted speech and in taking advantage of faster interruption rates and increased duty cycle, their performance was similar to that of the intelligibility-matched noise-band vocoded listeners. These results suggest that while loss of spectro-temporal resolution indeed plays an important role in the reduced intelligibility of interrupted speech, this factor alone cannot entirely explain the deficit. Other factors associated with real CIs, such as aging or failure to transmit essential speech cues, seem to contribute additionally to the poor intelligibility of interrupted speech.
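The noise-band vocoding used in this entry, and throughout the studies above, to simulate CI listening for normal-hearing participants can be sketched roughly as follows. This is a minimal illustration with assumed band edges, filter order, and envelope extraction, not the processing chain of any particular study:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_band_vocode(signal, fs, n_channels=8, lo=80.0, hi=7000.0):
    """Minimal noise-band vocoder sketch: split the input into
    log-spaced analysis bands, extract each band's temporal envelope,
    and use it to modulate band-limited noise. Band edges and filter
    parameters here are illustrative assumptions."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, signal)                           # analysis band
        env = np.abs(hilbert(band))                           # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                                  # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 440 * t)  # stand-in for a speech signal
vocoded = noise_band_vocode(speech_like, fs)
```

Fewer channels mean coarser spectral resolution, which is how these studies vary the degree of simulated degradation.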
11.
Oh SH, Donaldson GS, Kong YY. The role of continuous low-frequency harmonicity cues for interrupted speech perception in bimodal hearing. J Acoust Soc Am 2016; 139:1747. [PMID: 27106322; PMCID: PMC4833731; DOI: 10.1121/1.4945747]
Abstract
Low-frequency acoustic cues have been shown to enhance speech perception by cochlear-implant users, particularly when target speech occurs in a competing background. The present study examined the extent to which a continuous representation of low-frequency harmonicity cues contributes to bimodal benefit in simulated bimodal listeners. Experiment 1 examined the benefit of restoring a continuous temporal envelope to the low-frequency ear while the vocoder ear received a temporally interrupted stimulus. Experiment 2 examined the effect of providing continuous harmonicity cues in the low-frequency ear as compared to restoring a continuous temporal envelope in the vocoder ear. Findings indicate that bimodal benefit for temporally interrupted speech increases when continuity is restored to either or both ears. The primary benefit appears to stem from the continuous temporal envelope in the low-frequency region providing additional phonetic cues related to manner and F1 frequency; a secondary contribution is provided by low-frequency harmonicity cues when a continuous representation of the temporal envelope is present in the low-frequency ear or in both ears. The continuous temporal envelope and harmonicity cues of low-frequency speech are thought to support bimodal benefit by facilitating identification of word and syllable boundaries, and by restoring partial phonetic cues that occur during gaps in the temporally interrupted stimulus.
Affiliation(s)
- Soo Hee Oh
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Gail S Donaldson
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
12.
Clarke J, Başkent D, Gaudrain E. Pitch and spectral resolution: A systematic comparison of bottom-up cues for top-down repair of degraded speech. J Acoust Soc Am 2016; 139:395-405. [PMID: 26827034; DOI: 10.1121/1.4939962]
Abstract
The brain is capable of restoring missing parts of speech, a top-down repair mechanism that enhances speech understanding in noisy environments. This enhancement can be quantified using the phonemic restoration paradigm, i.e., the improvement in intelligibility when silent interruptions of interrupted speech are filled with noise. Benefit from top-down repair of speech differs between cochlear implant (CI) users and normal-hearing (NH) listeners. This difference could be due to poorer spectral resolution and/or weaker pitch cues inherent to CI-transmitted speech. In CIs, those two degradations cannot be teased apart because spectral degradation leads to weaker pitch representation. A vocoding method was developed to evaluate independently the roles of pitch and spectral resolution for restoration in NH individuals. Sentences were resynthesized with different spectral resolutions and with the original pitch cues either retained or entirely discarded. The addition of pitch significantly improved restoration only at six-band spectral resolution. However, overall intelligibility of interrupted speech improved both with the addition of pitch and with the increase in spectral resolution. This improvement may be due to better discrimination of speech segments from the filler noise, better grouping of speech segments together, and/or better bottom-up cues available in the speech segments.
Affiliation(s)
- Jeanne Clarke
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
13.
The effect of visual cues on top-down restoration of temporally interrupted speech, with and without further degradations. Hear Res 2015; 328:24-33. [PMID: 26117407; DOI: 10.1016/j.heares.2015.06.013]
Abstract
In complex listening situations, cognitive restoration mechanisms are commonly used to enhance perception of degraded speech with inaudible segments. Profoundly hearing-impaired people with a cochlear implant (CI) show less benefit from such mechanisms. However, both normal-hearing (NH) listeners and CI users do benefit from visual speech cues in these listening situations. In this study we investigated whether an accompanying video of the speaker can enhance the intelligibility of interrupted sentences and the phonemic restoration benefit, measured by an increase in intelligibility when the silent intervals are filled with noise. Similar to previous studies, a restoration benefit was observed with interrupted speech without spectral degradations (Experiment 1), but was absent in acoustic simulations of CIs (Experiment 2) and was present again in simulations of electric-acoustic stimulation (Experiment 3). In all experiments, the additional speech information provided by the complementary visual cues led to higher overall intelligibility; however, these cues did not influence the occurrence or extent of the phonemic restoration benefit of filler noise. The results imply that visual cues do not show a synergistic effect with the filler noise, as adding them equally increased the intelligibility of interrupted sentences with or without the filler noise.
14
Kong YY, Donaldson G, Somarowthu A. Effects of contextual cues on speech recognition in simulated electric-acoustic stimulation. J Acoust Soc Am 2015; 137:2846-57. [PMID: 25994712 PMCID: PMC4441702 DOI: 10.1121/1.4919337] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Received: 12/05/2013] [Revised: 04/10/2015] [Accepted: 04/11/2015] [Indexed: 05/26/2023]
Abstract
Low-frequency acoustic cues have been shown to improve speech perception in cochlear-implant listeners. However, the mechanisms underlying this benefit are still not well understood. This study investigated the extent to which low-frequency cues can facilitate listeners' use of linguistic knowledge in simulated electric-acoustic stimulation (EAS). Experiment 1 examined differences in the magnitude of EAS benefit at the phoneme, word, and sentence levels. Speech materials were processed via noise-channel vocoding and lowpass (LP) filtering. The amount of spectral degradation in the vocoder speech was varied by applying different numbers of vocoder channels. Normal-hearing listeners were tested on vocoder-alone, LP-alone, and vocoder + LP conditions. Experiment 2 further examined factors that underlie the context effect on EAS benefit at the sentence level by limiting the low-frequency cues to temporal envelope and periodicity (AM + FM). Results showed that EAS benefit was greater for higher-context than for lower-context speech materials, even when the LP ear received only low-frequency AM + FM cues. Possible explanations for the greater EAS benefit observed with higher-context materials may lie in the interplay between perceptual and expectation-driven processes for EAS speech recognition, and/or the band-importance functions for different types of speech materials.
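A noise-band vocoder of the kind used for these CI simulations extracts each channel's temporal envelope and uses it to modulate a noise carrier. The following is a schematic single-channel sketch under simplifying assumptions (a first-order lowpass for envelope smoothing, uniform white noise as carrier); real vocoders filter the input into multiple analysis bands and band-limit the carriers, and the function names here are mine.

```python
import math
import random

def envelope(band, fs, cutoff_hz=30.0):
    """Temporal envelope: full-wave rectify, then smooth with a
    first-order lowpass (leaky integrator)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    env, y = [], 0.0
    for v in band:
        y += alpha * (abs(v) - y)   # leaky integrator tracks |band|
        env.append(y)
    return env

def vocode_channel(band, fs):
    """Discard a channel's fine structure: modulate a white-noise
    carrier with the channel's temporal envelope."""
    return [e * random.uniform(-1.0, 1.0) for e in envelope(band, fs)]
```

In an EAS simulation like the one described, the lowpass-filtered acoustic signal would simply be added to the sum of such vocoded channels.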
Affiliation(s)
- Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
- Gail Donaldson
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Ala Somarowthu
- Department of Bioengineering, Northeastern University, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
15
Stilp CE, Goupell MJ. Spectral and temporal resolutions of information-bearing acoustic changes for understanding vocoded sentences. J Acoust Soc Am 2015; 137:844-55. [PMID: 25698018 PMCID: PMC4336249 DOI: 10.1121/1.4906179] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Received: 05/23/2014] [Revised: 12/12/2014] [Accepted: 12/27/2014] [Indexed: 06/04/2023]
Abstract
Short-time spectral changes in the speech signal are important for understanding noise-vocoded sentences. These information-bearing acoustic changes, measured using cochlea-scaled entropy in cochlear implant simulations [CSECI; Stilp et al. (2013). J. Acoust. Soc. Am. 133(2), EL136-EL141; Stilp (2014). J. Acoust. Soc. Am. 135(3), 1518-1529], may offer better understanding of speech perception by cochlear implant (CI) users. However, the perceptual importance of CSECI for normal-hearing listeners had been tested at only one spectral resolution and one temporal resolution, limiting the generalizability of the results to CI users. Here, experiments investigated the importance of these informational changes for understanding noise-vocoded sentences at different spectral resolutions (4-24 spectral channels; Experiment 1), temporal resolutions (4-64 Hz cutoff for low-pass filters that extracted amplitude envelopes; Experiment 2), or when both parameters varied (6-12 channels, 8-32 Hz; Experiment 3). Sentence intelligibility was reduced more by replacing high-CSECI intervals with noise than by replacing low-CSECI intervals, but only when sentences had sufficient spectral and/or temporal resolution. High-CSECI intervals were more important for speech understanding as spectral resolution worsened and temporal resolution improved. Trade-offs between CSECI and intermediate spectral and temporal resolutions were minimal. These results suggest that signal processing strategies that emphasize information-bearing acoustic changes in speech may improve speech perception for CI users.
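Cochlea-scaled entropy in CI simulations (CSECI) is described in the companion paper (entry 22) as Euclidean distances between successive channel amplitude profiles. A toy sketch of that distance computation, with hypothetical function names, might look like:

```python
import math

def spectral_change(frames):
    """Frame-to-frame change: Euclidean distance between consecutive
    channel-amplitude profiles (one list of channel levels per frame)."""
    return [math.dist(a, b) for a, b in zip(frames, frames[1:])]

def high_change_frames(frames, k):
    """Indices of the k frame transitions carrying the most change."""
    d = spectral_change(frames)
    return sorted(range(len(d)), key=lambda i: d[i], reverse=True)[:k]
```

Replacing the intervals flagged by `high_change_frames` with noise corresponds to the manipulation these studies report as most damaging to intelligibility.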
Affiliation(s)
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
16
Ardoint M, Green T, Rosen S. The intelligibility of interrupted speech depends upon its uninterrupted intelligibility. J Acoust Soc Am 2014; 136:EL275-EL280. [PMID: 25324110 DOI: 10.1121/1.4895096] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 06/04/2023]
Abstract
Recognition of sentences containing periodic, 5-Hz, silent interruptions of differing duty cycles was assessed for three types of processed speech. Processing conditions employed different combinations of spectral resolution and the availability of fundamental frequency (F0) information, chosen to yield similar, below-ceiling performance for uninterrupted speech. Performance declined with decreasing duty cycle similarly for each processing condition, suggesting that, at least for certain forms of speech processing and interruption rates, performance with interrupted speech may reflect that obtained with uninterrupted speech. This highlights the difficulty in interpreting differences in interrupted speech performance across conditions for which uninterrupted performance is at ceiling.
Affiliation(s)
- Marine Ardoint
- Speech Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Tim Green
- Speech Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Stuart Rosen
- Speech Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
17
Benard MR, Başkent D. Perceptual learning of temporally interrupted spectrally degraded speech. J Acoust Soc Am 2014; 136:1344. [PMID: 25190407 DOI: 10.1121/1.4892756] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Indexed: 06/03/2023]
Abstract
Normal-hearing (NH) listeners make use of context, speech redundancy, and top-down linguistic processes to perceptually restore inaudible or masked portions of speech. Previous research has shown poorer perception and restoration of interrupted speech in cochlear-implant (CI) users and in NH listeners tested with acoustic simulations of CIs. Three hypotheses were investigated: (1) training with CI simulations of interrupted sentences can teach listeners to use the high-level restoration mechanisms more effectively, (2) phonemic restoration benefit, an increase in intelligibility of interrupted sentences once their silent gaps are filled with noise, can be induced with training, and (3) perceptual learning of interrupted sentences can be reflected in clinical speech audiometry. To test these hypotheses, NH listeners were trained using periodically interrupted sentences that were also spectrally degraded with a noise-band vocoder as a CI simulation. Feedback was presented by displaying the sentence text and playing back both the intact and the interrupted CI simulation of the sentence. Training induced no phonemic restoration benefit, and learning was not transferred to speech audiometry measured with words. However, a significant improvement was observed in overall intelligibility of interrupted spectrally degraded sentences, with or without filler noise, suggesting possibly better use of restoration mechanisms as a result of training.
Affiliation(s)
- Michel Ruben Benard
- Pento Audiology Center Zwolle, Oosterlaan 20, 8011 GC Zwolle, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
18
Nittrouer S, Tarr E, Bolster V, Caldwell-Tarr A, Moberly AC, Lowenstein JH. Low-frequency signals support perceptual organization of implant-simulated speech for adults and children. Int J Audiol 2014; 53:270-84. [PMID: 24456179 PMCID: PMC3954900 DOI: 10.3109/14992027.2013.871649] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Indexed: 11/13/2022]
Abstract
OBJECTIVE Using signals processed to simulate speech received through cochlear implants and low-frequency extended hearing aids, this study examined the proposal that low-frequency signals facilitate the perceptual organization of broader, spectrally degraded signals. DESIGN In two experiments, words and sentences were presented in diotic and dichotic configurations as four-channel noise-vocoded signals (VOC-only), and as those signals combined with the acoustic signal below 0.25 kHz (LOW-plus). Dependent measures were percent correct recognition, and the difference between scores for the two processing conditions given as proportions of recognition scores for VOC-only. The influence of linguistic context was also examined. STUDY SAMPLE Participants had normal hearing. In all, 40 adults, 40 seven-year-olds, and 20 five-year-olds participated. RESULTS Participants of all ages showed benefits of adding the low-frequency signal. The effect was greater for sentences than words, but no effect of diotic versus dichotic presentation was found. The influence of linguistic context was similar across age groups, and did not contribute to the low-frequency effect. Listeners who had poorer VOC-only scores showed greater low-frequency effects. CONCLUSION The benefit of adding a low-frequency signal to a broader, spectrally degraded signal derives in some part from its facilitative influence on perceptual organization of the sensory input.
Affiliation(s)
- Susan Nittrouer
- Department of Otolaryngology, The Ohio State University, Columbus, USA
19
Bhargava P, Gaudrain E, Başkent D. Top-down restoration of speech in cochlear-implant users. Hear Res 2014; 309:113-23. [DOI: 10.1016/j.heares.2013.12.003] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Received: 10/11/2013] [Revised: 11/21/2013] [Accepted: 12/12/2013] [Indexed: 10/25/2022]
20
Fogerty D. Acoustic predictors of intelligibility for segmentally interrupted speech: temporal envelope, voicing, and duration. J Speech Lang Hear Res 2013; 56:1402-8. [PMID: 23838986 PMCID: PMC4064467 DOI: 10.1044/1092-4388(2013/12-0203)] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Indexed: 05/16/2023]
Abstract
PURPOSE Temporal interruption limits the perception of speech to isolated temporal glimpses. An analysis was conducted to determine the acoustic parameter that best predicts speech recognition from temporal fragments that preserve different types of speech information, namely consonants and vowels. METHOD Young listeners with normal hearing previously completed word and sentence recognition tasks that required them to repeat word and sentence material that was temporally interrupted. Interruptions were designed to replace various portions of consonants or vowels with low-level noise. Acoustic analysis of preserved consonant and vowel segments was conducted to investigate the role of the preserved temporal envelope, voicing, and speech duration in predicting performance. RESULTS Results demonstrate that the temporal envelope, predominantly from vowels, is most important for sentence recognition and largely predicts results across consonant and vowel conditions. In contrast, for isolated words the proportion of speech preserved was the best predictor of performance, regardless of whether glimpses were from consonants or vowels. CONCLUSION These findings suggest consideration of the vowel temporal envelope in speech transmission and amplification technologies for improving the intelligibility of temporally interrupted sentences.
21
Abstract
The intelligibility of periodically interrupted speech improves once the silent gaps are filled with noise bursts. This improvement has been attributed to phonemic restoration, a top-down repair mechanism that helps the intelligibility of degraded speech in daily life. Two hypotheses were investigated using perceptual learning of interrupted speech. If different cognitive processes played a role in restoring interrupted speech with and without filler noise, the two forms of speech would be learned at different rates and with different perceived mental effort. If the restoration benefit were an artificial outcome of using the ecologically invalid stimulus of speech with silent gaps, this benefit would diminish with training. Two groups of normal-hearing listeners were trained, one with interrupted sentences with the filler noise and the other without. Feedback was provided with the auditory playback of the unprocessed and processed sentences, as well as the visual display of the sentence text. Training increased overall performance significantly; however, the restoration benefit did not diminish. The increase in intelligibility and the decrease in perceived mental effort were relatively similar between the groups, implying similar cognitive mechanisms for the restoration of the two types of interruptions. Training effects were generalizable, as both groups also improved with the form of speech they were not trained on, and were retained. Due to the null results and the relatively small number of participants (10 per group), further research is needed to draw conclusions more confidently. Nevertheless, training with interrupted speech seems to be effective, stimulating participants to use top-down restoration more actively and efficiently. This finding further implies the potential of this training approach as a rehabilitative tool for hearing-impaired and elderly populations.
22
Stilp CE, Goupell MJ, Kluender KR. Speech perception in simulated electric hearing exploits information-bearing acoustic change. J Acoust Soc Am 2013; 133:EL136-EL141. [PMID: 23363194 PMCID: PMC3562329 DOI: 10.1121/1.4776773] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Received: 10/26/2012] [Revised: 12/12/2012] [Indexed: 05/28/2023]
Abstract
Stilp and Kluender [(2010). Proc. Natl. Acad. Sci. U.S.A. 107(27), 12387-12392] reported that measures of sensory change over time (cochlea-scaled spectral entropy, CSE) reliably predict sentence intelligibility for normal-hearing listeners. Here, implications for listeners with atypical hearing were explored using noise-vocoded speech. CSE was parameterized as Euclidean distances between biologically scaled spectra [measured before sentences were noise vocoded (CSE)] or between channel amplitude profiles in simulated cochlear-implant processing [measured after vocoding (CSE(CI))]. Sentence intelligibility worsened with greater amounts of information replaced by noise; patterns of performance did not differ between CSE and CSE(CI). Results demonstrate the importance of information-bearing change for speech perception in simulated electric hearing.
Affiliation(s)
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
23
Abstract
Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
24
Effect of speech degradation on top-down repair: phonemic restoration with simulations of cochlear implants and combined electric-acoustic stimulation. J Assoc Res Otolaryngol 2012; 13:683-92. [PMID: 22569838 PMCID: PMC3441953 DOI: 10.1007/s10162-012-0334-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Received: 10/31/2011] [Accepted: 04/24/2012] [Indexed: 11/11/2022]
Abstract
The brain, using expectations, linguistic knowledge, and context, can perceptually restore inaudible portions of speech. Such top-down repair is thought to enhance speech intelligibility in noisy environments. Hearing-impaired listeners with cochlear implants commonly complain about not understanding speech in noise. We hypothesized that the degradations in the bottom-up speech signals due to the implant signal processing may have a negative effect on the top-down repair mechanisms, which could partially be responsible for this complaint. To test the hypothesis, phonemic restoration of interrupted sentences was measured with young normal-hearing listeners using a noise-band vocoder simulation of implant processing. Decreasing the spectral resolution (by reducing the number of vocoder processing channels from 32 to 4) systematically degraded the speech stimuli. Supporting the hypothesis, the size of the restoration benefit varied as a function of spectral resolution. A significant benefit was observed only at the highest spectral resolution of 32 channels. With eight channels, which resembles the resolution available to most implant users, there was no significant restoration effect. Combined electric–acoustic hearing has been previously shown to provide better intelligibility of speech in adverse listening environments. In a second configuration, combined electric–acoustic hearing was simulated by adding low-pass-filtered acoustic speech to the vocoder processing. There was a slight improvement in phonemic restoration compared to the first configuration; the restoration benefit was observed at spectral resolutions of both 16 and 32 channels. However, the restoration was not observed at lower spectral resolutions (four or eight channels). Overall, the findings imply that the degradations in the bottom-up signals alone (such as occurs in cochlear implants) may reduce the top-down restoration of speech.
25
Seldran F, Micheyl C, Truy E, Berger-Vachon C, Thai-Van H, Gallego S. A model-based analysis of the "combined-stimulation advantage". Hear Res 2011; 282:252-64. [PMID: 21801823 DOI: 10.1016/j.heares.2011.06.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Received: 05/20/2011] [Revised: 06/19/2011] [Accepted: 06/20/2011] [Indexed: 10/17/2022]
Abstract
Improvements in speech-recognition performance resulting from the addition of low-frequency information to electric (or vocoded) signals have attracted considerable interest in recent years. An important question is whether these improvements reflect a form of constructive perceptual interaction, whereby acoustic cues enhance the perception of electric or vocoded signals, or whether they can be explained without assuming any interaction. To address this question, speech-recognition performance was measured in 24 normal-hearing listeners using lowpass-filtered, vocoded, and "combined" (lowpass + vocoded) words presented either in quiet or in a realistic background (cafeteria noise), for different signal-to-noise ratios, different lowpass-filter cutoff frequencies, and different numbers of vocoder bands. The results of these measures were then compared to the predictions of three models of cue combination: a "probability-summation" model and two Gaussian signal detection theory (SDT) models, one (the "independent-noises" model) involving pre-combination noises and the other (the "late-noise" model) involving post-combination noise. Consistent with previous findings, speech-recognition performance with combined stimulation was significantly higher than performance with vocoded or lowpass stimuli alone, and it was also higher than predicted by the probability-summation model. The two Gaussian-SDT models could account quantitatively for the data. Moreover, a Bayesian model-comparison procedure demonstrated that, given the data, these two models were far more likely than the probability-summation model. Since these models do not involve any constructive-interaction mechanism, this demonstrates that constructive interactions are not needed to explain the combined-stimulation benefits measured in this study. It will be important for future studies to investigate whether this conclusion generalizes to other test conditions, including real EAS, and to further test the assumptions of these different models of the combined-stimulation advantage.
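The two model families compared here have simple textbook forms, sketched below. These are the standard formulations, not necessarily the paper's exact parameterizations: probability summation predicts a correct response whenever either cue alone suffices, while a Gaussian-SDT model with independent pre-combination noises combines sensitivities in quadrature.

```python
import math

def probability_summation(p_a, p_b):
    """Predicted proportion correct if the two cues are used independently
    and a response is correct whenever either cue alone would succeed."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

def dprime_combination(d_a, d_b):
    """Gaussian-SDT combination with independent pre-combination noises:
    the combined sensitivity (d') is the quadrature sum of the two."""
    return math.sqrt(d_a * d_a + d_b * d_b)
```

Combined-stimulation scores above the `probability_summation` prediction are the pattern the study reports, and the quadrature-sum family is one way such super-additive-looking benefits can arise without any constructive interaction.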
Affiliation(s)
- Fabien Seldran
- INSERM U1028, Lyon Neuroscience Research Center, PACS Team (Speech, Audiology, Communication Health), Lyon F-69000, France