1
Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024;15:1345619. PMID: 38375107; PMCID: PMC10875108; DOI: 10.3389/fpsyg.2024.1345619.
Abstract
Introduction: Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing.
Methods: We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits.
Results: We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition.
Discussion: These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
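Several entries in this listing quantify processing effort via the task-evoked pupil response. As a hedged illustration of the conventional analysis behind phrases like "peak pupil dilation" (baseline correction followed by peak extraction), the sketch below uses window values that are assumptions for illustration, not this study's actual parameters:

```python
import numpy as np

def peak_pupil_dilation(trace, times, baseline=(-0.5, 0.0)):
    """Baseline-correct a single-trial pupil trace and return its peak dilation.

    trace: pupil-size samples (arbitrary units); times: sample times in seconds
    relative to stimulus onset. The baseline window is an illustrative
    assumption, not the paper's exact analysis window.
    """
    trace = np.asarray(trace, dtype=float)
    times = np.asarray(times, dtype=float)
    # Mean pupil size in the pre-stimulus window serves as the baseline.
    base = trace[(times >= baseline[0]) & (times < baseline[1])].mean()
    corrected = trace - base              # dilation relative to baseline
    post = corrected[times >= 0.0]        # response after stimulus onset
    return corrected, post.max()          # full corrected trace and peak dilation
```

A trial's peak is then typically averaged across trials and compared between conditions (e.g., babble vs. steady-state noise).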
Affiliation(s)
- Laura P. Fitzgerald
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Gayle DeDe
- Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Jing Shen
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
2
Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023;14:1232262. PMID: 38023001; PMCID: PMC10646334; DOI: 10.3389/fpsyg.2023.1232262.
Abstract
Introduction: The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception.
Methods: Pupillometric traces were recorded from a sample of 22 Danish-speaking adults with self-reported normal hearing while they performed two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder.
Results: No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks.
Conclusion: Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
Affiliation(s)
- Julia S. C. Chiossi
- Oticon A/S, Smørum, Denmark
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Elaine Hoi Ning Ng
- Oticon A/S, Smørum, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lyxell
- Department of Special Needs Education, University of Oslo, Oslo, Norway
3
Shen J, Heller Murray E, Kulick ER. The Effect of Breathy Vocal Quality on Speech Intelligibility and Listening Effort in Background Noise. Trends Hear 2023;27:23312165231206925. PMID: 37817666; PMCID: PMC10566269; DOI: 10.1177/23312165231206925.
Abstract
Speech perception is challenging under adverse conditions. However, there is limited evidence regarding how multiple adverse conditions affect speech perception. The present study investigated two conditions that are frequently encountered in real-life communication: background noise and breathy vocal quality. The study first examined the effects of background noise and breathiness on speech perception as measured by intelligibility. Second, the study tested the hypothesis that both noise and breathiness affect listening effort, as indicated by linear and nonlinear changes in pupil dilation. Low-context sentences were resynthesized to create three levels of breathiness (original, mild-moderate, and severe). The sentences were presented in a fluctuating nonspeech noise at two signal-to-noise ratios (SNRs): -5 dB (favorable) and -9 dB (adverse). Speech intelligibility and pupil dilation data were collected from young listeners with normal hearing thresholds. The results demonstrated that a breathy vocal quality presented in noise negatively affected speech intelligibility, with the degree of breathiness playing a critical role. Listening effort, as measured by the magnitude of pupil dilation, showed significant effects with both severe and mild-moderate breathy voices that were independent of noise level. The findings contributed to the literature by demonstrating the impact of vocal quality on the perception of speech in noise. They also highlighted the complex dynamics between overall task demand and processing resources in understanding the combined impact of multiple adverse conditions.
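The favorable and adverse conditions in this abstract are defined by signal-to-noise ratio. As a minimal sketch of how a masker can be scaled to hit a target SNR (a standard approach to stimulus mixing, not this study's actual calibration procedure, which may have fixed the speech level in dB SPL):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the mixture. Illustrative only.
    """
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p_speech = np.mean(speech ** 2)   # average speech power
    p_noise = np.mean(noise ** 2)     # average (unscaled) noise power
    # Target noise power satisfies p_speech / p_target = 10^(snr_db / 10).
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```

Lower (more negative) values of `snr_db`, such as -9 dB here, correspond to proportionally more masker power relative to the speech.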
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, USA
- Elizabeth Heller Murray
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, USA
- Erin R. Kulick
- Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA, USA
4
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022;151:3866. PMID: 35778214; DOI: 10.1121/10.0011509.
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
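The review's concern with detecting small longitudinal changes reduces to a question of measurement error. One standard psychometric formulation, not taken from the review itself, uses the standard error of measurement (SEM) and the resulting minimal detectable change (MDC):

```python
import math

def minimal_detectable_change(sd, icc, confidence_z=1.96):
    """Smallest change exceeding test-retest measurement error.

    sd: between-subject standard deviation of the measure; icc: test-retest
    reliability coefficient. Standard formulas (illustrative, not from this
    review):
        SEM   = sd * sqrt(1 - icc)
        MDC95 = z * sqrt(2) * SEM
    """
    sem = sd * math.sqrt(1.0 - icc)          # error in a single measurement
    return confidence_z * math.sqrt(2.0) * sem  # error in a difference score
```

For hypothetical numbers, a speech-reception-threshold measure with a between-subject SD of 2 dB and test-retest ICC of 0.9 gives an MDC95 of roughly 1.75 dB; true changes smaller than that are indistinguishable from retest noise, which is the core obstacle the review describes.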
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
5
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021;43:836-848. PMID: 34623112; DOI: 10.1097/aud.0000000000001137.
Abstract
Objectives: Understanding speech in noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear whether decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases, and (2) listening effort increases as context decreases.
Design: Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy.
Results: Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than for those high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low.
Conclusions: These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
6
Colby S, McMurray B. Cognitive and Physiological Measures of Listening Effort During Degraded Speech Perception: Relating Dual-Task and Pupillometry Paradigms. J Speech Lang Hear Res 2021;64:3627-3652. PMID: 34491779; PMCID: PMC8642090; DOI: 10.1044/2021_jslhr-20-00583.
Abstract
Purpose: Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy.
Method: In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of participants on both tasks in Experiment 3 (n = 30).
Results: In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks, and neither task predicted speech accuracy.
Conclusions: Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort. Supplemental Material: https://doi.org/10.23641/asha.16455900.
Affiliation(s)
- Sarah Colby
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
- Bob McMurray
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
7
Koelewijn T, Gaudrain E, Tamati T, Başkent D. The effects of lexical content, acoustic and linguistic variability, and vocoding on voice cue perception. J Acoust Soc Am 2021;150:1620. PMID: 34598602; DOI: 10.1121/10.0005938.
Abstract
Perceptual differences in voice cues, such as fundamental frequency (F0) and vocal tract length (VTL), can facilitate speech understanding in challenging conditions. Yet, we hypothesized that in the presence of spectrotemporal signal degradations, as imposed by cochlear implants (CIs) and vocoders, acoustic cues that overlap for voice perception and phonemic categorization could be mistaken for one another, leading to a strong interaction between linguistic and indexical (talker-specific) content. Fifteen normal-hearing participants performed an odd-one-out adaptive task measuring just-noticeable differences (JNDs) in F0 and VTL. Items used were words (lexical content) or time-reversed words (no lexical content). The use of lexical content was either promoted (by using variable items across comparison intervals) or not (fixed item). Finally, stimuli were presented without or with vocoding. Results showed that JNDs for both F0 and VTL were significantly smaller (better) for non-vocoded compared with vocoded speech and for fixed compared with variable items. Lexical content (forward vs reversed) affected VTL JNDs in the variable item condition, but F0 JNDs only in the non-vocoded, fixed condition. In conclusion, lexical content had a positive top-down effect on VTL perception when acoustic and linguistic variability was present but not on F0 perception. Lexical advantage persisted in the most degraded conditions and vocoding even enhanced the effect of item variability, suggesting that linguistic content could support compensation for poor voice perception in CI users.
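The just-noticeable differences reported here come from an adaptive odd-one-out task. The abstract does not specify the tracking rule, so the following is purely an illustrative sketch of one common choice, a 2-down/1-up staircase, which converges near the 70.7%-correct point of the psychometric function (Levitt, 1971):

```python
def two_down_one_up(delta, correct, state, factor=1.41):
    """One update of a 2-down/1-up adaptive rule on a ratio scale.

    delta: current stimulus difference (e.g., F0 difference in semitones);
    correct: whether the listener picked the odd one out; state: dict holding
    the running correct streak. Step factor is an illustrative assumption.
    """
    if correct:
        state["streak"] += 1
        if state["streak"] == 2:      # two consecutive correct -> make harder
            state["streak"] = 0
            return delta / factor
        return delta
    state["streak"] = 0               # any error -> make easier
    return delta * factor
```

In practice the JND is then estimated by averaging `delta` over the last several reversals of the track rather than taking its final value.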
Affiliation(s)
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- CNRS Unité Mixte de Recherche 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Institut National de la Santé et de la Recherche Médicale, UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Terrin Tamati
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, The Ohio State University, Columbus, Ohio, USA
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
8
Morett LM, Roche JM, Fraundorf SH, McPartland JC. Contrast Is in the Eye of the Beholder: Infelicitous Beat Gesture Increases Cognitive Load During Online Spoken Discourse Comprehension. Cogn Sci 2021;44:e12912. PMID: 33073404; DOI: 10.1111/cogs.12912.
Abstract
We investigated how two cues to contrast, beat gesture and contrastive pitch accenting, affect comprehenders' cognitive load during processing of spoken referring expressions. In two visual-world experiments, we orthogonally manipulated the presence of these cues and their felicity, or fit, with the local (sentence-level) referential context in critical referring expressions while comprehenders' task-evoked pupillary responses (TEPRs) were examined. In Experiment 1, beat gesture and contrastive accenting always matched the referential context of filler referring expressions and were therefore relatively felicitous on the global (experiment) level, whereas in Experiment 2, beat gesture and contrastive accenting never fit the referential context of filler referring expressions and were therefore infelicitous on the global level. The results revealed that both beat gesture and contrastive accenting increased comprehenders' cognitive load. For beat gesture, this increase in cognitive load was driven by both local and global infelicity. For contrastive accenting, this increase in cognitive load was unaffected when cues were globally felicitous but exacerbated when cues were globally infelicitous. Together, these results suggest that comprehenders' cognitive resources are taxed by processing infelicitous use of beat gesture and contrastive accenting to convey contrast on both the local and global levels.
Affiliation(s)
- Laura M Morett
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, University of Alabama
- Jennifer M Roche
- Department of Speech Pathology and Audiology, Kent State University
- Scott H Fraundorf
- Department of Psychology, Learning Research and Development Center, University of Pittsburgh
9
Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021;142:296-316. PMID: 34332197; DOI: 10.1016/j.cortex.2021.06.007.
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Affiliation(s)
- Brennan R Payne
- Department of Psychology, University of Utah, USA
- Interdepartmental Neuroscience Program, University of Utah, USA
10
Kaplan EC, Wagner AE, Toffanin P, Başkent D. Do Musicians and Non-musicians Differ in Speech-on-Speech Processing? Front Psychol 2021;12:623787. PMID: 33679539; PMCID: PMC7931613; DOI: 10.3389/fpsyg.2021.623787.
Abstract
Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (sentence recall task), which reveals a post-task response, and online measures of real-time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words' images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, while the time course dynamics did differ between groups as the two-talker masker was added to the target signal. As the level of the two-talker masker increased, musicians showed reduced lexical competition as indicated by the gaze fixations to the competitor. The pupil dilation data showed differences mainly in one target-to-masker ratio. This does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or sound processing.
Affiliation(s)
- Elif Canseza Kaplan
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Anita E Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Paolo Toffanin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
11
Camero R, Martínez V, Gallego C. Gaze Following and Pupil Dilation as Early Diagnostic Markers of Autism in Toddlers. Children (Basel) 2021;8:113. PMID: 33562656; PMCID: PMC7914719; DOI: 10.3390/children8020113.
Abstract
Background: Children with autism spectrum disorder (ASD) show certain characteristics in visual attention. These may generate differences with non-autistic children in the integration of relevant social information that sets the basis of communication. Reliable and objective measurement of these characteristics in a language-learning context could contribute to a more accurate early diagnosis of ASD. Gaze following and pupil dilation are being studied as possible reliable measures of visual attention for the early detection of ASD, and eye-tracking methodology allows objective measurement of these biomarkers. The aim of this study was to determine whether measurements of gaze following and pupillary dilation in a linguistic interaction task are potential objective biomarkers for the early diagnosis of ASD.
Method: A group of 20 children between 17 and 24 months of age, made up of 10 neurotypical (NT) children and 10 children with an increased likelihood of developing ASD, were paired according to chronological age. A human face on a monitor pronounced pseudowords associated with pseudo-objects. Gaze following and pupil dilation were registered during the task. These measurements were captured using eye-tracking methodology.
Results: Significant statistical differences were found in the time of gaze fixation on the human face and on the object, as well as in the number of gazes. Children with an increased likelihood of developing ASD showed slightly higher pupil dilation than NT children; however, this difference was not statistically significant. Nevertheless, their pupil dilation was uniform throughout the different periods of the task, while NT participants showed greater dilation on hearing the pseudoword.
Conclusions: The fixation and duration of gaze, objectively measured by a Tobii eye-tracking system, could be considered potential biomarkers for early detection of ASD. Additionally, pupil dilation measurement could reflect differential activation patterns during word processing in toddlers with likely ASD and NT toddlers.
Affiliation(s)
- Raquel Camero
- Department of Psychology, University of Oviedo, 33003 Oviedo, Spain
- Verónica Martínez
- Department of Psychology, University of Oviedo, 33003 Oviedo, Spain
- Carlos Gallego
- Department of Experimental Psychology, Cognitive Processes and Speech Therapy, Complutense University of Madrid, 28223 Madrid, Spain
12
Russo FY, Hoen M, Karoui C, Demarcy T, Ardoint M, Tuset MP, De Seta D, Sterkers O, Lahlou G, Mosnier I. Pupillometry Assessment of Speech Recognition and Listening Experience in Adult Cochlear Implant Patients. Front Neurosci 2020;14:556675. PMID: 33240035; PMCID: PMC7677588; DOI: 10.3389/fnins.2020.556675.
Abstract
Objective The aim of the present study was to investigate the pupillary response to word identification in cochlear implant (CI) patients. The authors hypothesized that when task difficulty (i.e., addition of background noise) increased, pupil dilation markers such as the peak dilation or the latency of the peak dilation would increase in CI users, as already observed in normal-hearing and hearing-impaired subjects. Methods Pupillometric measures in 10 CI patients were combined with standard speech recognition scores used to evaluate CI outcomes, namely, speech audiometry in quiet and in noise at +10 dB signal-to-noise ratio (SNR). The main outcome measures of pupillometry were mean pupil dilation, maximal pupil dilation, dilation latency, and mean dilation during return to baseline or retention interval. Subjective hearing quality was evaluated by means of a self-reported fatigue questionnaire and the Speech, Spatial, and Qualities (SSQ) of Hearing scale. Results All pupil dilation data were transformed to percent change in event-related pupil dilation (ERPD, %). Analyses show that the peak amplitudes for both mean pupil dilation and maximal pupil dilation were higher during the speech-in-noise test. Mean peak dilation was measured at 3.47 ± 2.29% in noise vs. 2.19 ± 2.46% in quiet, and the maximal peak value was detected at 9.17 ± 3.25% in noise vs. 8.72 ± 2.93% in quiet. Concerning the questionnaires, the mean pupil dilation during the retention interval was significantly correlated with the spatial subscale score of the SSQ Hearing scale [r(8) = −0.84, p = 0.0023] and with the global score [r(8) = −0.78, p = 0.0018]. Conclusion The analysis of pupillometric traces, obtained during speech audiometry in quiet and in noise in CI users, provided interesting information about the different processes engaged in this task.
Pupillometric measures could be indicative of listening difficulty and phoneme intelligibility, and were correlated with general hearing experience as evaluated by the SSQ of Hearing scale. These preliminary results show that pupillometry constitutes a promising tool for improving objective quantification of CI performance in clinical settings.
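The percent-change ERPD transformation this abstract describes (dilation expressed relative to a pre-stimulus baseline) can be sketched as follows. This is a minimal illustration under the assumption of a simple mean-baseline window; the function name, window, and data are hypothetical, not the authors' implementation.

```python
import numpy as np

def event_related_pupil_dilation(trace, baseline_window):
    """Convert a raw pupil-diameter trace to event-related pupil
    dilation (ERPD) as percent change from a pre-stimulus baseline:
        ERPD(t) = 100 * (d(t) - baseline) / baseline
    `trace` is a 1-D array of pupil diameters; `baseline_window` is a
    (start, stop) index pair marking the pre-stimulus samples."""
    start, stop = baseline_window
    baseline = np.mean(trace[start:stop])
    return 100.0 * (np.asarray(trace, dtype=float) - baseline) / baseline

# Toy trace that rises 10% above its 4.0 mm baseline after stimulus onset
trace = np.array([4.0, 4.0, 4.0, 4.2, 4.4])
erpd = event_related_pupil_dilation(trace, baseline_window=(0, 3))
peak = erpd.max()  # maximal ERPD in percent (here 10.0)
```

Peak and mean statistics like those reported above would then be computed over `erpd` within a post-stimulus analysis window.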
Affiliation(s)
- Francesca Yoshie Russo
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Maria-Pia Tuset
- Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Daniele De Seta
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Olivier Sterkers
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Ghizlène Lahlou
- INSERM U1120 Génétique et Physiologie de l'Audition, Paris, France; APHP Sorbonne Université, Service ORL, GH Pitié Salpêtrière, Paris, France
- Isabelle Mosnier
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
13
Pals C, Sarampalis A, Beynon A, Stainsby T, Başkent D. Effect of Spectral Channels on Speech Recognition, Comprehension, and Listening Effort in Cochlear-Implant Users. Trends Hear 2020; 24:2331216520904617. [PMID: 32189585 PMCID: PMC7082863 DOI: 10.1177/2331216520904617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
In favorable listening conditions, cochlear-implant (CI) users can reach high speech recognition scores with as few as seven active electrodes. Here, we hypothesized that even when speech recognition is high, additional spectral channels may still benefit other aspects of speech perception, such as comprehension and listening effort. Twenty-five adult, postlingually deafened CI users, selected from two Dutch implant centers for high clinical word identification scores, participated in two experiments. Experimental conditions were created by varying the number of active electrodes of the CIs between 7 and 15. In Experiment 1, response times (RTs) on the secondary task in a dual-task paradigm were used as an indirect measure of listening effort, and in Experiment 2, sentence verification task (SVT) accuracy and RTs were used to measure speech comprehension and listening effort, respectively. Speech recognition was near ceiling for all conditions tested, as intended by the design. However, the dual-task paradigm failed to show the hypothesized decrease in RTs with increasing spectral channels. The SVT did show a systematic improvement in both speech comprehension and response speed across all conditions. In conclusion, the SVT revealed additional benefits in both speech comprehension and listening effort for conditions in which high speech recognition was already achieved. Hence, adding spectral channels may provide benefits for CI listeners that may not be reflected by traditional speech tests. The SVT is a relatively simple task that is easy to implement and may therefore be a good candidate for identifying such additional benefits in research or clinical settings.
Affiliation(s)
- Carina Pals
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Andy Beynon
- Department of Otorhinolaryngology, Head and Neck Surgery, Hearing and Implants, Radboud University Medical Centre, Nijmegen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
14
Ng EHN, Rönnberg J. Hearing aid experience and background noise affect the robust relationship between working memory and speech recognition in noise. Int J Audiol 2019; 59:208-218. [PMID: 31809220 DOI: 10.1080/14992027.2019.1677951] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Objective: The aim of this study was to examine how background noise and hearing aid experience affect the robust relationship between working memory and speech recognition. Design: Matrix sentences were used to measure speech recognition in noise. Three measures of working memory were administered. Study sample: 148 participants with at least 2 years of hearing aid experience. Results: A stronger overall correlation between working memory and speech recognition performance was found in a four-talker babble than in a stationary noise background. This correlation was significantly weaker in participants with the most hearing aid experience than in those with the least experience when the background noise was stationary. In the four-talker babble, however, no significant difference in the strength of the correlation was found between users with different levels of experience. Conclusion: In general, more explicit working memory processing is invoked when listening in a multi-talker babble. The matching processes (cf. Ease of Language Understanding model, ELU) were more efficient for experienced than for less experienced users when perceiving speech. This study extends the existing ELU model by suggesting that mismatch may also lead to the establishment of new phonological representations in long-term memory.
Affiliation(s)
- Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
15
Abstract
It is widely accepted that seeing a talker improves a listener's ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone; indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
16
Tamati TN, Janse E, Başkent D. Perceptual Discrimination of Speaking Style Under Cochlear Implant Simulation. Ear Hear 2019; 40:63-76. [PMID: 29742545 PMCID: PMC6319584 DOI: 10.1097/aud.0000000000000591] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 03/12/2018] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Real-life, adverse listening conditions involve a great deal of speech variability, including variability in speaking style. Depending on the speaking context, talkers may use a more casual, reduced speaking style or a more formal, careful speaking style. Attending to fine-grained acoustic-phonetic details characterizing different speaking styles facilitates the perception of the speaking style used by the talker. These acoustic-phonetic cues are poorly encoded in cochlear implants (CIs), potentially rendering the discrimination of speaking style difficult. As a first step to characterizing CI perception of real-life speech forms, the present study investigated the perception of different speaking styles in normal-hearing (NH) listeners with and without CI simulation. DESIGN The discrimination of three speaking styles (conversational reduced speech, speech from retold stories, and carefully read speech) was assessed using a speaking style discrimination task in two experiments. NH listeners classified sentence-length utterances, produced in one of the three styles, as either formal (careful) or informal (conversational). Utterances were presented with unmodified speaking rates in experiment 1 (31 NH, young adult Dutch speakers) and with modified speaking rates set to the average rate across all utterances in experiment 2 (28 NH, young adult Dutch speakers). In both experiments, acoustic noise-vocoder simulations of CIs were used to produce 12-channel (CI-12) and 4-channel (CI-4) vocoder simulation conditions, in addition to a no-simulation condition without CI simulation. RESULTS In both experiments 1 and 2, NH listeners were able to reliably discriminate the speaking styles without CI simulation. However, this ability was reduced under CI simulation. In experiment 1, participants showed poor discrimination of speaking styles under CI simulation. 
Listeners used speaking rate as a cue to make their judgements, even though it was not a reliable cue to speaking style in the study materials. In experiment 2, without differences in speaking rate among speaking styles, listeners showed better discrimination of speaking styles under CI simulation, using additional cues to complete the task. CONCLUSIONS The findings from the present study demonstrate that perceiving differences in three speaking styles under CI simulation is a difficult task because some important cues to speaking style are not fully available in these conditions. While some cues like speaking rate are available, this information alone may not always be a reliable indicator of a particular speaking style. Some other reliable speaking style cues, such as degraded acoustic-phonetic information and variability in speaking rate within an utterance, may be available but less salient. However, as in experiment 2, listeners' perception of speaking styles may be modified if they are constrained or trained to use these additional cues, which were more reliable in the context of the present study. Taken together, these results suggest that dealing with speech variability in real-life listening conditions may be a challenge for CI users.
Affiliation(s)
- Terrin N. Tamati
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Esther Janse
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
17
Effects of Additional Low-Pass-Filtered Speech on Listening Effort for Noise-Band-Vocoded Speech in Quiet and in Noise. Ear Hear 2019; 40:3-17. [PMID: 29757801 PMCID: PMC6319586 DOI: 10.1097/aud.0000000000000587] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Objectives: Residual acoustic hearing in electric–acoustic stimulation (EAS) can benefit cochlear implant (CI) users in increased sound quality, speech intelligibility, and improved tolerance to noise. The goal of this study was to investigate whether the low-pass–filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band–vocoded speech. Design: Listening effort was investigated using a dual-task paradigm as a behavioral measure, and the NASA Task Load Index as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented in three experiments at three fixed intelligibility levels: at near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in the performance on the secondary, visual response time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass–filtered speech (to simulate EAS) or without (to simulate CI). Results: In Experiment 1, in quiet with intelligibility near-ceiling, additional low-pass–filtered speech reduced listening effort compared with binaural vocoder, in line with our expectations, although not compared with monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass–filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. Interestingly, this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, even a reduction in listening effort on top of the increased tolerance to noise was observed. The NASA Task Load Index did not capture these differences.
Conclusions: The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band–vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role.
18
Zekveld AA, Koelewijn T, Kramer SE. The Pupil Dilation Response to Auditory Stimuli: Current State of Knowledge. Trends Hear 2019; 22:2331216518777174. [PMID: 30249172 PMCID: PMC6156203 DOI: 10.1177/2331216518777174] [Citation(s) in RCA: 124] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
The measurement of cognitive resource allocation during listening, or listening effort, provides valuable insight in the factors influencing auditory processing. In recent years, many studies inside and outside the field of hearing science have measured the pupil response evoked by auditory stimuli. The aim of the current review was to provide an exhaustive overview of these studies. The 146 studies included in this review originated from multiple domains, including hearing science and linguistics, but the review also covers research into motivation, memory, and emotion. The present review provides a unique overview of these studies and is organized according to the components of the Framework for Understanding Effortful Listening. A summary table presents the sample characteristics, an outline of the study design, stimuli, the pupil parameters analyzed, and the main findings of each study. The results indicate that the pupil response is sensitive to various task manipulations as well as interindividual differences. Many of the findings have been replicated. Frequent interactions between the independent factors affecting the pupil response have been reported, which indicates complex processes underlying cognitive resource allocation. This complexity should be taken into account in future studies, which should focus more on interindividual differences and also include older participants. This review facilitates the careful design of new studies by indicating the factors that should be controlled for. In conclusion, measuring the pupil dilation response to auditory stimuli has been demonstrated to be a sensitive method applicable to numerous research questions. The sensitivity of the measure calls for carefully designed stimuli.
Affiliation(s)
- Adriana A Zekveld
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Thomas Koelewijn
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Sophia E Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
19
Wagner AE, Nagels L, Toffanin P, Opie JM, Başkent D. Individual Variations in Effort: Assessing Pupillometry for the Hearing Impaired. Trends Hear 2019; 23:2331216519845596. [PMID: 31131729 PMCID: PMC6537294 DOI: 10.1177/2331216519845596] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2018] [Revised: 03/19/2019] [Accepted: 03/25/2019] [Indexed: 12/20/2022] Open
Abstract
Assessing effort in speech comprehension for hearing-impaired (HI) listeners is important, as effortful processing of speech can limit their hearing rehabilitation. We examined the measure of pupil dilation in its capacity to accommodate the heterogeneity that is present within clinical populations by studying lexical access in users with sensorineural hearing loss, who perceive speech via cochlear implants (CIs). We compared the pupillary responses of 15 experienced CI users and 14 age-matched normal-hearing (NH) controls during auditory lexical decision. A growth curve analysis was applied to compare the responses between the groups. NH listeners showed a coherent pattern of pupil dilation that reflects the task demands of the experimental manipulation and a homogenous time course of dilation. CI listeners showed more variability in the morphology of pupil dilation curves, potentially reflecting variable sources of effort across individuals. In follow-up analyses, we examined how speech perception, a task that relies on multiple stages of perceptual analyses, poses multiple sources of increased effort for HI listeners, such that we might not be measuring the same source of effort for HI as for NH listeners. We argue that interindividual variability among HI listeners can be clinically meaningful, attesting not only to the magnitude but also to the locus of increased effort. The understanding of individual variations in effort requires experimental paradigms that (a) differentiate the task demands during speech comprehension, (b) capture pupil dilation in its time course per individual listener, and (c) investigate the range of individual variability present within clinical and NH populations.
Affiliation(s)
- Anita E. Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands
- Leanne Nagels
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Paolo Toffanin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands
20
Strand JF, Brown VA, Merchant MB, Brown HE, Smith J. Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:1463-1486. [PMID: 29800081 DOI: 10.1044/2018_jslhr-h-17-0257] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Accepted: 02/06/2018] [Indexed: 06/08/2023]
Abstract
PURPOSE Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. METHOD Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. RESULTS Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. CONCLUSIONS Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.
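The convergent-validity analysis described in this abstract rests on pairwise intercorrelations between effort measures. A minimal sketch of that computation, using fabricated toy scores purely for illustration (the real study had N = 111 and seven LE tasks), might look like:

```python
import numpy as np

# Toy scores for three hypothetical LE measures (rows = listeners,
# columns = measures, e.g., accuracy, dual-task RT in ms, self-rated effort).
scores = np.array([
    [0.9, 410, 2.1],
    [0.8, 455, 2.9],
    [0.7, 500, 3.4],
    [0.6, 540, 4.2],
    [0.5, 610, 4.8],
])

# Pairwise Pearson correlations between the measures (columns).
# r[i, j] is the intercorrelation of measure i with measure j;
# weak off-diagonal values would suggest the tasks tap different constructs.
r = np.corrcoef(scores, rowvar=False)
```

In these toy data accuracy falls as RT and rated effort rise, so the off-diagonal correlations come out strongly negative and positive respectively; the study's point is precisely that real LE measures often do not intercorrelate this cleanly.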
Affiliation(s)
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN
- Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN
- Hunter E Brown
- Department of Psychology, Carleton College, Northfield, MN
- Julia Smith
- Department of Psychology, Carleton College, Northfield, MN
21
Patro C, Mendel LL. Gated Word Recognition by Postlingually Deafened Adults With Cochlear Implants: Influence of Semantic Context. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:145-158. [PMID: 29242894 DOI: 10.1044/2017_jslhr-h-17-0141] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2017] [Accepted: 08/28/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and investigate facilitative effects of semantic contexts on the IPs. METHOD Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. RESULTS The results indicated that spectrotemporal degradations impacted IPs for gated words adversely, and CI users as well as participants with NH listening to vocoded speech had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups regardless of the spectral composition of the target speech (full spectrum or vocoded). Finally, we showed that CI users (and NH listeners presented with vocoded speech) can overcome such word processing difficulties with the help of semantic context and perform as well as listeners with NH. CONCLUSION Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation on the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
Affiliation(s)
- Lisa Lucks Mendel
- School of Communication Sciences & Disorders, University of Memphis, TN
22
Vavatzanidis NK, Mürbe D, Friederici AD, Hahne A. Establishing a mental lexicon with cochlear implants: an ERP study with young children. Sci Rep 2018; 8:910. [PMID: 29343736 PMCID: PMC5772553 DOI: 10.1038/s41598-017-18852-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Accepted: 12/18/2017] [Indexed: 11/19/2022] Open
Abstract
In the present study we explore the implications of acquiring language when relying mainly or exclusively on input from a cochlear implant (CI), a device providing auditory input to otherwise deaf individuals. We focus on the time course of semantic learning in children within the second year of implant use, a period that equals the auditory age of normal-hearing children during which vocabulary emerges and expands dramatically. Thirty-two young bilaterally implanted children saw pictures paired with either matching or non-matching auditory words. Their electroencephalographic responses were recorded after 12, 18, and 24 months of implant use, revealing a large dichotomy: some children failed to show semantic processing throughout their second year of CI use, which fell in line with their poor language outcomes. The majority of children, though, demonstrated semantic processing in the form of the so-called N400 effect already after 12 months of implant use, even when their language experience relied exclusively on the implant. This is slightly earlier than observed for normal-hearing children of the same auditory age, suggesting that more mature cognitive faculties at the beginning of language acquisition lead to faster semantic learning.
Affiliation(s)
- Niki K Vavatzanidis
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe
- Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Anja Hahne
- Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
23
Winn MB, Wendt D, Koelewijn T, Kuchinsky SE. Best Practices and Advice for Using Pupillometry to Measure Listening Effort: An Introduction for Those Who Want to Get Started. Trends Hear 2018; 22:2331216518800869. [PMID: 30261825 PMCID: PMC6166306 DOI: 10.1177/2331216518800869] [Citation(s) in RCA: 113] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Revised: 08/07/2018] [Accepted: 08/14/2018] [Indexed: 01/12/2023] Open
Abstract
Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are applying, or considering applying, pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics spanning from experiment logistics and timing to data cleaning and choice of analysis parameters. This article contains the basic information and considerations needed to plan, set up, and interpret a pupillometry experiment, as well as commentary on how to interpret the response. Included are practicalities such as minimal system requirements for recording a pupil response, specifications for peripheral equipment, experiment logistics and constraints, and different kinds of data processing. Additional details include participant inclusion and exclusion criteria and some methodological considerations that might not arise in other auditory experiments. We discuss what data should be recorded and how to monitor data quality during recording in order to minimize artifacts. Data processing and analysis are considered as well. Finally, we share insights from the collective experience of the authors and discuss some of the challenges that still lie ahead.
Affiliation(s)
- Matthew B. Winn
- Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Thomas Koelewijn
- Section Ear & Hearing, Department of Otolaryngology–Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Stefanie E. Kuchinsky
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
|
24
|
Başkent D, Clarke J, Pals C, Benard MR, Bhargava P, Saija J, Sarampalis A, Wagner A, Gaudrain E. Cognitive Compensation of Speech Perception With Hearing Impairment, Cochlear Implants, and Aging. Trends Hear 2016. [PMCID: PMC5056620 DOI: 10.1177/2331216516670279]
Abstract
External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) the visual world paradigm and eye gaze as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation, there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated: not always absent, but also not easily predicted by speech intelligibility tests alone.
Affiliation(s)
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jeanne Clarke
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Carina Pals
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Michel R. Benard
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Pento Speech and Hearing Center Zwolle, Zwolle, Netherlands
- Pranesh Bhargava
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jefta Saija
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
- Anita Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Auditory Cognition and Psychoacoustics, CNRS, Lyon Neuroscience Research Center, Lyon, France
|
25
|
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. [PMID: 27708570 PMCID: PMC5030220 DOI: 10.3389/fnhum.2016.00473]
Abstract
Prior research suggests that acoustic degradation impairs the encoding of items into memory, especially in older adults. Here we aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and older participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants, whereas in older participants verbal IQ was the most important predictor. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Affiliation(s)
- Christiane M Thiel
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Jale Özyurt
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Waldo Nogueira
- Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
- Sebastian Puschmann
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
|