1
van Schoonhoven J, Rhebergen KS, Dreschler WA. A context-based model to predict the intelligibility of sentences in non-stationary noises. J Acoust Soc Am 2024; 155:2849-2859. [PMID: 38682914; DOI: 10.1121/10.0025772]
Abstract
The context-based Extended Speech Transmission Index (cESTI) (van Schoonhoven et al., 2022, J. Acoust. Soc. Am. 151, 1404-1415) was successfully applied to predict the intelligibility of monosyllabic words with different degrees of context in interrupted noise. The current study aimed to use the same model to predict sentence intelligibility in different types of non-stationary noise. The necessary context factors and transfer functions were based on values found in the existing literature. The cESTI performed similarly to or better than the original ESTI when the noise had speech-like characteristics. We hypothesize that the remaining inaccuracies in model predictions can be attributed to the limits of the modelling approach with regard to mechanisms such as modulation masking and informational masking.
Affiliation(s)
- Jelmer van Schoonhoven
- Department of Clinical and Experimental Audiology, Amsterdam University Medical Center, 1105 AZ Amsterdam, The Netherlands
- Koenraad S Rhebergen
- Department of Otorhinolaryngology and Head & Neck Surgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Postbus 85500, 3508 GA Utrecht, The Netherlands
- Wouter A Dreschler
- Department of Clinical and Experimental Audiology, Amsterdam University Medical Center, 1105 AZ Amsterdam, The Netherlands
2
Everhardt MK, Jung DE, Stiensma B, Lowie W, Başkent D, Sarampalis A. Foreign Language Acquisition in Adolescent Cochlear Implant Users. Ear Hear 2024; 45:174-185. [PMID: 37747307; PMCID: PMC10718217; DOI: 10.1097/aud.0000000000001410]
Abstract
OBJECTIVES This study explores to what degree adolescent cochlear implant (CI) users can learn a foreign language in a school setting, similarly to their normal-hearing (NH) peers, despite the degraded auditory input. DESIGN A group of native Dutch adolescent CI users (age range 13 to 17 years) learning English as a foreign language at secondary school and a group of NH controls (age range 12 to 15 years) were assessed on their Dutch and English language skills using various language tasks that relied either on the processing of auditory information (i.e., listening task) or on the processing of orthographic information (i.e., reading and/or gap-fill task). The test battery also included various auditory and cognitive tasks to assess whether the auditory and cognitive functioning of the learners could explain the potential variation in language skills. RESULTS Results showed that adolescent CI users can learn English as a foreign language, as the English language skills of the CI users and their NH peers were comparable when assessed with reading or gap-fill tasks. However, the performance of the adolescent CI users was lower on English listening tasks. This discrepancy in task performance was not observed in their native language, Dutch. The auditory tasks confirmed that the adolescent CI users had coarser temporal and spectral resolution than their NH peers, supporting the notion that the difference in foreign language listening skills may be due to a difference in auditory functioning. No differences in the cognitive functioning of the CI users and their NH peers were found that could explain the variation in the foreign language listening tasks. CONCLUSIONS In short, acquiring a foreign language with degraded auditory input appears to affect foreign language listening skills, yet does not appear to impact foreign language skills assessed with tasks that rely on the processing of orthographic information. CI users could take advantage of orthographic information to facilitate foreign language acquisition and potentially support the development of listening-based foreign language skills.
Affiliation(s)
- Marita K. Everhardt
- Center for Language and Cognition Groningen, University of Groningen, Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Dorit Enja Jung
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
- Berrit Stiensma
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Wander Lowie
- Center for Language and Cognition Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
3
Koelewijn T, Gaudrain E, Shehab T, Treczoks T, Başkent D. The Role of Word Content, Sentence Information, and Vocoding for Voice Cue Perception. J Speech Lang Hear Res 2023; 66:3665-3676. [PMID: 37556819; DOI: 10.1044/2023_jslhr-22-00491]
Abstract
PURPOSE In voice perception, two voice cues, the fundamental frequency (fo) and the vocal tract length (VTL), seem to contribute substantially to the identification of voices and speaker characteristics. Acoustic content related to these voice cues is altered in cochlear implant transmitted speech, rendering voice perception difficult for the implant user. In everyday listening, there could be some facilitation from top-down compensatory mechanisms, such as the use of linguistic content. Recently, we have shown a lexical content benefit on just-noticeable differences (JNDs) in VTL perception, which was not affected by vocoding. This study investigated whether that benefit relates to lexicality or phonemic content, and whether additional sentence information can affect voice cue perception as well. METHOD This study examined the lexical benefit on VTL perception by comparing words, time-reversed words, and nonwords, to separate the contributions of lexical (words vs. nonwords) and phonetic (nonwords vs. reversed words) information. In addition, we investigated the effect of the amount of speech (auditory) information on fo and VTL voice cue perception by comparing words to sentences. In both experiments, nonvocoded and vocoded auditory stimuli were presented. RESULTS The outcomes replicated the detrimental effect of reversed words on VTL perception: smaller JNDs were shown for stimuli containing lexical and/or phonemic information. Experiment 2 showed a benefit of processing full sentences compared to single words in both fo and VTL perception. In both experiments, there was an effect of vocoding, which interacted with sentence information only for fo. CONCLUSIONS In addition to previous findings suggesting a lexical benefit, the current results show, more specifically, that lexical and phonemic information improves VTL perception. Both fo and VTL perception benefit from sentence-level information compared to isolated words. These results indicate that cochlear implant users may be able to partially compensate for voice cue perception difficulties by relying on the linguistic content and rich acoustic cues of everyday speech. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23796405.
Affiliation(s)
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, UCBL, UJM, Lyon, France
- Thawab Shehab
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Neurolinguistics, Faculty of Arts, University of Groningen, the Netherlands
- Tobias Treczoks
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Medical Physics and Cluster of Excellence "Hearing4all," Department of Medical Physics and Acoustics, Faculty VI Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Germany
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
4
Kallioinen P, Olofsson JK, von Mentzer CN. Semantic processing in children with cochlear implants: A review of current N400 studies and recommendations for future research. Biol Psychol 2023; 182:108655. [PMID: 37541539; DOI: 10.1016/j.biopsycho.2023.108655]
Abstract
Deaf and hard-of-hearing children with cochlear implants (CI) often display impaired spoken language skills. While many studies have investigated brain responses to sounds in this population, relatively few have focused on semantic processing. Here we summarize and discuss the findings of four studies of the N400, a cortical response that reflects semantic processing, in children with CI. A study with auditory target stimuli found N400 effects at delayed latencies 12 months after implantation, but at 18 and 24 months after implantation the effects had typical latencies. In studies with visual target stimuli, N400 effects in children with CI were larger than or similar to those of controls, despite lower semantic abilities. We propose that in children with CI, the observed large N400 effect reflects a stronger reliance on top-down predictions relative to bottom-up language processing. Recent behavioral studies of children and adults with CI suggest that top-down processing is a common compensatory strategy, but one with distinct limitations, such as being effortful. A majority of the studies have small sample sizes (N < 20), and only responses to image targets were studied repeatedly in similar paradigms, which precludes strong conclusions. We give suggestions for future research and ways to overcome the scarcity of participants, including extending research to children with conventional hearing aids, an understudied group.
Affiliation(s)
- Petter Kallioinen
- Department of Linguistics, Stockholm University, Stockholm, Sweden; Lund University Cognitive Science, Lund University, Lund, Sweden
- Jonas K Olofsson
- Department of Psychology, Stockholm University, Stockholm, Sweden
5
Beckers L, Tromp N, Philips B, Mylanus E, Huinck W. Exploring neurocognitive factors and brain activation in adult cochlear implant recipients associated with speech perception outcomes: A scoping review. Front Neurosci 2023; 17:1046669. [PMID: 36816114; PMCID: PMC9932917; DOI: 10.3389/fnins.2023.1046669]
Abstract
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to what extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and in noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered in the literature search (protocol registered in open science registries, 10.17605/OSF.IO/Z3G7W; searches performed in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 were included and grouped into three categories according to the measures related to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions (recruiting the frontal cortex), the use of visual cues (recruiting the occipital cortex), and a temporal cortex still available for language processing are beneficial for adult CI users. Cognitive assessments indicate that performance on non-verbal intelligence tasks correlated positively with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W.
Affiliation(s)
- Loes Beckers
- Cochlear Ltd., Mechelen, Belgium
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Nikki Tromp
- Cochlear Ltd., Mechelen, Belgium
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Emmanuel Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Wendy Huinck
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
6
Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. J Speech Lang Hear Res 2022; 65:4852-4865. [PMID: 36472938; PMCID: PMC9934912; DOI: 10.1044/2022_jslhr-21-00622]
Abstract
PURPOSE An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception and a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk of relying on a quiet moment after a sentence. METHOD Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silence following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS These results suggest that some, but not all, individuals improve their speech recognition scores by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, along with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes, using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644801.
7
Taitelbaum-Swead R, Dahan T, Katzenel U, Dorman MF, Litvak LM, Fostick L. AzBio Sentence test in Hebrew (HeBio): development, preliminary validation, and the effect of noise. Cochlear Implants Int 2022; 23:270-279. [DOI: 10.1080/14670100.2022.2083285]
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Tzofit Dahan
- The Audiology Service, Kaplan Medical Center, Rehovot, Israel
- Udi Katzenel
- Department of Otolaryngology Head and Neck Surgery, Kaplan Medical Center, Rehovot, Israel
- Hebrew University, Hadassah Medical School, Jerusalem, Israel
- Michael F. Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe, USA
- Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
8
Pragt L, van Hengel P, Grob D, Wasmann JWA. Preliminary Evaluation of Automated Speech Recognition Apps for the Hearing Impaired and Deaf. Front Digit Health 2022; 4:806076. [PMID: 35252959; PMCID: PMC8889114; DOI: 10.3389/fdgth.2022.806076]
Abstract
Objective Automated speech recognition (ASR) systems have become increasingly sophisticated, accurate, and deployable on many digital devices, including smartphones. This pilot study examines the speech recognition performance of ASR apps using audiological speech tests. In addition, we compare ASR speech recognition performance to that of normal-hearing and hearing-impaired listeners and evaluate whether standard clinical audiological tests are a meaningful and quick measure of the performance of ASR apps. Methods Four apps were tested on a smartphone: AVA, Earfy, Live Transcribe, and Speechy. The Dutch audiological speech tests performed were speech audiometry in quiet (Dutch CNC test), the Digits-in-Noise (DIN) test with steady-state speech-shaped noise, and sentences in quiet and in long-term average speech-shaped noise (Plomp test). For comparison, each app's ability to transcribe a spoken dialogue (Dutch and English) was tested. Results All apps scored at least 50% phonemes correct on the Dutch CNC test at a conversational speech level (65 dB SPL) and achieved 90-100% phoneme recognition at higher levels. On the DIN test, AVA and Live Transcribe had the lowest (best) signal-to-noise ratio, +8 dB. The lowest signal-to-noise ratio measured with the Plomp test was +8 to +9 dB, for Earfy (Android) and Live Transcribe (Android). Overall, the word error rate for the dialogue in English (19-34%) was lower (better) than for the Dutch dialogue (25-66%). Conclusion The performance of the apps was limited on audiological tests that provide little linguistic context or use low signal-to-noise ratios. On Dutch audiological speech tests in quiet, the ASR apps performed similarly to a person with a moderate hearing loss. In noise, the ASR apps performed more poorly than most profoundly deaf people using a hearing aid or cochlear implant. Adding new performance metrics, including the semantic difference as a function of SNR and reverberation time, could help monitor and further improve ASR performance.
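The word error rate reported here is the standard ASR metric: the minimum number of word substitutions, deletions, and insertions needed to turn the transcript into the reference, divided by the number of reference words. A generic Python sketch of the computation (for illustration only, not the evaluation code used in the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a standard edit-distance dynamic program."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus one deletion against five reference words -> 0.4
print(word_error_rate("the quick brown fox jumps", "the quick brown box"))
```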
Affiliation(s)
- Leontien Pragt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center Nijmegen, Nijmegen, Netherlands
- Peter van Hengel
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center Nijmegen, Nijmegen, Netherlands
- Pento Audiological Center Twente, Hengelo, Netherlands
- Dagmar Grob
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Jan-Willem A. Wasmann
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center Nijmegen, Nijmegen, Netherlands
9
Ratnanather JT, Wang LC, Bae SH, O'Neill ER, Sagi E, Tward DJ. Visualization of Speech Perception Analysis via Phoneme Alignment: A Pilot Study. Front Neurol 2022; 12:724800. [PMID: 35087462; PMCID: PMC8787339; DOI: 10.3389/fneur.2021.724800]
Abstract
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level. However, few tests analyze errors at the phoneme level, so there is a need for an automated program to visualize the accuracy of phonemes in these tests in real time. Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein minimum edit distance algorithm. Alignment is achieved via dynamic programming, with costs for insertions, deletions, and substitutions modified on the basis of phonological features. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram. Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments of 31 participants with cochlear implants listening to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs. Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
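To make the alignment step concrete: the abstract describes a Levenshtein-style dynamic program whose edit costs are weighted by phonological features, so that phonemes sharing features align cheaply. The sketch below is a toy reconstruction under assumed costs; the hypothetical three-feature table and unit insertion/deletion cost are illustrative, and the paper's actual feature set, cost values, and dictionary differ:

```python
# Hypothetical toy feature table (voicing, place, manner) for a few phonemes;
# the paper uses a full dictionary-derived phoneme inventory.
FEATURES = {
    "P": (0, "labial",   "stop"),
    "B": (1, "labial",   "stop"),
    "T": (0, "alveolar", "stop"),
    "S": (0, "alveolar", "fricative"),
}

def sub_cost(a: str, b: str) -> float:
    """Substitution cost = fraction of phonological features that differ."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(x != y for x, y in zip(fa, fb)) / len(fa)

def align_cost(stim: list, resp: list, indel: float = 1.0) -> float:
    """Levenshtein-style dynamic program with feature-weighted edit costs."""
    n, m = len(stim), len(resp)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel                      # all deletions
    for j in range(1, m + 1):
        d[0][j] = j * indel                      # all insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + indel,   # delete stim[i-1]
                          d[i][j - 1] + indel,   # insert resp[j-1]
                          d[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]))
    return d[n][m]

# /B/ -> /P/ differs only in voicing, so this confusion aligns cheaply:
print(align_cost(["B", "T"], ["P", "T"]))   # 1/3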
Affiliation(s)
- J Tilak Ratnanather
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Lydia C Wang
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Seung-Ho Bae
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Erin R O'Neill
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, MN, United States
- Elad Sagi
- Department of Otolaryngology, New York University School of Medicine, New York, NY, United States
- Daniel J Tward
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, United States
10
Dingemanse G, Goedegebure A. Listening Effort in Cochlear Implant Users: The Effect of Speech Intelligibility, Noise Reduction Processing, and Working Memory Capacity on the Pupil Dilation Response. J Speech Lang Hear Res 2022; 65:392-404. [PMID: 34898265; DOI: 10.1044/2021_jslhr-21-00230]
Abstract
PURPOSE This study aimed to evaluate the effect of speech recognition performance, working memory capacity (WMC), and a noise reduction algorithm (NRA) on listening effort as measured with pupillometry in cochlear implant (CI) users while listening to speech in noise. METHOD Speech recognition and pupil responses (peak dilation, peak latency, and release of dilation) were measured during a speech recognition task at three speech-to-noise ratios (SNRs) with an NRA in both on and off conditions. WMC was measured with a reading span task. Twenty experienced CI users participated in this study. RESULTS With increasing SNR and speech recognition performance, (a) the peak pupil dilation decreased by only a small amount, (b) the peak latency decreased, and (c) the release of dilation after the sentences increased. The NRA had no effect on speech recognition in noise or on the peak or latency values of the pupil response but caused less release of dilation after the end of the sentences. A lower reading span score was associated with higher peak pupil dilation but was not associated with peak latency, release of dilation, or speech recognition in noise. CONCLUSIONS In CI users, speech perception is effortful, even at higher speech recognition scores and high SNRs, indicating that CI users are in a chronic state of increased effort in communication situations. The application of a clinically used NRA did not improve speech perception, nor did it reduce listening effort. Participants with a relatively low WMC exerted relatively more listening effort but did not have better speech reception thresholds in noise.
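For readers unfamiliar with the three pupillometry measures named above, here is a minimal sketch of how they might be extracted from a single baseline-corrected pupil trace. The windowing and the operational definition of "release of dilation" are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def pupil_metrics(trace: np.ndarray, fs: float, sent_offset: float):
    """Extract peak dilation, peak latency (s, re: sentence offset), and a
    release-of-dilation estimate from one baseline-corrected pupil trace.
    'Release' is taken here as peak minus mean dilation over the final
    second of the trial window; definitions vary across studies."""
    peak_idx = int(np.argmax(trace))
    peak_dilation = float(trace[peak_idx])
    peak_latency = peak_idx / fs - sent_offset
    release = peak_dilation - float(trace[-int(fs):].mean())
    return peak_dilation, peak_latency, release

# Demo: synthetic 6 s trace at 60 Hz with a dilation bump peaking at 3 s.
fs = 60.0
t = np.arange(0.0, 6.0, 1.0 / fs)
trace = np.exp(-0.5 * ((t - 3.0) / 0.8) ** 2)
print(pupil_metrics(trace, fs, sent_offset=2.0))   # latency = 1.0 s
```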
Affiliation(s)
- Gertjan Dingemanse
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- André Goedegebure
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
11
Hunter CR. Dual-Task Accuracy and Response Time Index Effects of Spoken Sentence Predictability and Cognitive Load on Listening Effort. Trends Hear 2021; 25:23312165211018092. [PMID: 34674579; PMCID: PMC8543634; DOI: 10.1177/23312165211018092]
Abstract
A sequential dual-task design was used to assess the impacts of spoken sentence context and cognitive load on listening effort. Young adults with normal hearing listened to sentences masked by multitalker babble in which sentence-final words were either predictable or unpredictable. Each trial began with visual presentation of a short (low-load) or long (high-load) sequence of to-be-remembered digits. Words were identified more quickly and accurately in predictable than unpredictable sentence contexts. In addition, digits were recalled more quickly and accurately on trials on which the sentence was predictable, indicating reduced listening effort for predictable compared to unpredictable sentences. For word and digit recall response time but not for digit recall accuracy, the effect of predictability remained significant after exclusion of trials with incorrect word responses and was thus independent of speech intelligibility. In addition, under high cognitive load, words were identified more slowly and digits were recalled more slowly and less accurately than under low load. Participants’ working memory and vocabulary were not correlated with the sentence context benefit in either word recognition or digit recall. Results indicate that listening effort is reduced when sentences are predictable and that cognitive load affects the processing of spoken words in sentence contexts.
Affiliation(s)
- Cynthia R Hunter
- Speech Perception, Cognition, and Hearing Laboratory, Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, United States
12
Grisel J, Miller S, Schafer EC. A Novel Performance-Based Paradigm of Care for Cochlear Implant Follow-Up. Laryngoscope 2021; 132 Suppl 1:S1-S10. [PMID: 34013978; DOI: 10.1002/lary.29614]
Abstract
OBJECTIVES Utilize a multi-institutional outcomes database to determine expected performance for adult cochlear implant (CI) users, and estimate the percentage of patients who are high performers and achieve a performance plateau. STUDY DESIGN Retrospective database study. METHODS Outcomes from 9,448 implantations were mined to identify 804 adult, unilateral recipients who had one preoperative and at least one postoperative consonant-nucleus-consonant (CNC) word score. Results were examined to determine percent-correct CNC word recognition preoperatively and at 1, 3, 6, 12, and 24 months after activation. Outcomes from 318 similar patients who also had at least three postoperative CNC word scores were examined. Linear mixed-effects regression was used to examine CNC word performance over time. The time at which each patient achieved maximum performance was recorded as a surrogate for the time of performance plateau. Patients were designated candidates for less intense follow-up if they were high performers and had achieved a performance plateau. RESULTS Among the 804 patients with at least one postoperative score, CNC scores improved at all time intervals. Average performance after the 3-month interval was 47.2% to 51.5%, indicating a CNC ≥ 50% cutoff for high performers. Among the 318 patients with at least three postoperative scores, performance improved from 1 to 3 (P = .001), 3 to 6 (P = .001), and 6 to 12 (P = .01) months. Scores from the 12- and 24-month intervals did not differ significantly (P = .09). By 12 months after activation, 59.7% of patients were considered candidates for less intense follow-up. CONCLUSION Findings suggest that CNC ≥ 50% is a reasonable cutoff to separate high performers from low performers. Within 12 months after activation, 59.7% of patients were good candidates for less intense follow-up. LEVEL OF EVIDENCE 3.
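As a sketch of the analysis pattern described (a linear mixed-effects regression of CNC scores over test intervals, plus a CNC >= 50% high-performer flag), using synthetic data and the statsmodels formula API; the column names and effect sizes are invented for illustration, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per (patient, test interval).
rng = np.random.default_rng(0)
rows = []
for pid in range(50):
    base = rng.normal(40, 10)                      # patient-specific level
    for months, gain in [(1, 0), (3, 6), (6, 10), (12, 13), (24, 13)]:
        rows.append({"patient_id": pid, "interval_months": months,
                     "cnc_percent": base + gain + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept per patient, fixed effect of test interval on CNC score.
result = smf.mixedlm("cnc_percent ~ C(interval_months)", df,
                     groups=df["patient_id"]).fit()
print(result.summary())

# High performers (CNC >= 50%) at their latest visit: candidates for a
# less intense follow-up schedule, per the cutoff suggested above.
latest = df.sort_values("interval_months").groupby("patient_id").last()
print((latest["cnc_percent"] >= 50.0).mean())
```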
Affiliation(s)
- Jedidiah Grisel
- Head & Neck Surgical Associates, Wichita Falls, Texas, U.S.A.
- Sharon Miller
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas, U.S.A.
- Erin C Schafer
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas, U.S.A.
13
Mesik J, Ray L, Wojtczak M. Effects of Age on Cortical Tracking of Word-Level Features of Continuous Competing Speech. Front Neurosci 2021; 15:635126. [PMID: 33867920; PMCID: PMC8047075; DOI: 10.3389/fnins.2021.635126]
Abstract
Speech-in-noise comprehension difficulties are common among the elderly population, yet traditional objective measures of speech perception are largely insensitive to this deficit, particularly in the absence of clinical hearing loss. In recent years, a growing body of research in young normal-hearing adults has demonstrated that high-level features related to speech semantics and lexical predictability elicit strong centro-parietal negativity in the EEG signal around 400 ms following the word onset. Here we investigate effects of age on cortical tracking of these word-level features within a two-talker speech mixture, and their relationship with self-reported difficulties with speech-in-noise understanding. While undergoing EEG recordings, younger and older adult participants listened to a continuous narrative story in the presence of a distractor story. We then utilized forward encoding models to estimate cortical tracking of four speech features: (1) word onsets, (2) "semantic" dissimilarity of each word relative to the preceding context, (3) lexical surprisal for each word, and (4) overall word audibility. Our results revealed robust tracking of all features for attended speech, with surprisal and word audibility showing significantly stronger contributions to neural activity than dissimilarity. Additionally, older adults exhibited significantly stronger tracking of word-level features than younger adults, especially over frontal electrode sites, potentially reflecting increased listening effort. Finally, neuro-behavioral analyses revealed trends of a negative relationship between subjective speech-in-noise perception difficulties and the model goodness-of-fit for attended speech, as well as a positive relationship between task performance and the goodness-of-fit, indicating behavioral relevance of these measures. Together, our results demonstrate the utility of modeling cortical responses to multi-talker speech using complex, word-level features and the potential for their use to study changes in speech processing due to aging and hearing loss.
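Forward encoding models of this kind regress the EEG onto time-lagged copies of a stimulus feature, yielding a temporal response function (TRF). Below is a minimal ridge-regression sketch on synthetic data, assuming a single channel and a single word-level feature; the published analyses used dedicated toolboxes, multiple features, and cross-validation:

```python
import numpy as np

def fit_trf(stim: np.ndarray, eeg: np.ndarray, fs: float,
            tmin: float = -0.1, tmax: float = 0.6, lam: float = 1.0):
    """Ridge-regression estimate of a temporal response function (TRF)
    mapping one stimulus feature to one EEG channel across time lags."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Lagged design matrix: column k holds the feature shifted by lags[k].
    X = np.column_stack([np.roll(stim, lag) for lag in lags])
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w                      # lag times (s) and TRF weights

# Synthetic demo: the "EEG" is a delayed, scaled copy of a sparse
# word-onset feature (impulses scaled by a surprisal-like magnitude).
fs, n = 64.0, 10_000
rng = np.random.default_rng(1)
stim = (rng.random(n) < 0.02) * rng.random(n)
eeg = 0.8 * np.roll(stim, int(0.4 * fs)) + 0.1 * rng.standard_normal(n)
times, trf = fit_trf(stim, eeg, fs)
print(times[np.argmax(trf)])                 # peaks near the true 0.4 s lag
```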
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
14
Vasil KJ, Ray C, Lewis J, Stefancin E, Tamati TN, Moberly AC. How Does Cochlear Implantation Lead to Improvements on a Cognitive Screening Measure? J Speech Lang Hear Res 2021; 64:1053-1061. [PMID: 33719534; DOI: 10.1044/2020_jslhr-20-00195]
Abstract
Purpose Cognitive screening tools to identify patients at risk for cognitive deficits are frequently used by clinicians who work with aging populations in hearing health care. Although some studies show improvements in performance on cognitive screening exams when hearing loss intervention is provided in the form of a hearing aid or cochlear implant (CI), it is worth examining whether these improvements are attributable to increased auditory access to test items. This study aimed to examine whether performance and pass rate on a cognitive screening measure, the Montreal Cognitive Assessment (MoCA), improve as a result of CI, whether improved performance on auditory-based test items drives changes in MoCA performance, and whether postoperative MoCA performance relates to post-CI speech perception ability. Method Data were collected in adult CI candidates pre-implantation and 6 months post-implantation to examine the effect of intervention on MoCA performance. Participants were 77 CI users between the ages of 55 and 85 years. Participants completed the MoCA, administered audiovisually, and speech perception testing with monosyllabic (CNC) words at both intervals. Results Forty-five participants passed the MoCA postoperatively, compared with 31 preoperatively: a significant improvement in pass rate. The improvement in MoCA scores could be attributed primarily to improvement in the "Delayed Recall" test domain, which is auditory based. Post-CI MoCA performance was related to post-CI CNC speech perception performance. Conclusions Improved performance and pass rates were demonstrated on the traditional MoCA cognitive screening test from before to 6 months after CI. Improvements could primarily be attributed to better performance on a delayed recall task dependent on auditory access, and post-CI MoCA scores were related to post-CI speech perception abilities. Further studies are needed to investigate the application of cognitive screening tools in patients receiving hearing loss interventions and these interventions' impact on patients' real-world functioning.
Affiliation(s)
- Kara J Vasil
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Christin Ray
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Jessica Lewis
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Erin Stefancin
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Terrin N Tamati
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Aaron C Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
15
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. J Acoust Soc Am 2021; 149:1224. [PMID: 33639827; PMCID: PMC7895533; DOI: 10.1121/10.0003532]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
16
Smits C, Zekveld AA. Approaches to mathematical modeling of context effects in sentence recognition. J Acoust Soc Am 2021; 149:1371. [PMID: 33639802; DOI: 10.1121/10.0003580]
Abstract
Probabilistic models to quantify context effects in speech recognition have proven their value in audiology. Boothroyd and Nittrouer [J. Acoust. Soc. Am. 84, 101-114 (1988)] introduced a model with the j-factor and k-factor as context parameters. Later, Bronkhorst, Bosman, and Smoorenburg [J. Acoust. Soc. Am. 93, 499-509 (1993)] proposed a more elaborate mathematical model to quantify context effects. The present study explores existing models and proposes a new model to quantify the effect of context in sentence recognition. The effect of context is modeled by parameters that represent the change in the probability that a certain number of words in a sentence are correctly recognized. Data from two studies using a Dutch sentence-in-noise test were analyzed. The most accurate fit was obtained with signal-to-noise-ratio-dependent context parameters. Furthermore, reducing the number of context parameters from five to one had only a small effect on the goodness of fit of the present context model. An analysis of the relationships between context parameters from the different models showed that, for a given change in word recognition probability, the different context parameters can change in opposite directions, suggesting opposite effects of sentence context. This demonstrates the importance of controlling for the recognition probability of words in isolation when comparing the use of sentence context between different groups of listeners.
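For reference, the two Boothroyd and Nittrouer (1988) context parameters named above can be written compactly, with p denoting a recognition probability:

```latex
% j-factor: a whole item (e.g., a sentence of n words) behaves as if it
% consisted of j <= n statistically independent parts:
%   p_whole = p_part^j
% k-factor: context acts as if it multiplied the number of independent
% chances to recognize a part, relating recognition probability with
% context (p_c) to that without context (p_i):
%   p_c = 1 - (1 - p_i)^k
\[
  p_{\mathrm{whole}} = p_{\mathrm{part}}^{\,j},
  \qquad
  p_c = 1 - (1 - p_i)^{k}
\]
```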
Affiliation(s)
- Cas Smits
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, Netherlands
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan 1117, Amsterdam, Netherlands
17
Dingemanse G, Goedegebure A. Efficient Adaptive Speech Reception Threshold Measurements Using Stochastic Approximation Algorithms. Trends Hear 2020; 23:2331216520919199. [PMID: 32425135; PMCID: PMC7238302; DOI: 10.1177/2331216520919199]
Abstract
This study examines whether speech-in-noise tests that use adaptive procedures to assess a speech reception threshold in noise (SRT50n) can be optimized using stochastic approximation (SA) methods, especially in cochlear-implant (CI) users. A simulation model was developed that simulates intelligibility scores for words from sentences in noise for both CI users and normal-hearing (NH) listeners. The model was used in Monte Carlo simulations. Four different SA algorithms were optimized for use in both groups and compared with clinically used adaptive procedures. The simulation model proved to be valid, as its results agreed very well with existing experimental data. The four optimized SA algorithms all provided an efficient estimation of the SRT50n. They were equally accurate and produced smaller standard deviations (SDs) than the clinical procedures. In CI users, SRT50n estimates had a small bias and larger SDs than in NH listeners. At least 20 sentences per condition and an initial signal-to-noise ratio below the real SRT50n were required to ensure sufficient reliability. In CI users, bias and SD became unacceptably large for a maximum speech intelligibility score in quiet below 70%. In conclusion, SA algorithms with word scoring in adaptive speech-in-noise tests are applicable to various listeners, from CI users to NH listeners. In CI users, they lead to efficient estimation of the SRT50n as long as speech intelligibility in quiet is greater than 70%. SA procedures can be considered a valid and more efficient alternative to the adaptive procedures currently used clinically in CI users.
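A stochastic approximation track of the Robbins-Monro type illustrates the idea: after each sentence, the SNR is stepped toward the level giving 50% of words correct, with a step size that shrinks over trials. The sketch below is a generic illustration with invented parameters, not any of the four specific algorithms evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def srt_track(respond, n_sentences: int = 20, snr0: float = -10.0,
              c: float = 4.0, target: float = 0.5) -> float:
    """Robbins-Monro stochastic-approximation track for the SRT50n.
    `respond(snr)` returns the proportion of words repeated correctly
    for one sentence; the step size decays as c / trial number."""
    snr = snr0
    for n in range(1, n_sentences + 1):
        snr -= (c / n) * (respond(snr) - target)   # step toward 50% words correct
    return snr

def listener(snr: float, true_srt: float = -6.0, slope: float = 0.5,
             n_words: int = 5) -> float:
    """Toy listener: logistic psychometric function, binomial word scoring."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr - true_srt)))
    return rng.binomial(n_words, p) / n_words

print(srt_track(listener))   # typically lands near the true SRT of -6 dB
```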
Affiliation(s)
- Gertjan Dingemanse
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
- André Goedegebure
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
18
Patro C, Mendel LL. Semantic influences on the perception of degraded speech by individuals with cochlear implants. J Acoust Soc Am 2020; 147:1778. [PMID: 32237796; DOI: 10.1121/10.0000934]
Abstract
This study investigated whether speech intelligibility in cochlear implant (CI) users is affected by semantic context. Three groups participated in two experiments: two groups of listeners with normal hearing (NH) listened to either full-spectrum speech or vocoded speech, and one CI group listened to full-spectrum speech. Experiment 1 measured participants' sentence recognition as a function of target-to-masker ratio (with a four-talker babble masker), and experiment 2 measured perception of interrupted speech as a function of duty cycle (longer or shorter stretches of uninterrupted speech). Listeners were presented with both semantically congruent and incongruent targets. Results from the two experiments suggested that NH listeners benefited more from the semantic cues as the listening conditions became more challenging (lower signal-to-noise ratios and interrupted speech with longer silent intervals). The CI group, however, received minimal benefit from context and therefore performed poorly in such conditions. In contrast, in the less challenging conditions, CI users benefited greatly from the semantic context, while NH listeners did not rely on such cues. The results also confirmed that this differential use of semantic cues appears to originate from the spectro-temporal degradations experienced by CI users, which could be a contributing factor to their poor performance in suboptimal environments.
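Interrupted speech of the kind used in experiment 2 is typically generated by gating the waveform with a periodic on/off mask whose duty cycle sets the proportion of speech retained per cycle. A bare-bones sketch follows; real stimuli usually add raised-cosine ramps at the gate edges to avoid clicks, and the rate and duty values here are arbitrary:

```python
import numpy as np

def interrupt(signal: np.ndarray, fs: float, rate_hz: float,
              duty: float) -> np.ndarray:
    """Gate a waveform with a periodic square-wave mask. `duty` is the
    fraction of each interruption cycle in which the speech is kept."""
    t = np.arange(len(signal)) / fs
    gate = ((t * rate_hz) % 1.0) < duty   # on for the first `duty` of each cycle
    return signal * gate

# Example: 2 s of noise-like input, interrupted at 2 Hz with a 50% duty cycle.
fs = 16_000
x = np.random.default_rng(3).standard_normal(2 * fs)
y = interrupt(x, fs, rate_hz=2.0, duty=0.5)
```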
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55414, USA
- Lisa Lucks Mendel
- School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
19
Dingemanse G, Goedegebure A. The relation of hearing-specific patient-reported outcome measures with speech perception measures and acceptable noise levels in cochlear implant users. Int J Audiol 2020; 59:416-426. [PMID: 32091274; DOI: 10.1080/14992027.2020.1727033]
Abstract
Objective: To investigate the relation of a hearing-specific patient-reported outcome measure (PROM) with speech perception and noise tolerance measurements. It was hypothesised that speech intelligibility in noise and noise tolerance may explain a larger part of the variance in PROM scores than speech intelligibility in quiet. Design: This cross-sectional study used the Speech, Spatial and Qualities of Hearing (SSQ) questionnaire as a PROM. Speech recognition in quiet, the speech reception threshold in noise, and noise tolerance, as measured with the acceptable noise level (ANL), were measured with sentences. Study sample: A group of 48 unilateral, post-lingually deafened cochlear implant (CI) users. Results: SSQ scores were moderately correlated with speech scores in quiet and in noise, and also with ANLs. Speech scores in quiet and in noise were strongly correlated with each other. The combination of speech scores and ANL explained 10-30% of the variance in SSQ scores, with ANLs adding only 0-9%. Conclusions: The variance in the SSQ as a hearing-specific PROM in CI users was not better explained by speech intelligibility in noise than by speech intelligibility in quiet, because of the remarkably strong correlation between the two measures. ANLs made only a small contribution to explaining the variance of the SSQ and seem to measure different aspects from those captured by the SSQ.
Affiliation(s)
- Gertjan Dingemanse
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, The Netherlands
- André Goedegebure
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, The Netherlands
20
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. J Acoust Soc Am 2019; 146:195. [PMID: 31370651; PMCID: PMC6637026; DOI: 10.1121/1.5116009]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
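The "tone-excited vocoder" used to degrade speech for the NH groups follows the classic channel-vocoder recipe: filter the speech into bands, extract each band's temporal envelope, and reimpose the envelopes on tone carriers. A crude sketch of that recipe; the band count, filter order, and envelope extraction here are illustrative choices, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocoder(x: np.ndarray, fs: float, n_channels: int = 8,
                 fmin: float = 100.0, fmax: float = 7000.0) -> np.ndarray:
    """Crude tone-excited vocoder: split the input into log-spaced bands,
    take each band's Hilbert envelope, and reimpose it on a pure tone at
    the band's geometric center frequency. fmax must stay below fs / 2."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))          # temporal envelope of this band
        out += env * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
    return out / np.max(np.abs(out))         # normalize peak amplitude

# e.g., y = tone_vocoder(x, fs=44100) for a speech waveform x at 44.1 kHz
```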
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA