1. Guo ZC, McHaney JR, Parthasarathy A, Chandrasekaran B. Reduced neural distinctiveness of speech representations in the middle-aged brain. bioRxiv 2024:2024.08.28.609778. [PMID: 39253477] [PMCID: PMC11383304] [DOI: 10.1101/2024.08.28.609778]
Abstract
Speech perception declines independently of hearing thresholds in middle age, and the neurobiological reasons are unclear. In line with the age-related neural dedifferentiation hypothesis, we predicted that middle-aged adults show less distinct cortical representations of phonemes and acoustic-phonetic features relative to younger adults. In addition to an extensive audiological, auditory electrophysiological, and speech perceptual test battery, we measured electroencephalographic responses time-locked to phoneme instances (phoneme-related potential; PRP) in naturalistic, continuous speech and trained neural network classifiers to predict phonemes from these responses. Consistent with age-related neural dedifferentiation, phoneme predictions were less accurate, more uncertain, and involved a broader network for middle-aged adults compared with younger adults. Representational similarity analysis revealed that the featural relationship between phonemes was less robust in middle age. Electrophysiological and behavioral measures revealed signatures of cochlear neural degeneration (CND) and speech perceptual deficits in middle-aged adults relative to younger adults. Consistent with prior work in animal models, signatures of CND were associated with greater cortical dedifferentiation, explaining nearly a third of the variance in PRP prediction accuracy together with measures of acoustic neural processing. Notably, even after controlling for CND signatures and acoustic processing abilities, age-group differences in PRP prediction accuracy remained. Overall, our results reveal "fuzzier" phonemic representations, suggesting that age-related cortical neural dedifferentiation can occur even in middle age and may underlie speech perceptual challenges, despite a normal audiogram.
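The epoch-and-decode pipeline described in this abstract (averaging EEG time-locked to phoneme onsets, then predicting phonemes from single-trial responses) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the epoch length, array shapes, and the nearest-centroid decoder (standing in for their neural network classifiers) are all illustrative assumptions.

```python
import numpy as np

def phoneme_related_potentials(eeg, onsets, labels, epoch_len=50):
    """Average EEG epochs time-locked to phoneme onsets (one PRP per phoneme).

    eeg      : (n_channels, n_samples) continuous recording
    onsets   : sample indices of phoneme instances
    labels   : phoneme label for each onset
    returns  : dict mapping phoneme -> (n_channels, epoch_len) average epoch
    """
    epochs = {}
    for t, ph in zip(onsets, labels):
        seg = eeg[:, t:t + epoch_len]
        if seg.shape[1] == epoch_len:          # drop epochs cut off at the end
            epochs.setdefault(ph, []).append(seg)
    return {ph: np.mean(segs, axis=0) for ph, segs in epochs.items()}

def nearest_centroid_accuracy(eeg, onsets, labels, prps, epoch_len=50):
    """Decode each single-trial epoch by correlating it with the PRP templates
    (an in-sample illustration; a real analysis would cross-validate)."""
    phonemes = sorted(prps)
    correct = 0
    for t, ph in zip(onsets, labels):
        seg = eeg[:, t:t + epoch_len].ravel()
        sims = [np.corrcoef(seg, prps[p].ravel())[0, 1] for p in phonemes]
        correct += phonemes[int(np.argmax(sims))] == ph
    return correct / len(onsets)
```

With well-separated phoneme responses this decoder recovers the labels; "fuzzier" (less distinct) responses lower its accuracy, which is the dedifferentiation logic the study tests.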
2. Klein KE, Harris LA, Humphrey EL, Noss EC, Sanderson AM, Yeager KR. Predictors of Listening-Related Fatigue in Adolescents With Hearing Loss. Lang Speech Hear Serv Sch 2024; 55:724-740. [PMID: 38501931] [DOI: 10.1044/2024_lshss-23-00097]
Abstract
PURPOSE Self-reported listening-related fatigue in adolescents with hearing loss (HL) was investigated. Specifically, the extent to which listening-related fatigue is associated with school accommodations, audiologic characteristics, and listening breaks was examined. METHOD Participants were 144 adolescents with HL ages 12-19 years. Data were collected online via Qualtrics. The Vanderbilt Fatigue Scale-Child was used to measure listening-related fatigue. Participants also reported on their use of listening breaks and school accommodations, including an Individualized Education Program (IEP) or 504 plan, remote microphone systems, closed captioning, preferential seating, sign language interpreters, live transcriptions, and notetakers. RESULTS After controlling for age, HL laterality, and self-perceived listening difficulty, adolescents with an IEP or a 504 plan reported lower listening-related fatigue compared to adolescents without an IEP or a 504 plan. Adolescents who more frequently used remote microphone systems or notetakers reported higher listening-related fatigue compared to adolescents who used these accommodations less frequently, whereas increased use of a sign language interpreter was associated with decreased listening-related fatigue. Among adolescents with unilateral HL, higher age was associated with lower listening-related fatigue; no effect of age was found among adolescents with bilateral HL. Listening-related fatigue did not differ based on hearing device configuration. CONCLUSIONS Adolescents with HL should be considered at risk for listening-related fatigue regardless of the type of hearing devices used or the degree of HL. The individualized support provided by an IEP or 504 plan may help alleviate listening-related fatigue, especially by empowering adolescents with HL to be self-advocates in terms of their listening needs and accommodations in school. 
Additional research is needed to better understand the role of specific school accommodations and listening breaks in addressing listening-related fatigue.
Affiliation(s)
- Kelsey E Klein
- Center for Pediatric Hearing Health Research, The House Institute Foundation, Los Angeles, CA
- Lauren A Harris
- Department of Otolaryngology - Head and Neck Surgery, University of Kentucky, Lexington
- Elizabeth L Humphrey
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville
- Emily C Noss
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville
- Autumn M Sanderson
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville
- Kelly R Yeager
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville
3. Graves EA, Sajjadi A, Hughes ML. A Comparison of Montreal Cognitive Assessment Scores among Individuals with Normal Hearing and Cochlear Implants. Ear Hear 2024; 45:894-904. [PMID: 38334699] [PMCID: PMC11178479] [DOI: 10.1097/aud.0000000000001483]
Abstract
OBJECTIVES The Montreal Cognitive Assessment (MoCA) is a cognitive screening tool that has 4 of 10 test items heavily dependent on auditory input, potentially leaving hearing-impaired (HI) individuals at a disadvantage. Previous work found that HI individuals scored lower than normal-hearing (NH) individuals on the MoCA, potentially attributed to the degraded auditory signals negatively impacting the ability to commit auditory information to memory. However, there is no research comparing how cochlear implant (CI) recipients perform on the MoCA relative to NH and HI individuals. This study aimed to (1) examine the effect of implementing three different hearing-adjusted scoring methods for a group of age-matched CI recipients and NH individuals, (2) determine if there is a difference between the two groups in overall scores and hearing-adjusted scores, and (3) compare scores across our CI and NH data to the published HI data for all scoring methods. We hypothesized that (1) scores for CI recipients would improve with implementation of the hearing-adjusted scoring methods over the original method, (2) CI recipients would score lower than NH participants for both original and adjusted scoring methods, and (3) the difference in scores between NH and CI listeners for both adjusted and unadjusted scores would be greater than that reported in the literature between NH and HI individuals due to the greater severity of hearing loss and relatively poor spectral resolution of CIs. DESIGN A total of 94 adults with CIs and 105 adults with NH were initially enrolled. After age-matching the two groups and excluding those who self-identified as NH but failed a hearing screening, a total of 75 CI participants (mean age 61.2 y) and 74 NH participants (mean age 58.8 y) were administered the MoCA. 
Scores were compared between the NH and CI groups, as well as to published HI data, using the original MoCA scoring method and three alternative scoring methods that excluded various auditory-dependent test items. RESULTS MoCA scores improved for all groups when two of the three alternative scoring methods were used, with no significant interaction between scoring method and group. Scores for CI recipients were significantly poorer than those for age-matched NH participants for all scoring methods. CI recipients scored better than the published data for HI individuals; however, the HI group was not age matched to the CI and NH groups. CONCLUSIONS MoCA scores are only partly affected by the potentially greater cognitive processing required to interpret degraded auditory signals. Even with the removal of the auditory-dependent items, CI recipients still did not perform as well as the age-matched NH group. Importantly, removing auditory-dependent items significantly and fundamentally alters the test, thereby reducing its sensitivity. This has important limitations for administration and interpretation of the MoCA for people with hearing loss.
Affiliation(s)
- Emily A. Graves
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA 68583
- Autefeh Sajjadi
- Creighton University School of Medicine, 2500 California Plaza, Omaha, NE, USA 68178; current affiliation: Department of Otolaryngology-Head & Neck Surgery, University of Minnesota, Minneapolis, MN, USA 55455
- Michelle L. Hughes
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA 68583
4. Svirsky MA, Neukam JD, Capach NH, Amichetti NM, Lavender A, Wingfield A. Communication Under Sharply Degraded Auditory Input and the "2-Sentence" Problem. Ear Hear 2024; 45:1045-1058. [PMID: 38523125] [DOI: 10.1097/aud.0000000000001500]
Abstract
OBJECTIVES Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests. DESIGN Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence. RESULTS Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions. CONCLUSIONS Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. 
This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.
Affiliation(s)
- Mario A Svirsky
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Neuroscience Institute, New York University School of Medicine, New York, New York, USA
- Jonathan D Neukam
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole Hope Capach
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Annette Lavender
- Department of Otolaryngology Head and Neck Surgery, New York University Grossman School of Medicine, New York, New York, USA
- Cochlear Americas, Denver, Colorado, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
5. Baldock J, Kapadia S, van Steenbrugge W, McCarley J. The Effects of Light Level and Signal-to-Noise Ratio on the Task-Evoked Pupil Response in a Speech-in-Noise Task. J Speech Lang Hear Res 2024; 67:1964-1975. [PMID: 38690971] [DOI: 10.1044/2024_jslhr-23-00627]
Abstract
PURPOSE There is increasing interest in the measurement of cognitive effort during listening tasks, for both research and clinical purposes. Quantification of task-evoked pupil responses (TEPRs) is a psychophysiological method that can be used to study cognitive effort. However, light level during cognitively demanding listening tasks may affect TEPRs, complicating interpretation of listening-related changes. The objective of this study was to examine the effects of light level on TEPRs during effortful listening across a range of signal-to-noise ratios (SNRs). METHOD Thirty-six adults without hearing loss were asked to repeat target sentences presented in background babble noise while their pupil diameter was recorded. Light level and SNRs were manipulated in a 4 × 4 repeated-measures design. Repeated-measures analyses of variance were used to measure the effects. RESULTS Peak and mean dilation were typically larger in more adverse SNR conditions (except for SNR -6 dB) and smaller in higher light levels. Differences in mean and peak dilation between SNR conditions were larger in dim light than in brighter light. CONCLUSIONS Brighter light conditions make TEPRs less sensitive to variations in listening effort across levels of SNR. Therefore, light level must be considered and reported in detail to ensure sensitivity of TEPRs and for comparisons of findings across different studies. It is recommended that TEPR testing be conducted in relatively low light conditions, considering both background illumination and screen luminance. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25676538.
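The TEPR summary measures this study analyzes (baseline-corrected peak dilation, mean dilation, and peak latency) can be computed along the following lines. This is a hedged NumPy sketch, not the authors' analysis code; the baseline and analysis-window durations are invented for illustration.

```python
import numpy as np

def tepr_metrics(pupil, fs, baseline_s=1.0, window_s=3.0):
    """Summarize a task-evoked pupil response from one trial's pupil trace.

    pupil      : 1-D array of pupil-diameter samples; stimulus onset is
                 assumed to fall at the end of the baseline period
    fs         : sampling rate in Hz
    baseline_s : seconds of pre-stimulus baseline at the start of the trace
    window_s   : seconds of the post-onset analysis window
    """
    b = int(baseline_s * fs)
    w = int(window_s * fs)
    baseline = pupil[:b].mean()
    response = pupil[b:b + w] - baseline           # baseline-corrected dilation
    return {
        "peak_dilation": float(response.max()),
        "mean_dilation": float(response.mean()),
        "peak_latency_s": float(np.argmax(response) / fs),  # re: stimulus onset
    }
```

Because these metrics are all referenced to the pre-stimulus baseline, anything that shifts the baseline itself, such as the light-level manipulation here, changes what the corrected values can reveal, which is why the paper argues light level must be reported.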
Affiliation(s)
- Sarosh Kapadia
- Flinders University, Adelaide, South Australia, Australia
- Jason McCarley
- Flinders University, Adelaide, South Australia, Australia
- Oregon State University, Corvallis
6. Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024; 36:997-1020. [PMID: 38579256] [DOI: 10.1162/jocn_a_02161]
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences, presented in quiet or in noise, that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. As in prior work with young adults, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults, where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
7. Cychosz M, Winn MB, Goupell MJ. How to vocode: Using channel vocoders for cochlear-implant research. J Acoust Soc Am 2024; 155:2407-2437. [PMID: 38568143] [PMCID: PMC10994674] [DOI: 10.1121/10.0025274]
Abstract
The channel vocoder has become a useful tool for understanding the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear implant users. However, there is such a large variety in the implementation of vocoders that the term "vocoder" is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal processing mistakes. Throughout, we will assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
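The processing chain the paper is about can be sketched minimally as below: filter the signal into bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise carriers. This is an illustrative NumPy sketch only; the brick-wall FFT filters, the 50 Hz envelope cutoff, and the logarithmic band edges are all assumptions, and the paper's point is precisely that such choices (filter slopes, envelope extraction, carrier type, channel count) must be specified.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Brick-wall bandpass via FFT (illustrative; real vocoders use
    finite-slope filters whose order must be reported)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0,
                 env_cutoff=50.0):
    """Noise-carrier channel vocoder: analyze into log-spaced bands,
    extract each band's envelope, and remodulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass_fft(signal, fs, lo, hi)
        # Envelope: half-wave rectify, then lowpass below env_cutoff Hz
        env = bandpass_fft(np.maximum(band, 0.0), fs, 0.0, env_cutoff)
        env = np.maximum(env, 0.0)
        carrier = bandpass_fft(rng.standard_normal(len(signal)), fs, lo, hi)
        out += env * carrier
    return out
```

Lowering `n_channels` or `env_cutoff` degrades spectral and temporal resolution respectively, which is how vocoder studies emulate different aspects of cochlear-implant processing.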
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, California 90095, USA
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, Maryland 20742, USA
8. Cody P, Kumar M, Tzounopoulos T. Cortical Zinc Signaling Is Necessary for Changes in Mouse Pupil Diameter That Are Evoked by Background Sounds with Different Contrasts. J Neurosci 2024; 44:e0939232024. [PMID: 38242698] [PMCID: PMC10941062] [DOI: 10.1523/jneurosci.0939-23.2024]
Abstract
Luminance-independent changes in pupil diameter (PD) during wakefulness influence and are influenced by neuromodulatory, neuronal, and behavioral responses. However, it is unclear whether changes in neuromodulatory activity in a specific brain area are necessary for the associated changes in PD or whether some different mechanisms cause parallel fluctuations in both PD and neuromodulation. To answer this question, we simultaneously recorded PD and cortical neuronal activity in male and female mice. Namely, we measured PD and neuronal activity during adaptation to sound contrast, which is a well-described adaptation conserved in many species and brain areas. In the primary auditory cortex (A1), increases in the variability of sound level (contrast) induce a decrease in the slope of the neuronal input-output relationship, neuronal gain, which depends on cortical neuromodulatory zinc signaling. We found a previously unknown modulation of PD by changes in background sensory context: high stimulus contrast sounds evoke larger increases in evoked PD compared with low-contrast sounds. To explore whether these changes in evoked PD are controlled by cortical neuromodulatory zinc signaling, we imaged single-cell neural activity in A1, manipulated zinc signaling in the cortex, and assessed PD in the same awake mouse. We found that cortical synaptic zinc signaling is necessary for increases in PD during high-contrast background sounds compared with low-contrast sounds. This finding advances our knowledge about how cortical neuromodulatory activity affects PD changes and thus advances our understanding of the brain states, circuits, and neuromodulatory mechanisms that can be inferred from pupil size fluctuations.
Affiliation(s)
- Patrick Cody
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Manoj Kumar
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Thanos Tzounopoulos
- Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
9. Tamati TN, Jebens A, Başkent D. Lexical effects on talker discrimination in adult cochlear implant users. J Acoust Soc Am 2024; 155:1631-1640. [PMID: 38426835] [PMCID: PMC10908561] [DOI: 10.1121/10.0025011]
Abstract
The lexical and phonological content of an utterance impacts the processing of talker-specific details in normal-hearing (NH) listeners. Adult cochlear implant (CI) users demonstrate difficulties in talker discrimination, particularly for same-gender talker pairs, which may alter their reliance on lexical information in talker discrimination. The current study examined the effect of lexical content on talker discrimination in 24 adult CI users. In a remote AX talker discrimination task, word pairs, produced either by the same talker (ST) or by different talkers of the same gender (DT-SG) or mixed genders (DT-MG), were either lexically easy (high frequency, low neighborhood density) or lexically hard (low frequency, high neighborhood density). The task was completed in quiet and in multi-talker babble (MTB). Results showed an effect of lexical difficulty on talker discrimination for same-gender talker pairs in both quiet and MTB. CI users showed greater sensitivity in quiet, as well as less response bias in both quiet and MTB, for lexically easy words compared to lexically hard words. These results suggest that CI users make use of lexical content in same-gender talker discrimination, providing evidence for the contribution of linguistic information to the processing of degraded talker information by adult CI users.
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Ave S, Nashville, Tennessee 37232, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Almut Jebens
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
10. Mechtenberg H, Giorio C, Myers EB. Pupil Dilation Reflects Perceptual Priorities During a Receptive Speech Task. Ear Hear 2024; 45:425-440. [PMID: 37882091] [PMCID: PMC10868674] [DOI: 10.1097/aud.0000000000001438]
Abstract
OBJECTIVES The listening demand incurred by speech perception fluctuates in normal conversation. At the acoustic-phonetic level, natural variation in pronunciation acts as a speedbump to accurate lexical selection. Any given utterance may be more or less phonetically ambiguous, a problem that must be resolved by the listener to choose the correct word. This becomes especially apparent when considering two common speech registers, clear and casual, that have characteristically different levels of phonetic ambiguity. Clear speech prioritizes intelligibility through hyperarticulation, which results in less ambiguity at the phonetic level, while casual speech tends to have a more collapsed acoustic space. We hypothesized that listeners would invest greater cognitive resources while listening to casual speech to resolve the increased amount of phonetic ambiguity, as compared with clear speech. To this end, we used pupillometry as an online measure of listening effort during perception of clear and casual continuous speech in two background conditions: quiet and noise. DESIGN Forty-eight participants performed a probe detection task while listening to spoken, nonsensical sentences (masked and unmasked) while pupil size was recorded. Pupil size was modeled using growth curve analysis to capture the dynamics of the pupil response as the sentence unfolded. RESULTS Pupil size during listening was sensitive to the presence of noise and to speech register (clear/casual). Unsurprisingly, listeners had overall larger pupil dilations during speech perception in noise, replicating earlier work. The pupil dilation pattern for clear and casual sentences was considerably more complex. Pupil dilation during clear speech trials was slightly larger than for casual speech, across quiet and noisy backgrounds. CONCLUSIONS We suggest that listener motivation could explain the larger pupil dilations to clearly spoken speech. We propose that, bounded by the context of this task, listeners devoted more resources to perceiving the speech signal with the greatest acoustic/phonetic fidelity. Further, we unexpectedly found systematic differences in pupil dilation preceding the onset of the spoken sentences. Together, these data demonstrate that the pupillary system is not merely reactive but also adaptive, sensitive to both task structure and listener motivation to maximize accurate perception in a limited-resource system.
Affiliation(s)
- Hannah Mechtenberg
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Cristal Giorio
- Department of Psychology, Pennsylvania State University, State College, Pennsylvania, USA
- Emily B. Myers
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, Connecticut, USA
11. Giuliani NP, Venkitakrishnan S, Wu YH. Input-related demands: vocoded sentences evoke different pupillometrics and subjective listening effort than sentences in speech-shaped noise. Int J Audiol 2024; 63:199-206. [PMID: 36519812] [PMCID: PMC10947987] [DOI: 10.1080/14992027.2022.2150901]
Abstract
OBJECTIVES The Framework for Effortful Listening (FUEL) suggests five input-related demands can alter listening effort: source, transmission, listener, message and context factors. We hypothesised that vocoded sentences represented a source factor degradation and sentences in speech-shaped noise represented a transmission factor degradation. We used pupillometry and a subjective scale to examine our hypothesis. DESIGN Participants listened to vocoded sentences and sentences in speech-shaped noise at several difficulty levels designed to produce similar word recognition abilities; they also listened to unprocessed sentences. Within-participant pupillometrics and subjective listening effort were analysed. Post-hoc analyses were performed to examine if word recognition accuracy differentially influenced pupil responses. STUDY SAMPLES Twenty young adults with normal hearing. RESULTS Baseline pupil diameter was significantly smaller, peak pupil dilation was significantly larger, peak pupil dilation latency was significantly shorter, and subjective listening effort was significantly greater for the vocoded sentences than the sentences-in-noise. Word recognition ability also affected pupillometrics, but only for the vocoded sentences. CONCLUSIONS Our findings suggest that source factor degradations result in greater listening effort than transmission factor degradations. Future research should address how clinical interventions tailored towards different input-related demands may lead to reduced listening effort and improve patient outcomes.
Affiliation(s)
- Nicholas P. Giuliani
- Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Soumya Venkitakrishnan
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Yu-Hsiang Wu
- Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
12. Abramowitz JC, Goupell MJ, Milvae KD. Cochlear-Implant Simulated Signal Degradation Exacerbates Listening Effort in Older Listeners. Ear Hear 2024; 45:441-450. [PMID: 37953469] [PMCID: PMC10922081] [DOI: 10.1097/aud.0000000000001440]
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) often report that listening requires high levels of effort. Listening effort can increase with decreasing spectral resolution, which occurs when listening with a CI, and can also increase with age. What is not clear is whether these factors interact; older CI listeners potentially experience even higher listening effort with greater signal degradation than younger CI listeners. This study used pupillometry as a physiological index of listening effort to examine whether age, spectral resolution, and their interaction affect listening effort in a simulation of CI listening. DESIGN Fifteen younger normal-hearing listeners (ages 18 to 31 years) and 15 older normal-hearing listeners (ages 65 to 75 years) participated in this experiment; they had normal hearing thresholds from 0.25 to 4 kHz. Participants repeated sentences presented in quiet that were either unprocessed or vocoded, simulating CI listening. Stimuli frequency spectra were limited to below 4 kHz (to control for effects of age-related high-frequency hearing loss), and spectral resolution was decreased by decreasing the number of vocoder channels, with 32-, 16-, and 8-channel conditions. Behavioral speech recognition scores and pupil dilation were recorded during this task. In addition, cognitive measures of working memory and processing speed were obtained to examine if individual differences in these measures predicted changes in pupil dilation. RESULTS For trials where the sentence was recalled correctly, there was a significant interaction between age and spectral resolution, with significantly greater pupil dilation in the older normal-hearing listeners for the 8- and 32-channel vocoded conditions. Cognitive measures did not predict pupil dilation. 
CONCLUSIONS There was a significant interaction between age and spectral resolution, such that older listeners appear to exert relatively higher listening effort than younger listeners when the signal is highly degraded, with the largest effects observed in the eight-channel condition. The clinical implication is that older listeners may be at higher risk for increased listening effort with a CI.
Affiliation(s)
- Jordan C. Abramowitz
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742
- Kristina DeRoy Milvae
- Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY 14214
13
Illg A, Adams D, Lesinski-Schiedat A, Lenarz T, Kral A. Variability in Receptive Language Development Following Bilateral Cochlear Implantation. J Speech Lang Hear Res 2024; 67:618-632. [PMID: 38198368] [DOI: 10.1044/2023_jslhr-23-00297]
Abstract
OBJECTIVES The primary aim was to investigate the variability in language development in children aged 5-7.5 years after bilateral cochlear implantation (CI) up to the age of 2 years, and any impact of the age at implantation and of additional noncognitive or anatomical disorders at implantation. DESIGN Data from 84 congenitally deaf children who had received simultaneous bilateral CIs at the age of ≤24 months were included in this retrospective study. Language comprehension acquisition was evaluated using a standardized German language acquisition test for normal-hearing preschoolers and first graders. Data on speech perception of monosyllables and sentences in quiet and noise were added. RESULTS In a monosyllabic test, the children achieved a median performance of 75.0 ± 12.88%. In the sentence test in quiet, the median performance was 89 ± 12.69%, but dropped to 54 ± 18.92% in noise. A simple analysis showed a significant main effect of age at implantation on monosyllabic word comprehension (p < .001), but no significant effect of comorbidities that lacked cognitive effects (p = .24). Language acquisition values corresponded to the normal range of children with normal hearing. Approximately 25% of the variability in the language acquisition tests was explained by the outcome of the monosyllabic speech perception test. CONCLUSIONS Congenitally deaf children who were fitted bilaterally in the 1st year of life can develop age-appropriate language skills by the time they start school. The high variability in the data is partly due to the age at implantation, but additional factors, such as cognitive abilities (e.g., working memory), are likely to influence the variability.
Affiliation(s)
- Angelika Illg
- Department of Otolaryngology, Medical University Hannover, Germany
- Doris Adams
- Department of Otolaryngology, Medical University Hannover, Germany
- Thomas Lenarz
- Department of Otolaryngology, Medical University Hannover, Germany
- Andrej Kral
- Department of Otolaryngology, Medical University Hannover, Germany
14
Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024; 15:1345619. [PMID: 38375107] [PMCID: PMC10875108] [DOI: 10.3389/fpsyg.2024.1345619]
Abstract
Introduction Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. 
Discussion These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
Affiliation(s)
- Laura P. Fitzgerald
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Gayle DeDe
- Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Jing Shen
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
15
McLaughlin DJ, Colvett JS, Bugg JM, Van Engen KJ. Sequence effects and speech processing: cognitive load for speaker-switching within and across accents. Psychon Bull Rev 2024; 31:176-186. [PMID: 37442872] [PMCID: PMC10867039] [DOI: 10.3758/s13423-023-02322-1]
Abstract
Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cognitive load, to examine the demands of processing first (L1) and second (L2) language-accented speech when listening to sentences produced by the same speaker consecutively (no switch), a novel speaker of the same accent (within-accent switch), and a novel speaker with a different accent (across-accent switch). Inspired by research on sequential adjustments in cognitive control, we aimed to identify the cognitive demands of accommodating a novel speaker and accent by examining the trial-to-trial changes in pupil dilation during speech processing. Our results indicate that switching between speakers was more cognitively demanding than listening to the same speaker consecutively. Additionally, switching to a novel speaker with a different accent was more cognitively demanding than switching between speakers of the same accent. However, there was an asymmetry for across-accent switches, such that switching from an L1 to an L2 accent was more demanding than vice versa. Findings from the present study align with work examining multi-talker processing costs, and provide novel evidence that listeners dynamically adjust cognitive processing to accommodate speaker and accent variability. We discuss these novel findings in the context of an active control model and auditory streaming framework of speech processing.
Affiliation(s)
- Drew J McLaughlin
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St Louis, MO, USA
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Gipuzkoa, Spain
- Jackson S Colvett
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St Louis, MO, USA
- Julie M Bugg
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St Louis, MO, USA
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St Louis, MO, USA
16
Hu J, Vetter P. How the eyes respond to sounds. Ann N Y Acad Sci 2024; 1532:18-36. [PMID: 38152040] [DOI: 10.1111/nyas.15093]
Abstract
Eye movements have been extensively studied with respect to visual stimulation. However, we live in a multisensory world, and how the eyes are driven by other senses has been explored much less. Here, we review the evidence on how audition can trigger and drive different eye responses and which cortical and subcortical neural correlates are involved. We provide an overview on how different types of sounds, from simple tones and noise bursts to spatially localized sounds and complex linguistic stimuli, influence saccades, microsaccades, smooth pursuit, pupil dilation, and eye blinks. The reviewed evidence reveals how the auditory system interacts with the oculomotor system, both behaviorally and neurally, and how this differs from visually driven eye responses. Some evidence points to multisensory interaction, and potential multisensory integration, but the underlying computational and neural mechanisms are still unclear. While there are marked differences in how the eyes respond to auditory compared to visual stimuli, many aspects of auditory-evoked eye responses remain underexplored, and we summarize the key open questions for future research.
Affiliation(s)
- Junchao Hu
- Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Petra Vetter
- Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
17
Zhang Y, Callejón-Leblic MA, Picazo-Reina AM, Blanco-Trejo S, Patou F, Sánchez-Gómez S. Impact of SNR, peripheral auditory sensitivity, and central cognitive profile on the psychometric relation between pupillary response and speech performance in CI users. Front Neurosci 2023; 17:1307777. [PMID: 38188029] [PMCID: PMC10768066] [DOI: 10.3389/fnins.2023.1307777]
Abstract
Despite substantial technical advances and wider clinical use, cochlear implant (CI) users continue to report elevated listening effort, especially under challenging noisy conditions. Among the objective measures used to quantify listening effort, pupillometry is one of the most widely used and robust physiological measures. Previous studies with normal-hearing (NH) and hearing-impaired (HI) listeners have shown that the relation between speech performance in noise and listening effort (as measured by peak pupil dilation) is not linear and exhibits an inverted-U shape. However, it is unclear whether the same psychometric relation exists in CI users, and whether individual differences in auditory sensitivity and central cognitive capacity affect this relation. We therefore recruited 17 post-lingually deaf CI adults to perform speech-in-noise tasks from 0 to 20 dB SNR with a 4 dB step size. Simultaneously, their pupillary responses and self-reported subjective effort were recorded. To characterize top-down and bottom-up individual variability, a spectro-temporal modulation task and a set of cognitive measures were administered. Clinical word recognition in quiet and quality of life (QoL) were also collected. Results showed that, at the group level, an inverted-U shaped psychometric curve between task difficulty (SNR) and peak pupil dilation (PPD) was not observed. The individual shape of the psychometric curve was significantly associated with several individual factors: CI users with higher clinical word and speech-in-noise recognition showed a quadratic decrease of PPD over increasing SNRs, and CI users with better non-verbal intelligence and lower QoL showed smaller average PPD. In summary, individual differences among CI users had a significant impact on the psychometric relation between pupillary response and task difficulty, affecting the interpretation of pupillary response as listening effort (or engagement) at different task difficulty levels.
Future research and clinical applications should further characterize the possible effects of individual factors (such as motivation or engagement) in modulating the occurrence of a 'tipping point' in CI users' psychometric functions, and develop individualized methods for reliably quantifying listening effort using pupillometry.
Affiliation(s)
- Yue Zhang
- Department of Research and Technology, Oticon Medical, Vallauris, France
- M. Amparo Callejón-Leblic
- Oticon Medical, Madrid, Spain
- ENT Department, Virgen Macarena University Hospital, Seville, Spain
- Biomedical Engineering Group, University of Seville, Seville, Spain
- François Patou
- Department of Research and Technology, Oticon Medical, Smørum, Denmark
18
Kraus F, Obleser J, Herrmann B. Pupil Size Sensitivity to Listening Demand Depends on Motivational State. eNeuro 2023; 10:ENEURO.0288-23.2023. [PMID: 37989588] [PMCID: PMC10734370] [DOI: 10.1523/eneuro.0288-23.2023]
Abstract
Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated that pupil-linked arousal is sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and to examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times at a trial-by-trial, within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state.
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto M6A 2E1, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto M5S 3G3, Ontario, Canada
19
Carraturo S, McLaughlin DJ, Peelle JE, Van Engen KJ. Pupillometry reveals differences in cognitive demands of listening to face mask-attenuated speech. J Acoust Soc Am 2023; 154:3973-3985. [PMID: 38149818] [DOI: 10.1121/10.0023953]
Abstract
Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in response to each mask relative to the no-mask condition and differed significantly where acoustic attenuation was most prominent. These results suggest that the acoustic impact of the mask drives not only the intelligibility of speech, but also the cognitive demands of listening. Subjective effort ratings reflected the same trends as the pupil data.
Affiliation(s)
- Sita Carraturo
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA
- Drew J McLaughlin
- Basque Center on Cognition, Brain and Language, San Sebastian, Basque Country 20009, Spain
- Jonathan E Peelle
- Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts 02115, USA
- Kristin J Van Engen
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA
20
Cychosz M, Xu K, Fu QJ. Effects of spectral smearing on speech understanding and masking release in simulated bilateral cochlear implants. PLoS One 2023; 18:e0287728. [PMID: 37917727] [PMCID: PMC10621938] [DOI: 10.1371/journal.pone.0287728]
Abstract
Differences in spectro-temporal degradation may explain some variability in cochlear implant users' speech outcomes. The present study employs vocoder simulations on listeners with typical hearing to evaluate how differences in degree of channel interaction across ears affects spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. 16-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction, in dichotic and diotic target-masker conditions, across ears. Thresholds were highest with broad channel interaction in both ears but improved when interaction decreased in one ear and again in both ears. Masking release was apparent across conditions. Results from this simulation study on listeners with typical hearing show that channel interaction may impact speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA, United States of America
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
21
Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023; 14:1232262. [PMID: 38023001] [PMCID: PMC10646334] [DOI: 10.3389/fpsyg.2023.1232262]
Abstract
Introduction The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder. Results No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
Affiliation(s)
- Julia S. C. Chiossi
- Oticon A/S, Smørum, Denmark
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Elaine Hoi Ning Ng
- Oticon A/S, Smørum, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lyxell
- Department of Special Needs Education, University of Oslo, Oslo, Norway
22
Skidmore J, Oleson JJ, Yuan Y, He S. The Relationship Between Cochlear Implant Speech Perception Outcomes and Electrophysiological Measures of the Electrically Evoked Compound Action Potential. Ear Hear 2023; 44:1485-1497. [PMID: 37194125] [DOI: 10.1097/aud.0000000000001389]
Abstract
OBJECTIVE This study assessed the relationship between electrophysiological measures of the electrically evoked compound action potential (eCAP) and speech perception scores measured in quiet and in noise in postlingually deafened adult cochlear implant (CI) users. It tested the hypothesis that how well the auditory nerve (AN) responds to electrical stimulation is important for speech perception with a CI in challenging listening conditions. DESIGN Study participants included 24 postlingually deafened adult CI users. All participants used Cochlear Nucleus CIs in their test ears. In each participant, eCAPs were measured at multiple electrode locations in response to single-pulse, paired-pulse, and pulse-train stimuli. Independent variables included six metrics calculated from the eCAP recordings: the electrode-neuron interface (ENI) index, the neural adaptation (NA) ratio, NA speed, the adaptation recovery (AR) ratio, AR speed, and the amplitude modulation (AM) ratio. The ENI index quantified the effectiveness of the CI electrodes in stimulating the targeted AN fibers. The NA ratio indicated the amount of NA at the AN caused by a train of constant-amplitude pulses. NA speed was defined as the speed/rate of NA. The AR ratio estimated the amount of recovery from NA at a fixed time point after the cessation of pulse-train stimulation. AR speed referred to the speed of recovery from NA caused by previous pulse-train stimulation. The AM ratio provided a measure of AN sensitivity to AM cues. Participants' speech perception scores were measured using Consonant-Nucleus-Consonant (CNC) word lists and AzBio sentences presented in quiet, as well as in noise at signal-to-noise ratios (SNRs) of +10 and +5 dB. Predictive models were created for each speech measure to identify eCAP metrics with meaningful predictive power. 
RESULTS The ENI index and AR speed individually explained at least 10% of the variance in most of the speech perception scores measured in this study, while the NA ratio, NA speed, the AR ratio, and the AM ratio did not. The ENI index was identified as the only eCAP metric that had unique predictive power for each of the speech test results. The amount of variance in speech perception scores (both CNC words and AzBio sentences) explained by the eCAP metrics increased with increased difficulty under the listening condition. Over half of the variance in speech perception scores measured in +5 dB SNR noise (both CNC words and AzBio sentences) was explained by a model with only three eCAP metrics: the ENI index, NA speed, and AR speed. CONCLUSIONS Of the six electrophysiological measures assessed in this study, the ENI index is the most informative predictor for speech perception performance in CI users. In agreement with the tested hypothesis, the response characteristics of the AN to electrical stimulation are more important for speech perception with a CI in noise than they are in quiet.
Affiliation(s)
- Jeffrey Skidmore
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Jacob J Oleson
- Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
- Yi Yuan
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Shuman He
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio, USA
- Department of Audiology, Nationwide Children's Hospital, Columbus, Ohio, USA
23
Simantiraki O, Wagner AE, Cooke M. The impact of speech type on listening effort and intelligibility for native and non-native listeners. Front Neurosci 2023; 17:1235911. [PMID: 37841688] [PMCID: PMC10568627] [DOI: 10.3389/fnins.2023.1235911]
Abstract
Listeners are routinely exposed to many different types of speech, including artificially-enhanced and synthetic speech, styles which deviate to a greater or lesser extent from naturally-spoken exemplars. While the impact of differing speech types on intelligibility is well-studied, it is less clear how such types affect cognitive processing demands, and in particular whether those speech forms with the greatest intelligibility in noise have a commensurately lower listening effort. The current study measured intelligibility, self-reported listening effort, and a pupillometry-based measure of cognitive load for four distinct types of speech: (i) plain, i.e., natural unmodified speech; (ii) Lombard speech, a naturally-enhanced form which occurs when speaking in the presence of noise; (iii) artificially-enhanced speech involving spectral shaping and dynamic range compression; and (iv) speech synthesized from text. In the first experiment, a cohort of 26 native listeners responded to the four speech types in three levels of speech-shaped noise. In a second experiment, 31 non-native listeners underwent the same procedure at more favorable signal-to-noise ratios, chosen since second-language listening in noise has a more detrimental effect on intelligibility than listening in a first language. For both native and non-native listeners, artificially-enhanced speech was the most intelligible and led to the lowest subjective effort ratings, while the reverse was true for synthetic speech. However, pupil data suggested that Lombard speech elicited the lowest processing demands overall. These outcomes indicate that the relationship between intelligibility and cognitive processing demands is not a simple inverse, but is mediated by speech type. The findings of the current study motivate the search for speech modification algorithms that are optimized for both intelligibility and listening effort.
Affiliation(s)
- Olympia Simantiraki
- Institute of Applied and Computational Mathematics, Foundation for Research & Technology-Hellas, Heraklion, Greece
- Anita E. Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Martin Cooke
- Ikerbasque (Basque Science Foundation), Vitoria-Gasteiz, Spain
24
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. bioRxiv 2023:2023.08.13.553131. [PMID: 37645975] [PMCID: PMC10462058] [DOI: 10.1101/2023.08.13.553131]
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and on top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing, envelope-following responses (EFRs) to amplitude-modulated tones, and investigate their interactions with pupil-indexed listening effort, as they relate to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusted electrode montages for different modulation rate ranges, which extended the range of reliable EFR measurements to as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNRs), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model combining EFR and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as it relates to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
25
Cui ME, Herrmann B. Eye Movements Decrease during Effortful Speech Listening. J Neurosci 2023; 43:5856-5869. [PMID: 37491313 PMCID: PMC10423048 DOI: 10.1523/jneurosci.0240-23.2023] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 06/09/2023] [Accepted: 07/18/2023] [Indexed: 07/27/2023] Open
Abstract
Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assessing listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants of both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for the assessment of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful. SIGNIFICANCE STATEMENT Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used approach but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.
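The two eye-movement measures named in this abstract are straightforward to compute. The sketch below uses hypothetical gaze samples (not the authors' code or data): mean fixation duration from a list of fixation intervals, and spatial gaze dispersion as the RMS distance of gaze samples from their centroid:

```python
# Illustrative eye-movement summary measures with toy data.
import math

def mean_fixation_duration(fixations):
    """fixations: list of (start_s, end_s) intervals; returns mean length."""
    durations = [end - start for start, end in fixations]
    return sum(durations) / len(durations)

def gaze_dispersion(samples):
    """samples: (x, y) gaze coordinates (e.g., degrees of visual angle);
    returns RMS distance of samples from their centroid."""
    cx = sum(x for x, _ in samples) / len(samples)
    cy = sum(y for _, y in samples) / len(samples)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in samples) / len(samples))

# Toy comparison: under easy listening the gaze roams widely; under
# effortful listening it stays near fixation and dwells longer.
easy_gaze = [(-5, 0), (5, 0), (0, -5), (0, 5)]
hard_gaze = [(-1, 0), (1, 0), (0, -1), (0, 1)]
easy_fix = [(0.0, 0.2), (0.3, 0.5)]
hard_fix = [(0.0, 0.6), (0.7, 1.4)]

print("dispersion easy vs hard:",
      gaze_dispersion(easy_gaze), gaze_dispersion(hard_gaze))
print("fixation duration easy vs hard:",
      mean_fixation_duration(easy_fix), mean_fixation_duration(hard_fix))
```

Under the study's hypothesis, the "hard" pattern (lower dispersion, longer fixations) would accompany greater speech masking.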
Affiliation(s)
- M Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
26
Patro C, Bennaim A, Shephard E. Effects of spectral degradation on gated word recognition. JASA EXPRESS LETTERS 2023; 3:084401. [PMID: 37561082 DOI: 10.1121/10.0020646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 07/28/2023] [Indexed: 08/11/2023]
Abstract
Although much is known about how normal-hearing listeners process spoken words under ideal listening conditions, little is known about how a degraded signal, such as speech transmitted via cochlear implants, affects the word recognition process. In this study, gated word recognition performance was measured with the goal of describing the time course of word identification by using a noise-band vocoder simulation. The results of this study demonstrate that spectral degradations can impact the temporal aspects of speech processing. These results also provide insights into the potential advantages of enhancing spectral resolution in the processing of spoken words.
Affiliation(s)
- Chhayakanta Patro
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252
- Ariana Bennaim
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252
- Ellen Shephard
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252
27
Martohardjono G, Johns MA, Franciotti P, Castillo D, Porru I, Lowry C. Use of the first-acquired language modulates pupil size in the processing of island constraint violations. Front Psychol 2023; 14:1180989. [PMID: 37519378 PMCID: PMC10382202 DOI: 10.3389/fpsyg.2023.1180989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 06/15/2023] [Indexed: 08/01/2023] Open
Abstract
Introduction Traditional studies of the population called "heritage speakers" (HS) have treated this group as distinct from other bilingual populations, e.g., simultaneous or late bilinguals (LB), focusing on group differences in the competencies of the first-acquired language or "heritage language". While several explanations have been proposed for such differences (e.g., incomplete acquisition, attrition, differential processing mechanisms), few have taken into consideration the individual variation that must occur, due to the fluctuation of factors such as exposure and use that characterize all bilinguals. In addition, few studies have used implicit measures, e.g., psychophysiological methods (ERPs; eye-tracking), that can circumvent confounding variables such as resorting to conscious metalinguistic knowledge. Methodology This study uses pupillometry, a method that has only recently been used in psycholinguistic studies of bilingualism, to investigate pupillary responses to three syntactic island constructions in two groups of Spanish/English bilinguals: heritage speakers and late bilinguals. Data were analyzed using generalized additive mixed effects models (GAMMs) and two models were created and compared to one another: one with group (LB/HS) and the other with groups collapsed and current and historical use of Spanish as continuous variables. Results Results show that group-based models generally yield conflicting results while models collapsing groups and having usage as a predictor yield consistent ones. In particular, current use predicts sensitivity to L1 ungrammaticality across both HS and LB populations. We conclude that individual variation, as measured by use, is a critical factor that must be taken into account in the description of the language competencies and processing of heritage and late bilinguals alike.
Affiliation(s)
- Gita Martohardjono
- Department of Linguistics and Communication Disorders, Queens College, New York, NY, United States
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Michael A. Johns
- Institute for Systems Research, University of Maryland, College Park, MD, United States
- Pamela Franciotti
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Daniela Castillo
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Ilaria Porru
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Cass Lowry
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
28
Perea Pérez F, Hartley DEH, Kitterick PT, Zekveld AA, Naylor G, Wiggins IM. Listening efficiency in adult cochlear-implant users compared with normally-hearing controls at ecologically relevant signal-to-noise ratios. Front Hum Neurosci 2023; 17:1214485. [PMID: 37520928 PMCID: PMC10379644 DOI: 10.3389/fnhum.2023.1214485] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Accepted: 06/23/2023] [Indexed: 08/01/2023] Open
Abstract
Introduction Due to having to work with an impoverished auditory signal, cochlear-implant (CI) users may experience reduced speech intelligibility and/or increased listening effort in real-world listening situations, compared to their normally-hearing (NH) peers. These two challenges to perception may be usefully integrated in a measure of listening efficiency: conceptually, the amount of accuracy achieved for a certain amount of effort expended. Methods We describe a novel approach to quantifying listening efficiency based on the rate of evidence accumulation toward a correct response in a linear ballistic accumulator (LBA) model of choice decision-making. Estimation of this objective measure within a hierarchical Bayesian framework confers further benefits, including full quantification of uncertainty in parameter estimates. We applied this approach to examine the speech-in-noise performance of a group of 24 CI users (M age: 60.3, range: 20-84 years) and a group of 25 approximately age-matched NH controls (M age: 55.8, range: 20-79 years). In a laboratory experiment, participants listened to reverberant target sentences in cafeteria noise at ecologically relevant signal-to-noise ratios (SNRs) of +20, +10, and +4 dB. Individual differences in cognition and self-reported listening experiences were also characterised by means of cognitive tests and hearing questionnaires. Results At the group level, the CI group showed much lower listening efficiency than the NH group, even in favourable acoustic conditions. At the individual level, within the CI group (but not the NH group), higher listening efficiency was associated with better cognition (i.e., working-memory and linguistic-closure) and with more positive self-reported listening experiences, both in the laboratory and in daily life. Discussion We argue that listening efficiency, measured using the approach described here, is: (i) conceptually well-motivated, in that it is theoretically impervious to differences in how individuals approach the speed-accuracy trade-off that is inherent to all perceptual decision making; and (ii) of practical utility, in that it is sensitive to differences in task demand, and to differences between groups, even when speech intelligibility remains at or near ceiling level. Further research is needed to explore the sensitivity and practical utility of this metric across diverse listening situations.
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E. H. Hartley
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom
- Pádraig T. Kitterick
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- National Acoustic Laboratories, Sydney, NSW, Australia
- Adriana A. Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Graham Naylor
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M. Wiggins
- National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
29
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148 PMCID: PMC10328119 DOI: 10.3389/fpsyg.2023.1188485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 06/05/2023] [Indexed: 07/11/2023] Open
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high accuracy condition (+10 dB and +6 dB for children and adults, respectively) and a low accuracy condition (+5 dB and +2 dB for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low accuracy condition. In the second phase (retention), only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory-processing peak dilation. These findings support the notion of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children in order to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
30
Baş B, Yücel E. Sensory profiles of children using cochlear implant and auditory brainstem implant. Int J Pediatr Otorhinolaryngol 2023; 170:111584. [PMID: 37224736 DOI: 10.1016/j.ijporl.2023.111584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 04/18/2023] [Accepted: 04/29/2023] [Indexed: 05/26/2023]
Affiliation(s)
- Banu Baş
- Ankara Yıldırım Beyazıt University, Faculty of Health Sciences, Department of Audiology, Ankara, Turkey.
- Esra Yücel
- Hacettepe University, Faculty of Health Sciences, Department of Audiology, Ankara, Turkey.
31
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884 PMCID: PMC10147513 DOI: 10.1055/s-0043-1766105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/09/2023] Open
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works and be able to summarize its uses for listening effort research. The learner will also be able to apply this knowledge toward generation of future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
32
Winn MB. Time Scales and Moments of Listening Effort Revealed in Pupillometry. Semin Hear 2023; 44:106-123. [PMID: 37122881 PMCID: PMC10147502 DOI: 10.1055/s-0043-1767741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/07/2023] Open
Abstract
This article offers a collection of observations that highlight the value of time course data in pupillometry and points out ways in which these observations create deeper understanding of listening effort. The main message is that listening effort should be considered on a moment-to-moment basis rather than as a singular amount. A review of various studies and the reanalysis of data reveal distinct signatures of effort before a stimulus, during a stimulus, in the moments after a stimulus, and changes over whole experimental testing sessions. Collectively these observations motivate questions that extend beyond the "amount" of effort, toward understanding how long the effort lasts, and how precisely someone can allocate effort at specific points in time or reduce effort at other times. Apparent disagreements between studies are reconsidered as informative lessons about stimulus selection and the nature of pupil dilation as a reflection of decision making rather than the difficulty of sensory encoding.
Affiliation(s)
- Matthew B. Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota
33
Neagu MB, Kressner AA, Relaño-Iborra H, Bækgaard P, Dau T, Wendt D. Investigating the Reliability of Pupillometry as a Measure of Individualized Listening Effort. Trends Hear 2023; 27:23312165231153288. [PMCID: PMC9947699 DOI: 10.1177/23312165231153288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023] Open
Abstract
Recordings of the pupillary response have been used in numerous studies to assess listening effort during a speech-in-noise task. Most studies focused on averaged responses across listeners, whereas less is known about pupil dilation as an indicator of the individual's listening effort. The present study investigated the reliability of several pupil features as potential indicators of individual listening effort and the impact of different normalization procedures on the reliability. The pupil diameters of 31 normal-hearing listeners were recorded during multiple visits while performing a speech-in-noise task. The signal-to-noise ratios (SNRs) of the stimuli ranged from −12 dB to +4 dB. All listeners were measured twice at separate visits, and 11 were re-tested at a third visit. To examine the reliability of the pupil responses across visits, the intraclass correlation coefficient was applied to the peak and mean pupil dilation and to the temporal features of the pupil response, extracted using growth curve analysis. The reliability of the pupillary response was assessed in relation to SNR and different normalization procedures over multiple visits. The most reliable pupil features were the traditional mean and peak pupil dilation. The highest reliability results were obtained when the data were baseline-corrected and normalized to the individual pupil response range across all visits. Overall, the present study results showed only a minor impact of the SNR and the number of visits on the reliability of the pupil response, and may provide an important basis for developing a standardized test for pupillometry in the clinic.
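The preprocessing and reliability pipeline this abstract describes can be sketched in a few lines. The following is a hedged illustration with toy numbers (not the study's code): baseline correction, normalization to an individual's pupil range across visits, and a test-retest intraclass correlation, here the ICC(3,1) consistency form, computed from a subjects-by-visits table:

```python
# Illustrative pupillometry reliability sketch; data and ICC form are
# assumptions for the example, not taken from the study.

def baseline_correct(trace, n_baseline=10):
    """Subtract the mean of the pre-stimulus baseline samples."""
    base = sum(trace[:n_baseline]) / n_baseline
    return [x - base for x in trace]

def normalize_to_range(trace, lo, hi):
    """Scale a trace to an individual's pupil range across all visits."""
    return [(x - lo) / (hi - lo) for x in trace]

def icc_3_1(data):
    """ICC(3,1) from an n_subjects x k_visits table via two-way ANOVA sums."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_tot - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy example: peak pupil dilation for 5 listeners at 2 visits, where visit 2
# reproduces visit 1 up to a constant session offset -> perfect consistency.
peaks_visit1 = [0.21, 0.35, 0.18, 0.44, 0.29]
peaks_visit2 = [p + 0.05 for p in peaks_visit1]
table = [[a, b] for a, b in zip(peaks_visit1, peaks_visit2)]
print("ICC(3,1):", icc_3_1(table))
```

ICC(3,1) ignores constant visit-to-visit offsets; an absolute-agreement form such as ICC(2,1) would penalize them, which is one reason the choice of ICC form matters in test-retest work.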
Affiliation(s)
- Mihaela-Beatrice Neagu
- Department of Health Technology, DTU Hearing Systems, Denmark
- Abigail A. Kressner
- Department of Health Technology, DTU Hearing Systems, Denmark
- Copenhagen Hearing and Balance Centre, Rigshospitalet, Copenhagen University Hospital, Denmark
- Helia Relaño-Iborra
- Department of Health Technology, DTU Hearing Systems, Denmark
- Department of Applied Mathematics and Computer Science, DTU Cognitive Systems, Denmark
- Per Bækgaard
- Department of Applied Mathematics and Computer Science, DTU Cognitive Systems, Denmark
- Torsten Dau
- Department of Health Technology, DTU Hearing Systems, Denmark
- Copenhagen Hearing and Balance Centre, Rigshospitalet, Copenhagen University Hospital, Denmark
- Dorothea Wendt
- Department of Health Technology, DTU Hearing Systems, Denmark
- Eriksholm Research Centre, Denmark
34
Hershman R, Milshtein D, Henik A. The contribution of temporal analysis of pupillometry measurements to cognitive research. PSYCHOLOGICAL RESEARCH 2023; 87:28-42. [PMID: 35178621 DOI: 10.1007/s00426-022-01656-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Accepted: 01/26/2022] [Indexed: 01/27/2023]
Abstract
Reaction time (RT) is one of the most frequently used measures to detect cognitive processes. When tasks require more cognitive processes/resources, reactions are slower. However, RTs may provide only restricted information regarding the temporal characteristics of cognitive processes. Pupils respond reflexively to light but also to cognitive activation. The more cognitive resources a task requires, the more the pupil dilates. However, although temporal changes in pupil size can be recorded (advanced devices measure changes in pupil diameter at sampling rates above 1000 samples per second), most past studies using pupil dilation have not investigated temporal changes in pupil response. In the current paper, we discuss the advantage of the temporal approach to analyzing pupil changes compared to a more traditional perspective, specifically, singular-value methods such as the mean value and peak amplitude value. Using data from two recent studies conducted in our laboratory, we demonstrate the differences in findings arising from the various analyses. In particular, we focus on the advantage of temporal analysis in detecting hidden effects, investigating temporal characterizations of the effects, and validating the experimental manipulation.
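The argument for temporal analysis can be made concrete with a synthetic example (hypothetical traces, not the authors' data): two conditions whose mean and peak pupil dilation are essentially identical, so singular-value measures show no effect, while a sample-by-sample comparison reveals that the conditions diverge in *when* the dilation occurs:

```python
# Synthetic demonstration of a "hidden effect": equal mean and peak pupil
# dilation, but clearly different temporal profiles.
import math

def gaussian_pulse(t, center, width=0.6, amp=1.0):
    """A smooth dilation pulse standing in for an event-locked pupil response."""
    return amp * math.exp(-((t - center) ** 2) / (2 * width ** 2))

timepoints = [i * 0.05 for i in range(200)]          # 0..10 s sampled at 20 Hz
cond_early = [gaussian_pulse(t, center=3.0) for t in timepoints]
cond_late = [gaussian_pulse(t, center=6.0) for t in timepoints]

# Singular-value measures: mean and peak dilation per condition
mean_early = sum(cond_early) / len(cond_early)
mean_late = sum(cond_late) / len(cond_late)
peak_early, peak_late = max(cond_early), max(cond_late)

# Temporal analysis: the pointwise difference wave across the trial
diff_wave = [a - b for a, b in zip(cond_early, cond_late)]
max_divergence = max(abs(d) for d in diff_wave)

print("mean difference:", abs(mean_early - mean_late))
print("peak difference:", abs(peak_early - peak_late))
print("max pointwise divergence:", max_divergence)
```

Here the mean and peak differences are near zero while the pointwise difference wave approaches the full pulse amplitude, which is the kind of effect a temporal analysis detects and a singular-value analysis misses.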
Affiliation(s)
- Ronen Hershman
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva, Israel.
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
- Dalit Milshtein
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Avishai Henik
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
35
Short Implicit Voice Training Affects Listening Effort During a Voice Cue Sensitivity Task With Vocoder-Degraded Speech. Ear Hear 2023:00003446-990000000-00113. [PMID: 36695603 PMCID: PMC10262993 DOI: 10.1097/aud.0000000000001335] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
OBJECTIVES Understanding speech in real life can be challenging and effortful, such as in multiple-talker listening conditions. Fundamental frequency (fo) and vocal-tract length (vtl) voice cues can help listeners segregate talkers, enhancing speech perception in adverse listening conditions. Previous research showed lower sensitivity to fo and vtl voice cues when the speech signal was degraded, such as in cochlear implant hearing and vocoder listening compared to normal hearing, likely contributing to difficulties in understanding speech in adverse listening conditions. Nevertheless, when multiple talkers are present, familiarity with a talker's voice, via training or exposure, could provide a speech intelligibility benefit. In this study, the objective was to assess how implicit short-term voice training affects perceptual discrimination of voice cues (fo+vtl), measured in sensitivity and listening effort, with or without vocoder degradations. DESIGN Voice training was provided via listening to a recording of a book segment for approximately 30 min, with text-related questions to be answered to ensure engagement. Just-noticeable differences (JNDs) for fo+vtl were measured with an odd-one-out task implemented as a 3-alternative forced-choice adaptive paradigm, while pupil data were simultaneously collected. The reference voice either belonged to the trained voice or an untrained voice. Effects of voice training (trained and untrained voice), vocoding (non-vocoded and vocoded), and item variability (fixed or variable consonant-vowel triplets presented across three items) on voice cue sensitivity (fo+vtl JNDs) and listening effort (pupillometry measurements) were analyzed. RESULTS Results showed that voice training did not have a significant effect on voice cue discrimination. As expected, fo+vtl JNDs were significantly larger for vocoded conditions than for non-vocoded conditions and with variable item presentations than with fixed item presentations. Generalized additive mixed model analysis of pupil dilation over the time course of stimulus presentation showed that pupil dilation was significantly larger during fo+vtl discrimination while listening to untrained voices compared with trained voices, but only for vocoder-degraded speech. Peak pupil dilation was significantly larger for vocoded conditions compared with non-vocoded conditions, and variable items increased the pupil baseline relative to fixed items, which could suggest a higher anticipated task difficulty. CONCLUSIONS In this study, even though short voice training did not lead to improved sensitivity to small fo+vtl voice cue differences at the discrimination threshold level, voice training still resulted in reduced listening effort for discrimination among vocoded voice cues.
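Adaptive JND procedures of the kind named in this abstract (odd-one-out, 3-alternative forced choice) are commonly run as transformed staircases. The sketch below is an assumption-laden illustration, not the study's implementation: a 2-down/1-up track, which converges near the 70.7%-correct point, run against a simulated listener whose true JND is set by hand:

```python
# Illustrative 2-down/1-up adaptive staircase for a 3AFC discrimination task.
# The listener model, step rule, and all parameters are assumptions.
import math
import random

random.seed(3)
TRUE_JND = 4.0  # hypothetical listener threshold (arbitrary cue-difference units)

def simulated_response(delta):
    """Simulated 3AFC trial: P(correct) rises from chance (1/3) toward 1
    as the cue difference `delta` exceeds the listener's JND."""
    p_correct = 1 / 3 + (2 / 3) / (1 + math.exp(-(delta - TRUE_JND)))
    return random.random() < p_correct

def run_staircase(start=16.0, step=2.0, min_step=1.06, n_reversals=8):
    """Multiplicative 2-down/1-up track; the JND estimate is the geometric
    mean of the last four reversal points."""
    delta, correct_in_row, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_response(delta):
            correct_in_row += 1
            if correct_in_row == 2:      # two correct in a row: make it harder
                correct_in_row = 0
                if direction == +1:      # direction change: record a reversal
                    reversals.append(delta)
                    step = max(min_step, step ** 0.5)
                direction = -1
                delta /= step
        else:                            # one error: make it easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(delta)
                step = max(min_step, step ** 0.5)
            direction = +1
            delta *= step
    last = reversals[-4:]
    return math.exp(sum(math.log(d) for d in last) / len(last))

jnd_estimate = run_staircase()
print("estimated JND:", round(jnd_estimate, 2))
```

With a fixed random seed the track settles near the simulated threshold; in a real experiment the same machinery would drive the fo+vtl difference between the reference and the odd-one-out stimulus.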
36
Pernia M, Kar M, Montes-Lourido P, Sadagopan S. Pupillometry to Assess Auditory Sensation in Guinea Pigs. J Vis Exp 2023:10.3791/64581. [PMID: 36688548 PMCID: PMC9929667 DOI: 10.3791/64581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Noise exposure is a leading cause of sensorineural hearing loss. Animal models of noise-induced hearing loss have generated mechanistic insight into the underlying anatomical and physiological pathologies of hearing loss. However, relating behavioral deficits observed in humans with hearing loss to behavioral deficits in animal models remains challenging. Here, pupillometry is proposed as a method that will enable the direct comparison of animal and human behavioral data. The method is based on a modified oddball paradigm - habituating the subject to the repeated presentation of a stimulus and intermittently presenting a deviant stimulus that varies in some parametric fashion from the repeated stimulus. The fundamental premise is that if the change between the repeated and deviant stimulus is detected by the subject, it will trigger a pupil dilation response that is larger than that elicited by the repeated stimulus. This approach is demonstrated using a vocalization categorization task in guinea pigs, an animal model widely used in auditory research, including in hearing loss studies. By presenting vocalizations from one vocalization category as standard stimuli and a second category as oddball stimuli embedded in noise at various signal-to-noise ratios, it is demonstrated that the magnitude of pupil dilation in response to the oddball category varies monotonically with the signal-to-noise ratio. Growth curve analyses can then be used to characterize the time course and statistical significance of these pupil dilation responses. In this protocol, detailed procedures for acclimating guinea pigs to the setup, conducting pupillometry, and evaluating/analyzing data are described. Although this technique is demonstrated in normal-hearing guinea pigs in this protocol, the method may be used to assess the sensory effects of various forms of hearing loss within each subject. These effects may then be correlated with concurrent electrophysiological measures and post-hoc anatomical observations.
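The logic of the oddball analysis, that a detected deviant evokes a larger pupil dilation than the habituated standard, and that this excess dilation grows with SNR, can be illustrated with a toy simulation (all traces and numbers invented; not the published analysis):

```python
# Toy oddball pupillometry simulation: baseline-corrected dilation for
# standard vs. deviant trials at several SNRs. All parameters are invented.
import random

random.seed(7)

def simulate_trial(amplitude, n_samples=60, baseline_n=10, noise=0.02):
    """Toy pupil trace: flat baseline, then a slow dilation ramp whose size
    scales with `amplitude`, plus measurement noise."""
    trace = []
    for i in range(n_samples):
        ramp = max(0.0, (i - baseline_n) / (n_samples - baseline_n))
        trace.append(1.0 + amplitude * ramp + random.gauss(0.0, noise))
    return trace

def mean_dilation(trials, baseline_n=10):
    """Average baseline-corrected dilation across a set of trials."""
    per_trial = []
    for trace in trials:
        baseline = sum(trace[:baseline_n]) / baseline_n
        post = trace[baseline_n:]
        per_trial.append(sum(post) / len(post) - baseline)
    return sum(per_trial) / len(per_trial)

snrs_db = [-9, -6, -3, 0]
# Assumed mapping from SNR to deviant-evoked dilation (illustration only)
deviant_amp = {-9: 0.05, -6: 0.15, -3: 0.30, 0: 0.45}

excess_dilation = []
for snr in snrs_db:
    standards = [simulate_trial(0.02) for _ in range(20)]
    deviants = [simulate_trial(deviant_amp[snr]) for _ in range(20)]
    excess_dilation.append(mean_dilation(deviants) - mean_dilation(standards))
print("excess deviant dilation by SNR:", [round(e, 3) for e in excess_dilation])
```

In the protocol itself, the trial-averaged traces would additionally be submitted to growth curve analysis to characterize the time course, rather than being collapsed to a single mean as in this sketch.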
Affiliation(s)
- Marianny Pernia
- Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Center for Neural Basis of Cognition, University of Pittsburgh
- Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Department of Transfer and Innovation, USC University Hospital Complex (CHUS), University of Santiago de Compostela
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Department of Bioengineering, University of Pittsburgh; Center for Neural Basis of Cognition, University of Pittsburgh; Department of Communication Science and Disorders, University of Pittsburgh
37
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. [PMID: 37941413] [PMCID: PMC10637154] [DOI: 10.1177/23312165231205107]
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
38
O’Leary RM, Neukam J, Hansen TA, Kinney AJ, Capach N, Svirsky MA, Wingfield A. Strategic Pauses Relieve Listeners from the Effort of Listening to Fast Speech: Data-Limited and Resource-Limited Processes in Narrative Recall by Adult Users of Cochlear Implants. Trends Hear 2023; 27:23312165231203514. [PMID: 37941344] [PMCID: PMC10637151] [DOI: 10.1177/23312165231203514]
Abstract
Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.
Affiliation(s)
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Jonathan Neukam
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Nicole Capach
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
39
Książek P, Zekveld AA, Fiedler L, Kramer SE, Wendt D. Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise. Trends Hear 2023; 27:23312165231153280. [PMID: 36938784] [PMCID: PMC10028670] [DOI: 10.1177/23312165231153280]
Abstract
Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, repeat the sentence-final word after each presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No) on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, the lower SNR (-4 dB versus +1 dB) increased the middle and late component values. Increasing memory demands (Recall) progressively increased the trial baseline and steepened the decrease of the late component's values. The trial baseline increased most steeply in the +1 dB SNR condition with recall. The findings suggest that adding a recall task to the auditory task alters effort allocation for listening: listeners dynamically re-allocate effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses together provide a comprehensive picture of the interplay of SNR and recall on effort.
Affiliation(s)
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
40
Shen J, Heller Murray E, Kulick ER. The Effect of Breathy Vocal Quality on Speech Intelligibility and Listening Effort in Background Noise. Trends Hear 2023; 27:23312165231206925. [PMID: 37817666] [PMCID: PMC10566269] [DOI: 10.1177/23312165231206925]
Abstract
Speech perception is challenging under adverse conditions. However, there is limited evidence regarding how multiple adverse conditions affect speech perception. The present study investigated two conditions that are frequently encountered in real-life communication: background noise and breathy vocal quality. The study first examined the effects of background noise and breathiness on speech perception as measured by intelligibility. Second, the study tested the hypothesis that both noise and breathiness affect listening effort, as indicated by linear and nonlinear changes in pupil dilation. Low-context sentences were resynthesized to create three levels of breathiness (original, mild-moderate, and severe). The sentences were presented in a fluctuating nonspeech noise at two signal-to-noise ratios (SNRs): -5 dB (favorable) and -9 dB (adverse). Speech intelligibility and pupil dilation data were collected from young listeners with normal hearing thresholds. The results demonstrated that a breathy vocal quality presented in noise negatively affected speech intelligibility, with the degree of breathiness playing a critical role. Listening effort, as measured by the magnitude of pupil dilation, showed significant effects with both severe and mild-moderate breathy voices that were independent of noise level. The findings contribute to the literature by demonstrating the impact of vocal quality on the perception of speech in noise. They also highlight the complex dynamics between overall task demand and processing resources in understanding the combined impact of multiple adverse conditions.
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, USA
- Elizabeth Heller Murray
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, USA
- Erin R. Kulick
- Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA, USA
41
Jakobsen Y, Christensen Andersen LA, Schmidt JH. Study protocol for a randomised controlled trial evaluating the benefits from bimodal solution with cochlear implant and hearing aid versus bilateral hearing aids in patients with asymmetric speech identification scores. BMJ Open 2022; 12:e070296. [PMID: 36581413] [PMCID: PMC9806092] [DOI: 10.1136/bmjopen-2022-070296]
Abstract
INTRODUCTION Cochlear implant (CI) and hearing aid (HA) in a bimodal solution (CI+HA) is compared with bilateral HAs (HA+HA) to test whether the bimodal solution results in better speech intelligibility and self-reported quality of life. METHODS AND ANALYSIS This randomised controlled trial is conducted at Odense University Hospital, Denmark. Sixty adult bilateral HA users referred for CI surgery are enrolled if eligible and undergo audiometry, speech perception in noise testing (HINT: Hearing in Noise Test), Speech Identification Scores, and the video head impulse test. All participants will receive new replacement HAs. After 1 month they will be randomly assigned (1:1) to the intervention group (CI+HA) or to the delayed-intervention control group (HA+HA). The intervention group (CI+HA) will receive a CI on the ear with the poorer speech recognition score and continue using the HA on the other ear. The control group (HA+HA) will receive a CI after a total of 4 months of bilateral HA use. The primary outcome measures are speech intelligibility measured objectively with HINT (sentences in noise) and DANTALE I (words) and subjectively with the Speech, Spatial and Qualities of Hearing scale questionnaire. Secondary outcomes are patient-reported Health-Related Quality of Life scores assessed with the Nijmegen Cochlear Implant Questionnaire, the Tinnitus Handicap Inventory and the Dizziness Handicap Inventory. A third outcome is listening effort assessed with pupil dilation during HINT. In conclusion, the purpose is to improve clinical decision-making for CI candidacy and optimise bimodal solutions. ETHICS AND DISSEMINATION This study protocol was approved by the Ethics Committee Southern Denmark, project ID S-20200074G. All participants are required to sign an informed consent form. This study will be published on completion in peer-reviewed publications and at scientific conferences. TRIAL REGISTRATION NUMBER NCT04919928.
Affiliation(s)
- Yeliz Jakobsen
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
- Jesper Hvass Schmidt
- Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark
- Department of Audiology, Odense University Hospital, Odense C, Denmark
42
Burg EA, Thakkar TD, Litovsky RY. Interaural speech asymmetry predicts bilateral speech intelligibility but not listening effort in adults with bilateral cochlear implants. Front Neurosci 2022; 16:1038856. [PMID: 36570844] [PMCID: PMC9768552] [DOI: 10.3389/fnins.2022.1038856]
Abstract
Introduction Bilateral cochlear implants (BiCIs) can facilitate improved speech intelligibility in noise and sound localization abilities compared to a unilateral implant in individuals with bilateral severe to profound hearing loss. Still, many individuals with BiCIs do not benefit from binaural hearing to the same extent that normal hearing (NH) listeners do. For example, binaural redundancy, a speech intelligibility benefit derived from having access to duplicate copies of a signal, is highly variable among BiCI users. Additionally, patients with hearing loss commonly report elevated listening effort compared to NH listeners. There is some evidence to suggest that BiCIs may reduce listening effort compared to a unilateral CI, but the limited existing literature has not shown this consistently. Critically, no studies to date have investigated this question using pupillometry to quantify listening effort, where large pupil sizes indicate high effort and small pupil sizes indicate low effort. Thus, the present study aimed to build on existing literature by investigating the potential benefits of BiCIs for both speech intelligibility and listening effort. Methods Twelve BiCI adults were tested in three listening conditions: Better Ear, Poorer Ear, and Bilateral. Stimuli were IEEE sentences presented from a loudspeaker at 0° azimuth in quiet. Participants were asked to repeat back the sentences, and responses were scored by an experimenter while changes in pupil dilation were measured. Results On average, participants demonstrated similar speech intelligibility in the Better Ear and Bilateral conditions, and significantly worse speech intelligibility in the Poorer Ear condition. Despite similar speech intelligibility in the Better Ear and Bilateral conditions, pupil dilation was significantly larger in the Bilateral condition. Discussion These results suggest that the BiCI users tested in this study did not demonstrate binaural redundancy in quiet. 
The large interaural speech asymmetries demonstrated by participants may have precluded them from obtaining binaural redundancy, as shown by the inverse relationship between the two variables. Further, participants did not obtain a release from effort when listening with two ears versus their better ear only. Instead, results indicate that bilateral listening elicited increased effort compared to better ear listening, which may be due to poor integration of asymmetric inputs.
Affiliation(s)
- Emily A. Burg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Tanvi D. Thakkar
- Department of Psychology, University of Wisconsin-La Crosse, La Crosse, WI, United States
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, United States
43
Zhang Y, Malaval F, Lehmann A, Deroche MLD. Luminance effects on pupil dilation in speech-in-noise recognition. PLoS One 2022; 17:e0278506. [PMID: 36459511] [PMCID: PMC9718387] [DOI: 10.1371/journal.pone.0278506]
Abstract
There is an increasing interest in the field of audiology and speech communication in measuring the effort that it takes to listen in noisy environments, with obvious implications for populations suffering from hearing loss. Pupillometry offers one avenue to make progress in this enterprise, but important methodological questions remain to be addressed before such tools can serve practical applications. Typically, cocktail-party situations may occur in less-than-ideal lighting conditions, e.g. a pub or a restaurant, and it is unclear how robust pupil dynamics are to luminance changes. In this study, we first used a well-known paradigm where sentences were presented at different signal-to-noise ratios (SNR), all conducive to good intelligibility. This enabled us to replicate findings, e.g. a larger and later peak pupil dilation (PPD) at adverse SNR, or when the sentences were misunderstood, and to investigate the dependency of the PPD on sentence duration. A second experiment reiterated two of the SNR levels, 0 and +14 dB, but measured at 0, 75, and 220 lux. The results showed that the impact of luminance on the SNR effect was non-monotonic (sub-optimal in darkness or in bright light), and as such, there is no trivial way to derive pupillary metrics that are robust to differences in background light, posing considerable constraints for applications of pupillometry in daily life. Our findings raise an under-examined but crucial issue when designing and interpreting listening effort studies using pupillometry, and offer important insights into future clinical applications of pupillometry across sites.
Affiliation(s)
- Yue Zhang
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Florian Malaval
- Department of Otolaryngology, McGill University, Montreal, Canada
- Alexandre Lehmann
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Mickael L. D. Deroche
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Department of Psychology, Concordia University, Montreal, Canada
44
Yüksel M, Taşdemir İ, Çiprut A. Listening Effort in Prelingual Cochlear Implant Recipients: Effects of Spectral and Temporal Auditory Processing and Contralateral Acoustic Hearing. Otol Neurotol 2022; 43:e1077-e1084. [PMID: 36099588] [DOI: 10.1097/mao.0000000000003690]
Abstract
OBJECTIVE Considering its impact on auditory perception, attention, and memory, listening effort (LE) is a significant aspect of the daily hearing experiences of cochlear implant (CI) recipients. Reduced spectral and temporal information in an acoustic signal can make listening more difficult; as a result, it is important to understand the relationship between LE and the spectral and temporal auditory processing capacities of CI recipients. STUDY DESIGN, SETTING, AND PATIENTS This study used spectral ripple discrimination and the temporal modulation transfer function to evaluate 20 prelingually deafened and early implanted CI recipients. The speech perception in noise test (primary) and the digit recall task (DRT; secondary) were used to assess LE under the dual-task paradigm. To assess the effects of acoustic hearing, contralateral acoustic hearing thresholds between 125 Hz and 8 kHz with a hearing aid were also acquired. Correlation coefficients were generated to examine the relationships between the study variables, and the Mann-Whitney U test was used to compare unilateral and bimodal users. RESULTS There were statistically significant correlations between LE and spectral ripple discrimination (r = 0.56; p = 0.011) and thresholds at 125 Hz (r = 0.51; p = 0.020), 250 Hz (r = 0.48; p = 0.030), 500 Hz (r = 0.45; p = 0.045), 1,000 Hz (r = 0.51; p = 0.023), 2,000 Hz (r = 0.48; p = 0.031), and 4,000 Hz (r = 0.48; p = 0.031), whereas no statistically significant correlations were observed between the temporal modulation transfer function at the four frequencies and LE. There was no statistically significant difference between unilateral and bimodal CI recipients (p > 0.05). CONCLUSION Owing to the improved signal-to-noise ratio in the auditory environment, CI users with better spectral resolution and acoustic hearing experience reduced LE. In contrast, temporal auditory processing, as measured by temporal modulation detection, does not contribute to LE.
Affiliation(s)
- Mustafa Yüksel
- Department of Speech and Language Therapy, School of Health Sciences, Ankara Medipol University
- İlknur Taşdemir
- Audiology Department, Graduate School of Health Sciences, Hacettepe University, Ankara
- Ayça Çiprut
- Audiology Department, Faculty of Medicine, Marmara University, İstanbul, Turkey
45
Relaño-Iborra H, Wendt D, Neagu MB, Kressner AA, Dau T, Bækgaard P. Baseline pupil size encodes task-related information and modulates the task-evoked response in a speech-in-noise task. Trends Hear 2022; 26:23312165221134003. [PMID: 36426573] [PMCID: PMC9703509] [DOI: 10.1177/23312165221134003]
Abstract
Pupillometry data are commonly reported relative to a baseline value recorded in a controlled pre-task condition. In this study, the influence of the experimental design and the preparatory processing related to task difficulty on the baseline pupil size was investigated during a speech intelligibility in noise paradigm. Furthermore, the relationship between the baseline pupil size and the temporal dynamics of the pupil response was assessed. The analysis revealed strong effects of block presentation order, within-block sentence order and task difficulty on the baseline values. An interaction between signal-to-noise ratio and block order was found, indicating that baseline values reflect listener expectations arising from the order in which the different blocks were presented. Furthermore, the baseline pupil size was found to affect the slope, delay and curvature of the pupillary response as well as the peak pupil dilation. This suggests that baseline correction might be sufficient when reporting pupillometry results in terms of mean pupil dilation only, but not when a more complex characterization of the temporal dynamics of the response is considered. By clarifying which factors affect baseline pupil size and how baseline values interact with the task-evoked response, the results from the present study can contribute to a better interpretation of the pupillary response as a marker of cognitive processing.
Affiliation(s)
- Helia Relaño-Iborra
- Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Dorothea Wendt
- Eriksholm Research Center, Oticon, 3070 Snekkersten, Denmark
- Mihaela Beatrice Neagu
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Abigail Anne Kressner
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Copenhagen Hearing and Balance Center, Rigshospitalet, 2100 Copenhagen, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Per Bækgaard
- Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
46
Shen J, Fitzgerald LP, Kulick ER. Interactions between acoustic challenges and processing depth in speech perception as measured by task-evoked pupil response. Front Psychol 2022; 13:959638. [PMID: 36389464] [PMCID: PMC9641013] [DOI: 10.3389/fpsyg.2022.959638]
Abstract
Speech perception under adverse conditions is a multistage process involving a dynamic interplay among acoustic, cognitive, and linguistic factors. Nevertheless, prior research has primarily focused on factors within this complex system in isolation. The primary goal of the present study was to examine the interaction between processing depth and the acoustic challenge of noise and its effect on processing effort during speech perception in noise. Two tasks were used to represent different depths of processing. The speech recognition task involved repeating back a sentence after auditory presentation (higher-level processing), while the tiredness judgment task entailed a subjective judgment of whether the speaker sounded tired (lower-level processing). The secondary goal of the study was to investigate whether pupil response to alteration of dynamic pitch cues stems from difficult linguistic processing of speech content in noise or from a perceptual novelty effect due to the unnatural pitch contours. Task-evoked peak pupil response from two groups of younger adult participants with typical hearing was measured in two experiments. Both tasks (speech recognition and tiredness judgment) were implemented in both experiments, and stimuli were presented with background noise in Experiment 1 and without noise in Experiment 2. Increased peak pupil dilation was associated with deeper processing (i.e., the speech recognition task), particularly in the presence of background noise. Importantly, there was a non-additive interaction between noise and task, as demonstrated by the heightened peak pupil dilation to noise in the speech recognition task compared with the tiredness judgment task. Additionally, peak pupil dilation data suggest that dynamic pitch alteration induced an increased perceptual novelty effect rather than reflecting effortful linguistic processing of the speech content in noise. 
These findings extend current theories of speech perception under adverse conditions by demonstrating that the level of processing effort expended by a listener is influenced by the interaction between acoustic challenges and depth of linguistic processing. The study also provides a foundation for future work to investigate the effects of this complex interaction in clinical populations who experience both hearing and cognitive challenges.
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Laura P. Fitzgerald
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Erin R. Kulick
- Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA, United States
47
Winn MB, Teece KH. Effortful Listening Despite Correct Responses: The Cost of Mental Repair in Sentence Recognition by Listeners With Cochlear Implants. J Speech Lang Hear Res 2022; 65:3966-3980. [PMID: 36112516] [PMCID: PMC9927629] [DOI: 10.1044/2022_jslhr-21-00631]
Abstract
PURPOSE Speech recognition percent correct scores fail to capture the effort of mentally repairing the perception of speech that was initially misheard. This study measured the effort of listening to stimuli specifically designed to elicit mental repair in adults who use cochlear implants (CIs). METHOD CI listeners heard and repeated sentences in which specific words were distorted or masked by noise but recovered based on later context: a signature of mental repair. Changes in pupil dilation were tracked as an index of effort and time-locked with specific landmarks during perception. RESULTS Effort significantly increases when a listener needs to repair a misperceived word, even if the verbal response is ultimately correct. Mental repair of words in a sentence was accompanied by greater prevalence of errors elsewhere in the same sentence, suggesting that effort spreads to consume resources across time. The cost of mental repair in CI listeners was essentially the same as that observed in listeners with normal hearing in previous work. CONCLUSIONS Listening effort as tracked by pupil dilation is better explained by the mental repair and reconstruction of words rather than the appearance of correct or incorrect perception. Linguistic coherence drives effort more heavily than the mere presence of mistakes, highlighting the importance of testing materials that do not constrain coherence by design.
Affiliation(s)
- Matthew B. Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
- Katherine H. Teece
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
48
Hogan AL, Winston M, Barstein J, Losh M. Slower Peak Pupillary Response to Emotional Faces in Parents of Autistic Individuals. Front Psychol 2022; 13:836719. [PMID: 36304881 PMCID: PMC9595282 DOI: 10.3389/fpsyg.2022.836719]
Abstract
Background: Atypical autonomic arousal has been consistently documented in autism spectrum disorder (ASD) and is thought to contribute to the social-communication phenotype of ASD. Some evidence suggests that clinically unaffected first-degree relatives of autistic individuals may also show subtle differences in indices of autonomic arousal, potentially implicating heritable pathophysiological mechanisms in ASD. This study examined pupillary responses in parents of autistic individuals to investigate whether atypical autonomic arousal might constitute a subclinical physiological marker of ASD heritability within families of autistic individuals.
Methods: Pupillary responses to emotional faces were measured in 47 ASD parents and 20 age-matched parent controls. Macro-level pupillary responses (e.g., mean, peak, latency to peak) and dynamic pupillary responses over the course of stimulus presentation were compared between groups and in relation to subclinical ASD-related features in ASD parents. A small ASD group (n = 20) and controls (n = 17) were also included for exploratory analyses of parent-child correlations in pupillary response.
Results: Parents of autistic individuals differed in the time course of pupillary response, exhibiting a later primary peak response than controls. In ASD parents, slower peak response was associated with poorer pragmatic language, and larger peak response was associated with poorer social cognition. Exploratory analyses revealed correlations between peak pupillary responses in ASD parents and mean and peak pupillary responses in their autistic children.
Conclusion: Differences in pupillary responses in clinically unaffected parents, together with significant correlations with ASD-related features and significant parent-child associations, suggest that pupillary responses to emotional faces may constitute an objective physiological marker of ASD genetic liability, with potential to inform the mechanistic underpinnings of ASD symptomatology.
49
Zhou X, Burg E, Kan A, Litovsky RY. Investigating effortful speech perception using fNIRS and pupillometry measures. Curr Res Neurobiol 2022; 3:100052. [PMID: 36518346 PMCID: PMC9743070 DOI: 10.1016/j.crneur.2022.100052]
Abstract
The current study examined the neural mechanisms of mental effort and their correlation with speech perception using functional near-infrared spectroscopy (fNIRS) in listeners with normal hearing (NH). Data were collected while participants listened and responded to unprocessed and degraded sentences in which words were presented in grammatically correct or shuffled order. Effortful listening and task difficulty due to the stimulus manipulations were confirmed using a subjective questionnaire and a well-established objective measure of mental effort, pupillometry. fNIRS measures focused on cortical responses in two a priori regions of interest, the left auditory cortex (AC) and lateral frontal cortex (LFC), which are closely related to auditory speech perception and listening effort, respectively. We examined the relations between the two objective measures and behavioral measures of speech perception (task performance) and task difficulty. Results demonstrated that changes in pupil dilation were positively correlated with self-reported task difficulty and negatively correlated with task performance scores; the two behavioral measures were also significantly negatively correlated with each other. That is, as perceived task demands increased and task performance decreased, pupils dilated more. fNIRS measures (cerebral oxygenation) in the left AC and LFC were both negatively correlated with self-reported task difficulty and positively correlated with task performance scores. These results suggest that pupillometry can index task demands and listening effort, whereas fNIRS measures in a similar paradigm seem to reflect speech processing rather than effort.
Affiliation(s)
- Xin Zhou
- Waisman Center, University of Wisconsin Madison, WI, USA
- Emily Burg
- Waisman Center, University of Wisconsin Madison, WI, USA
- Department of Communication Science and Disorders, University of Wisconsin Madison, WI, USA
- Alan Kan
- School of Engineering, Macquarie University, Sydney, NSW, Australia
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin Madison, WI, USA
- Department of Communication Science and Disorders, University of Wisconsin Madison, WI, USA
50
Zhang M, Palmer CV, Pratt SR, McNeil MR, Siegle GJ. Need for cognition is associated with the interaction of reward and task-load on effort: A verification and extension study. Int J Psychophysiol 2022; 180:60-67. [PMID: 35931237 DOI: 10.1016/j.ijpsycho.2022.07.011]
Abstract
Here, we add nuance to the assumption that people will work for rewards by examining whether individuals' inherent tendency to mobilize cognitive effort (need for cognition, NFC) moderates this effect. We re-analyzed our existing data to verify an effect reported by Sandra and Otto (2018) regarding the association between NFC and reward-induced cognitive effort expenditure, using a more ecologically valid cognitive task design and adding a psychophysiological measure of effort. Specifically, in place of their short-time-course visual task-switching paradigm, we used a relatively long-time-course auditory comprehension paradigm. Consistent with the original study, we found that increased cognitive effort in response to incentive reward depends on individual differences in cognitive motivation (need for cognition). We also found that different indices of effort (behavioral and psychophysiological) need to be considered together to observe consistent phenomena when evaluating the relationship between effort expenditure and cognitive motivation. Pupil dilation showed an advantage over reaction time in revealing mental effort mobilized over a prolonged cognitive task. Our results suggest that assessing cognitive motivation when planning a behavior-change program involving reward feedback for positive performance could help optimize individuals' effort investment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China.
- Catherine V Palmer
- Department of Communication Science and Disorders, University of Pittsburgh, PA, USA; Department of Otolaryngology, University of Pittsburgh Medical Center, PA, USA
- Sheila R Pratt
- Department of Communication Science and Disorders, University of Pittsburgh, PA, USA; Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA, USA
- Malcolm R McNeil
- Department of Communication Science and Disorders, University of Pittsburgh, PA, USA; Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA, USA
- Greg J Siegle
- Department of Psychiatry, University of Pittsburgh, PA, USA