1. Xu S, Zhang H, Fan J, Jiang X, Zhang M, Guan J, Ding H, Zhang Y. Auditory Challenges and Listening Effort in School-Age Children With Autism: Insights From Pupillary Dynamics During Speech-in-Noise Perception. J Speech Lang Hear Res 2024;67:2410-2453. PMID: 38861391. DOI: 10.1044/2024_jslhr-23-00553.
Abstract
PURPOSE This study aimed to investigate challenges in speech-in-noise (SiN) processing faced by school-age children with autism spectrum conditions (ASCs) and their impact on listening effort. METHOD Participants, including 23 Mandarin-speaking children with ASCs and 19 age-matched neurotypical (NT) peers, underwent sentence recognition tests in both quiet and noisy conditions, with a speech-shaped steady-state noise masker presented at 0-dB signal-to-noise ratio in the noisy condition. Recognition accuracy rates and task-evoked pupil responses were compared to assess behavioral performance and listening effort during auditory tasks. RESULTS No main effect of group was found on accuracy rates. Instead, significant effects emerged for autistic trait scores, listening conditions, and their interaction, indicating that higher trait scores were associated with poorer performance in noise. Pupillometric data revealed significantly larger and earlier peak dilations, along with more varied pupillary dynamics, in the ASC group relative to the NT group, especially under noisy conditions. Importantly, the ASC group's peak dilation in quiet mirrored that of the NT group in noise. However, the ASC group consistently exhibited smaller mean dilations than the NT group. CONCLUSIONS Pupillary responses suggest a different resource allocation pattern in ASCs: An initial sharper and larger dilation may signal an intense, narrowed resource allocation, likely linked to heightened arousal, engagement, and cognitive load, whereas a subsequent faster tail-off may indicate a greater decrease in resource availability and engagement, or a quicker release of arousal and cognitive load. The presence of noise further accentuates this pattern. This highlights the unique SiN processing challenges children with ASCs may face, underscoring the importance of a nuanced, individual-centric approach for interventions and support.
Affiliation(s)
- Suyun Xu
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Hua Zhang
- Department of Child and Adolescent Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
- Juan Fan
- Department of Child and Adolescent Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
- Xiaoming Jiang
- Institute of Linguistics, Shanghai International Studies University, China
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
2. Lie S, Zekveld AA, Smits C, Kramer SE, Versfeld NJ. Learning effects in speech-in-noise tasks: Effect of masker modulation and masking release. J Acoust Soc Am 2024;156:341-349. PMID: 38990038. DOI: 10.1121/10.0026519.
Abstract
Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not for the stationary noise condition. The learning effect was related to the size of masking release: a larger masking release was accompanied by an increased time course of the learning effect and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.
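The masking release this entry relates to learning is a simple dB difference between speech reception thresholds (SRTs); a minimal sketch (the SRT values below are illustrative, not taken from the study):

```python
def masking_release(srt_stationary_db, srt_modulated_db):
    """Masking release: the SRT improvement (in dB) in a modulated masker
    relative to stationary noise. Lower (more negative) SRTs are better,
    so a positive value indicates release from masking."""
    return srt_stationary_db - srt_modulated_db

# Illustrative values: SRT of -4 dB SNR in stationary noise,
# -9.5 dB SNR in a temporally modulated masker.
release = masking_release(-4.0, -9.5)  # → 5.5 dB of masking release
```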
Affiliation(s)
- Sisi Lie
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Cas Smits
- Amsterdam UMC, University of Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Meibergdreef, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
3. Doyle BR, Aiyagari V, Yokobori S, Kuramatsu JB, Barnes A, Puccio A, Nairon EB, Marshall JL, Olson DM. Anisocoria After Direct Light Stimulus is Associated with Poor Outcomes Following Acute Brain Injury. Neurocrit Care 2024. PMID: 38918339. DOI: 10.1007/s12028-024-02030-1.
Abstract
BACKGROUND Assessing pupil size and reactivity is the standard of care in neurocritically ill patients. Anisocoria observed in critically ill patients often prompts further investigation and treatment. This study explores anisocoria at rest and after light stimulus determined using quantitative pupillometry as a predictor of discharge modified Rankin Scale (mRS) scores. METHODS This analysis includes data from an international registry and includes patients with paired (left and right eye) quantitative pupillometry readings linked to discharge mRS scores. Anisocoria was defined as the absolute difference in pupil size using three common cut points (> 0.5 mm, > 1 mm, and > 2 mm). Nonparametric models were constructed to explore patient outcome using three predictors: the presence of anisocoria at rest (in ambient light); the presence of anisocoria after light stimulus; and persistent anisocoria (present both at rest and after light). The primary outcome was discharge mRS score associated with the presence of anisocoria at rest versus after light stimulus using the three commonly defined cut points. RESULTS This analysis included 152,905 paired observations from 6,654 patients with a mean age of 57.0 (standard deviation 17.9) years and a median hospital stay of 5 (interquartile range 3-12) days. The mean admission Glasgow Coma Scale score was 12.7 (standard deviation 3.5), and the median discharge mRS score was 2 (interquartile range 0-4). The ranges for absolute differences in pupil diameters were 0-5.76 mm at rest and 0-6.84 mm after light. Using an anisocoria cut point of > 0.5 mm, patients with anisocoria after light had worse median mRS scores (2 [interquartile range 0-4]) than patients with anisocoria at rest (1 [interquartile range 0-3]; P < .0001). Patients with persistent anisocoria had worse median mRS scores (3 [interquartile range 1-4]) than those without persistent anisocoria (1 [interquartile range 0-3]; P < .0001). Similar findings were observed using cut points for anisocoria of > 1 mm and > 2 mm. CONCLUSIONS Anisocoria after light is a new biomarker that portends a worse outcome than anisocoria at rest. After further validation, anisocoria after light should be considered for inclusion as a reported and trended assessment value.
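The anisocoria definitions above reduce to a threshold on the absolute left-right difference of a paired reading; a minimal sketch (function names and pupil values are illustrative, not from the registry):

```python
def has_anisocoria(left_mm, right_mm, cut_mm=0.5):
    """Anisocoria at a given cut point: absolute left-right pupil size
    difference exceeding the threshold (the study used 0.5, 1, and 2 mm)."""
    return abs(left_mm - right_mm) > cut_mm

def persistent_anisocoria(rest, light, cut_mm=0.5):
    """Persistent anisocoria: present both at rest (ambient light) and after
    the light stimulus. `rest` and `light` are (left_mm, right_mm) pairs."""
    return has_anisocoria(*rest, cut_mm) and has_anisocoria(*light, cut_mm)

# Illustrative paired reading: 0.7 mm difference at rest, 0.8 mm after light.
persistent_anisocoria(rest=(4.2, 3.5), light=(2.9, 2.1))  # True at > 0.5 mm
```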
Affiliation(s)
- Brittany R Doyle
- Department of Nursing, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Venkatesh Aiyagari
- Neurological Surgery and Neurology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Shoji Yokobori
- Department of Emergency and Critical Care Medicine, Nippon Medical School, Tokyo, Japan
- Joji B Kuramatsu
- Department of Neurology, University of Erlangen-Nuremberg, Erlangen, Germany
- Arianna Barnes
- Cardiac Intensive Care Unit, Barnes Jewish Hospital, St. Louis, MO, USA
- Ava Puccio
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Emerson B Nairon
- Department of Neurology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jade L Marshall
- Department of Neurology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- DaiWai M Olson
- Department of Neurology, University of Texas Southwestern Medical Center, Dallas, TX, USA
4. Baldock J, Kapadia S, van Steenbrugge W, McCarley J. The Effects of Light Level and Signal-to-Noise Ratio on the Task-Evoked Pupil Response in a Speech-in-Noise Task. J Speech Lang Hear Res 2024;67:1964-1975. PMID: 38690971. DOI: 10.1044/2024_jslhr-23-00627.
Abstract
PURPOSE There is increasing interest in the measurement of cognitive effort during listening tasks, for both research and clinical purposes. Quantification of task-evoked pupil responses (TEPRs) is a psychophysiological method that can be used to study cognitive effort. However, light level during cognitively demanding listening tasks may affect TEPRs, complicating interpretation of listening-related changes. The objective of this study was to examine the effects of light level on TEPRs during effortful listening across a range of signal-to-noise ratios (SNRs). METHOD Thirty-six adults without hearing loss were asked to repeat target sentences presented in background babble noise while their pupil diameter was recorded. Light level and SNRs were manipulated in a 4 × 4 repeated-measures design. Repeated-measures analyses of variance were used to measure the effects. RESULTS Peak and mean dilation were typically larger in more adverse SNR conditions (except for SNR -6 dB) and smaller in higher light levels. Differences in mean and peak dilation between SNR conditions were larger in dim light than in brighter light. CONCLUSIONS Brighter light conditions make TEPRs less sensitive to variations in listening effort across levels of SNR. Therefore, light level must be considered and reported in detail to ensure sensitivity of TEPRs and for comparisons of findings across different studies. It is recommended that TEPR testing be conducted in relatively low light conditions, considering both background illumination and screen luminance. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25676538.
Affiliation(s)
- Sarosh Kapadia
- Flinders University, Adelaide, South Australia, Australia
- Jason McCarley
- Flinders University, Adelaide, South Australia, Australia
- Oregon State University, Corvallis
5. Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024;36:997-1020. PMID: 38579256. DOI: 10.1162/jocn_a_02161.
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences-presented in quiet or in noise-that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
6. Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024;36:1325-1340. PMID: 38683698. DOI: 10.1162/jocn_a_02172.
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measure of the pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult but lower for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech continued to be reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, different from arousal-related systems regulating the pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
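One common way to operationalize the gaze dispersion measure discussed in this entry is the mean Euclidean distance of gaze samples from their centroid; the exact metric used by the authors may differ, so the sketch below is illustrative only:

```python
import numpy as np

def gaze_dispersion(x, y):
    """Gaze dispersion as the mean Euclidean distance of (x, y) gaze samples
    from their centroid. Smaller values indicate more concentrated gaze,
    which this entry links to higher task demands."""
    pts = np.column_stack([x, y]).astype(float)
    centroid = pts.mean(axis=0)
    return float(np.linalg.norm(pts - centroid, axis=1).mean())

# Two samples 1 unit on either side of their centroid:
d = gaze_dispersion([0.0, 2.0], [0.0, 0.0])  # → 1.0
```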
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
- Jennifer D Ryan
- Rotman Research Institute, North York, Ontario, Canada
- University of Toronto, Ontario, Canada
7. Render A, Eisenbarth H, Oxner M, Jansen P. Arousal, interindividual differences and temporal binding: a psychophysiological study. Psychol Res 2024. PMID: 38806732. DOI: 10.1007/s00426-024-01976-3.
Abstract
The sense of agency varies as a function of arousal in negative emotional contexts. As yet, it is unknown whether the same is true for positive affect, and how interindividual characteristics might predict these effects. Temporal binding, an implicit measure of the sense of agency, was measured in 59 participants before and after watching either an emotionally neutral film clip or a positive film clip with high or low arousal. Analyses included participants' individual differences in subjective affective ratings, physiological arousal (pupillometry, skin conductance, heart rate), striatal dopamine levels via eye blink rates, and psychopathy. Linear mixed models showed that sexual arousal decreased temporal binding, whereas calm pleasure had no facilitation effect on binding. Striatal dopamine levels were positively linked to binding towards actions, whereas subjective and physiological arousal may be negatively associated with it. Psychopathic traits reduced the effect of high arousal on binding towards actions. These results provide evidence that individual differences influence the extent to which temporal binding is affected by highly arousing states with positive valence.
Affiliation(s)
- Anna Render
- Faculty of Human Sciences, University of Regensburg, Regensburg, Germany
- Victoria University of Wellington, Wellington, New Zealand
- University of Passau, Passau, Germany
- Matt Oxner
- Victoria University of Wellington, Wellington, New Zealand
- Wilhelm Wundt Institute for Psychology, University of Leipzig, Leipzig, Germany
- Petra Jansen
- Faculty of Human Sciences, University of Regensburg, Regensburg, Germany
8. Becker J, Viertler M, Korn CW, Blank H. The pupil dilation response as an indicator of visual cue uncertainty and auditory outcome surprise. Eur J Neurosci 2024;59:2686-2701. PMID: 38469976. DOI: 10.1111/ejn.16306.
Abstract
In everyday perception, we combine incoming sensory information with prior expectations. Expectations can be induced by cues that indicate the probability of following sensory events. The information provided by cues may differ and hence lead to different levels of uncertainty about which event will follow. In this experiment, we employed pupillometry to investigate whether the pupil dilation response to visual cues varies depending on the level of cue-associated uncertainty about a following auditory outcome. Also, we tested whether the pupil dilation response reflects the amount of surprise about the subsequently presented auditory stimulus. In each trial, participants were presented with a visual cue (face image) which was followed by an auditory outcome (spoken vowel). After the face cue, participants had to indicate by keypress which of three auditory vowels they expected to hear next. We manipulated the cue-associated uncertainty by varying the probabilistic cue-outcome contingencies: One face was most likely followed by one specific vowel (low cue uncertainty), another face was equally likely followed by either of two vowels (intermediate cue uncertainty) and the third face was followed by all three vowels (high cue uncertainty). Our results suggest that pupil dilation in response to task-relevant cues depends on the associated uncertainty, but only for large differences in the cue-associated uncertainty. Additionally, in response to the auditory outcomes, the pupil dilation scaled negatively with the cue-dependent probabilities, likely signalling the amount of surprise.
Affiliation(s)
- Janika Becker
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Marvin Viertler
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Christoph W Korn
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Section Social Neuroscience, Department of General Psychiatry, University of Heidelberg, Heidelberg, Germany
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
9. Alben N, Arthur C. Pupil dilation as a function of pitch discrimination difficulty: A replication of Kahneman and Beatty, 1967. Atten Percept Psychophys 2024;86:1435-1444. PMID: 37684499. DOI: 10.3758/s13414-023-02765-7.
Abstract
In the present paper, we replicate a seminal study by Kahneman and Beatty (Perception & Psychophysics, 2(3), 101-105, 1967) on the use of pupillometry as an implicit measure of auditory processing load, specifically non-verbal auditory processing. Numerous papers since have supported the notion that pupillometry is a fairly reliable index of processing load in general (Zekveld, Koelewijn, & Kramer, Trends in Hearing, 22, 1-25, 2018; Winn, Wendt, Koelewijn, & Kuchinsky, Trends in Hearing, 22, 1-32, 2018), but they have typically relied on memory recall and/or more sophisticated cognitive tasks such as language comprehension or split attention. Kahneman and Beatty's paper, despite being published more than 50 years ago, remains the primary citation for the claim that pupillometry is a reliable index of task difficulty in a simple non-verbal pitch discrimination task, and hence an implicit measure of listening effort (e.g., Kramer, Lorens, Coninx, Zekveld, Piotrowska, & Skarzynski, Language and Cognitive Processes, 28(4), 426-442, 2013; Schlemmer, Kulke, Kuchinke, & Van Der Meer, Psychophysiology, 42(4), 465-472, 2005; Lisi, Bonato, & Zorzi, Biological Psychology, 112, 39-45, 2015). This type of task requires very little explicit memory, is non-verbal, and relies heavily on low-level, automatic perceptual processing. We conducted two replication studies, one exact and one modified, and replicated the main result only in the modified replication; the exact replication failed on all nine statistical tests. Overall, our findings suggest that pupil dilation can be used as an implicit measure of task difficulty for a simple, non-semantic auditory task; however, the effect appears considerably weaker than in the original study, and the variation across participants much greater.
Affiliation(s)
- Noel Alben
- Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Claire Arthur
- Georgia Institute of Technology, Atlanta, GA, 30332, USA
10. Becker J, Korn CW, Blank H. Pupil diameter as an indicator of sound pair familiarity after statistically structured auditory sequence. Sci Rep 2024;14:8739. PMID: 38627572. PMCID: PMC11021535. DOI: 10.1038/s41598-024-59302-1.
Abstract
Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.
Affiliation(s)
- Janika Becker
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Christoph W Korn
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Section Social Neuroscience, Department of General Psychiatry, University of Heidelberg, 69115, Heidelberg, Germany
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
11. Fink L, Simola J, Tavano A, Lange E, Wallot S, Laeng B. From pre-processing to advanced dynamic modeling of pupil data. Behav Res Methods 2024;56:1376-1412. PMID: 37351785. PMCID: PMC10991010. DOI: 10.3758/s13428-023-02098-1.
Abstract
The pupil of the eye provides a rich source of information for cognitive scientists, as it can index a variety of bodily states (e.g., arousal, fatigue) and cognitive processes (e.g., attention, decision-making). As pupillometry becomes a more accessible and popular methodology, researchers have proposed a variety of techniques for analyzing pupil data. Here, we focus on time series-based, signal-to-signal approaches that enable one to relate dynamic changes in pupil size over time with dynamic changes in a stimulus time series, continuous behavioral outcome measures, or other participants' pupil traces. We first introduce pupillometry, its neural underpinnings, and the relation between pupil measurements and other oculomotor behaviors (e.g., blinks, saccades), to stress the importance of understanding what is being measured and what can be inferred from changes in pupillary activity. Next, we discuss possible pre-processing steps, and the contexts in which they may be necessary. Finally, we turn to signal-to-signal analytic techniques, including regression-based approaches, dynamic time-warping, phase clustering, detrended fluctuation analysis, and recurrence quantification analysis. Assumptions of these techniques, and examples of the scientific questions each can address, are outlined, with references to key papers and software packages. Additionally, we provide a detailed code tutorial that steps through the key examples and figures in this paper. Ultimately, we contend that the insights gained from pupillometry are constrained by the analysis techniques used, and that signal-to-signal approaches offer a means to generate novel scientific insights by taking into account understudied spectro-temporal relationships between the pupil signal and other signals of interest.
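One pre-processing step this tutorial paper discusses is removing blink artifacts before any signal-to-signal analysis; a minimal sketch, assuming a simple fixed-threshold blink criterion and linear interpolation (real pipelines typically use tracker blink flags and more careful padding):

```python
import numpy as np

def interpolate_blinks(pupil, min_valid=1.5, pad=1):
    """Replace blink-contaminated samples (pupil size below `min_valid`, a
    device-specific threshold) plus `pad` neighbouring samples on each side
    with linear interpolation over the remaining valid samples."""
    pupil = np.asarray(pupil, dtype=float)
    bad = pupil < min_valid
    widened = bad.copy()
    for i in np.flatnonzero(bad):
        # Widen each blink to catch partial lid occlusion around it.
        widened[max(0, i - pad):i + pad + 1] = True
    good = np.flatnonzero(~widened)
    return np.interp(np.arange(len(pupil)), good, pupil[good])

trace = [3.1, 3.2, 0.0, 0.0, 3.4, 3.5]  # zeros mark tracker-reported blink loss
clean = interpolate_blinks(trace)        # blink span replaced by a linear ramp
```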
Affiliation(s)
- Lauren Fink
- Department of Music, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322, Frankfurt am Main, Germany
- Department of Psychology, Neuroscience & Behavior, McMaster University, 1280 Main St. West, Hamilton, Ontario, L8S 4L8, Canada
- Jaana Simola
- Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki, Finland
- Department of Education, University of Helsinki, Helsinki, Finland
- Alessandro Tavano
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Elke Lange
- Department of Music, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322, Frankfurt am Main, Germany
- Sebastian Wallot
- Department of Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Institute for Sustainability Education and Psychology, Leuphana University, Lüneburg, Germany
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion, University of Oslo, Oslo, Norway
12. Ershaid H, Lizarazu M, McLaughlin D, Cooke M, Simantiraki O, Koutsogiannaki M, Lallier M. Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions. Cortex 2024;172:54-71. PMID: 38215511. DOI: 10.1016/j.cortex.2023.11.018.
Abstract
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and distinct adverse listening conditions. Forty-nine normal-hearing adults listened to sentences produced casually, presented in quiet and two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were recorded with electroencephalography, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Affiliation(s)
- Hadeel Ershaid: Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Mikel Lizarazu: Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Drew McLaughlin: Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Martin Cooke: Ikerbasque, Basque Science Foundation, Bilbao, Spain.
- Marie Lallier: Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Science Foundation, Bilbao, Spain.
13
Giuliani NP, Venkitakrishnan S, Wu YH. Input-related demands: vocoded sentences evoke different pupillometrics and subjective listening effort than sentences in speech-shaped noise. Int J Audiol 2024; 63:199-206. [PMID: 36519812 PMCID: PMC10947987 DOI: 10.1080/14992027.2022.2150901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 11/17/2022] [Accepted: 11/18/2022] [Indexed: 12/23/2022]
Abstract
OBJECTIVES The Framework for Effortful Listening (FUEL) suggests five input-related demands can alter listening effort: source, transmission, listener, message and context factors. We hypothesised that vocoded sentences represented a source factor degradation and sentences in speech-shaped noise represented a transmission factor degradation. We used pupillometry and a subjective scale to examine our hypothesis. DESIGN Participants listened to vocoded sentences and sentences in speech-shaped noise at several difficulty levels designed to produce similar word recognition abilities; they also listened to unprocessed sentences. Within-participant pupillometrics and subjective listening effort were analysed. Post-hoc analyses were performed to examine if word recognition accuracy differentially influenced pupil responses. STUDY SAMPLE Twenty young adults with normal hearing. RESULTS Baseline pupil diameter was significantly smaller, peak pupil dilation was significantly larger, peak pupil dilation latency was significantly shorter, and subjective listening effort was significantly greater for the vocoded sentences than the sentences-in-noise. Word recognition ability also affected pupillometrics, but only for the vocoded sentences. CONCLUSIONS Our findings suggest that source factor degradations result in greater listening effort than transmission factor degradations. Future research should address how clinical interventions tailored towards different input-related demands may lead to reduced listening effort and improve patient outcomes.
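Several entries in this list rely on the same core pupillometric features: baseline pupil diameter, peak pupil dilation, peak dilation latency, and mean dilation. As a rough illustration only (synthetic trace, arbitrary parameter choices; not any study's actual analysis pipeline), these features can be derived from a pupil-diameter time series like this:

```python
import numpy as np

def pupil_features(trace, fs, baseline_s=1.0):
    """Common task-evoked pupil response (TEPR) features.

    trace      : 1-D array of pupil diameter samples; the first
                 `baseline_s` seconds precede stimulus onset.
    fs         : sampling rate in Hz.
    Returns baseline diameter, baseline-corrected peak dilation,
    peak latency (seconds after onset), and mean dilation.
    """
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()        # pre-stimulus baseline diameter
    evoked = trace[n_base:] - baseline      # baseline-corrected response
    peak = evoked.max()                     # peak pupil dilation
    latency = evoked.argmax() / fs          # peak latency after onset
    return {"baseline": baseline, "peak": peak,
            "latency": latency, "mean": evoked.mean()}

# Synthetic example: 1 s baseline at ~4 mm, dilation peaking 1.2 s after onset
fs = 60  # Hz
t = np.arange(0, 4, 1 / fs)
trace = 4.0 + 0.5 * np.exp(-((t - 2.2) ** 2) / 0.18)
feats = pupil_features(trace, fs, baseline_s=1.0)
```

Real pipelines add blink interpolation, smoothing, and trial rejection before this step; the sketch only shows how the reported summary metrics relate to one another.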
Affiliation(s)
- Nicholas P. Giuliani: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA.
- Soumya Venkitakrishnan: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA.
- Yu-Hsiang Wu: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA.
14
Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024; 15:1345619. [PMID: 38375107 PMCID: PMC10875108 DOI: 10.3389/fpsyg.2024.1345619] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 01/17/2024] [Indexed: 02/21/2024] Open
Abstract
Introduction Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. Discussion These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
Affiliation(s)
- Laura P. Fitzgerald: Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States.
- Gayle DeDe: Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States.
- Jing Shen: Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States.
15
Johns MA, Calloway RC, Karunathilake IMD, Decruy LP, Anderson S, Simon JZ, Kuchinsky SE. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry. Trends Hear 2024; 28:23312165241245240. [PMID: 38613337 PMCID: PMC11015766 DOI: 10.1177/23312165241245240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 03/11/2024] [Accepted: 03/15/2024] [Indexed: 04/14/2024] Open
Abstract
Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.
Affiliation(s)
- M. A. Johns: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
- R. C. Calloway: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
- I. M. D. Karunathilake: Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA.
- L. P. Decruy: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
- S. Anderson: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA.
- J. Z. Simon: Institute for Systems Research; Department of Electrical and Computer Engineering; Department of Biology, University of Maryland, College Park, MD 20742, USA.
- S. E. Kuchinsky: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA; National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD 20889, USA.
16
Plain B, Pielage H, Kramer SE, Richter M, Saunders GH, Versfeld NJ, Zekveld AA, Bhuiyan TA. Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening. Trends Hear 2024; 28:23312165241232551. [PMID: 38549351 PMCID: PMC10981225 DOI: 10.1177/23312165241232551] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 01/04/2024] [Accepted: 01/25/2024] [Indexed: 04/01/2024] Open
Abstract
In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. K-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
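The pipeline this entry describes (per-trial physiological features, a k-nearest neighbor classifier, k-fold cross-validation) can be sketched generically. The code below uses synthetic data with an invented effect size and is not the study's data or exact pipeline; it only shows the shape of such an analysis with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for seven per-trial features (e.g. baseline pupil
# size, peak/mean dilation, interbeat interval, ...). The 0.8-SD shift
# on one feature for hard trials is purely illustrative.
n_trials = 400
demand = rng.integers(0, 2, n_trials)           # 0 = easy SNR, 1 = hard SNR
X = rng.normal(size=(n_trials, 7))
X[:, 1] += 0.8 * demand                         # harder trials: larger peak dilation

# Scale features, then classify with k-NN; evaluate with k-fold CV
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, demand, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The study's key caveat maps directly onto how the folds are built: pooling trials across participants (as above) is optimistic, whereas holding out whole participants, or fitting one classifier per participant, changes the accuracy picture substantially.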
Affiliation(s)
- Bethany Plain: Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands; Eriksholm Research Centre, Snekkersten, Denmark.
- Hidde Pielage: Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands; Eriksholm Research Centre, Snekkersten, Denmark.
- Sophia E. Kramer: Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands.
- Michael Richter: School of Psychology, Liverpool John Moores University, Liverpool, UK.
- Gabrielle H. Saunders: Manchester Centre for Audiology and Deafness (ManCAD), University of Manchester, Manchester, UK.
- Niek J. Versfeld: Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands.
- Adriana A. Zekveld: Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands.
17
Zhang Y, Callejón-Leblic MA, Picazo-Reina AM, Blanco-Trejo S, Patou F, Sánchez-Gómez S. Impact of SNR, peripheral auditory sensitivity, and central cognitive profile on the psychometric relation between pupillary response and speech performance in CI users. Front Neurosci 2023; 17:1307777. [PMID: 38188029 PMCID: PMC10768066 DOI: 10.3389/fnins.2023.1307777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Accepted: 12/05/2023] [Indexed: 01/09/2024] Open
Abstract
Despite substantial technical advances and wider clinical use, cochlear implant (CI) users continue to report elevated listening effort, especially under challenging noisy conditions. Among all the objective measures to quantify listening effort, pupillometry is one of the most widely used and robust physiological measures. Previous studies with normally hearing (NH) and hearing-impaired (HI) listeners have shown that the relation between speech performance in noise and listening effort (as measured by peak pupil dilation) is not linear and exhibits an inverted-U shape. However, it is unclear whether the same psychometric relation exists in CI users, and whether individual differences in auditory sensitivity and central cognitive capacity affect this relation. Therefore, we recruited 17 post-lingually deaf CI adults to perform speech-in-noise tasks from 0 to 20 dB SNR with a 4 dB step size. Simultaneously, their pupillary responses and self-reported subjective effort were recorded. To characterize top-down and bottom-up individual variabilities, a spectro-temporal modulation task and a set of cognitive abilities were measured. Clinical word recognition in quiet and Quality of Life (QoL) were also collected. Results showed that at a group level, an inverted-U shape psychometric curve between task difficulty (SNR) and peak pupil dilation (PPD) was not observed. Individual shape of the psychometric curve was significantly associated with some individual factors: CI users with higher clinical word and speech-in-noise recognition showed a quadratic decrease of PPD over increasing SNRs; CI users with better non-verbal intelligence and lower QoL showed smaller average PPD. To summarize, individual differences in CI users had a significant impact on the psychometric relation between pupillary response and task difficulty, hence affecting the interpretation of pupillary response as listening effort (or engagement) at different task difficulty levels. Future research and clinical applications should further characterize the possible effects of individual factors (such as motivation or engagement) in modulating the occurrence of a "tipping point" in CI users' psychometric functions, and develop an individualized method for reliably quantifying listening effort using pupillometry.
Affiliation(s)
- Yue Zhang: Department of Research and Technology, Oticon Medical, Vallauris, France.
- M. Amparo Callejón-Leblic: Oticon Medical, Madrid, Spain; ENT Department, Virgen Macarena University Hospital, Seville, Spain; Biomedical Engineering Group, University of Seville, Seville, Spain.
- François Patou: Department of Research and Technology, Oticon Medical, Smørum, Denmark.
18
Bussu G, Portugal AM, Wilsson L, Kleberg JL, Falck-Ytter T. Manipulation of phasic arousal by auditory cues is associated with subsequent changes in visual orienting to faces in infancy. Sci Rep 2023; 13:22072. [PMID: 38086954 PMCID: PMC10716513 DOI: 10.1038/s41598-023-49373-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 12/07/2023] [Indexed: 12/18/2023] Open
Abstract
This eye-tracking study investigated the effect of sound-induced arousal on social orienting under different auditory cue conditions in 5-month-old (n = 25; 13 males) and 10-month-old infants (n = 21; 14 males) participating in a spontaneous visual search task. Results showed: (1) larger pupil dilation discriminating between high and low volume (b = 0.02, p = 0.007), but not between social and non-social sounds (b = 0.004, p = 0.64); (2) faster visual orienting (b = -0.09, p < 0.001) and better social orienting at older age (b = 0.94, p < 0.001); (3) a fast habituation effect on social orienting after high-volume sounds (χ2(2) = 7.39, p = 0.025); (4) a quadratic association between baseline pupil size and target selection (b = -1.0, SE = 0.5, χ2(1) = 4.04, p = 0.045); (5) a positive linear association between pupil dilation and social orienting (b = 0.09, p = 0.039). Findings support adaptive gain theories of arousal, extending the link between phasic pupil dilation and task performance to spontaneous social orienting in infancy.
Affiliation(s)
- Giorgia Bussu: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Von Kraemers Alle 1C, 754 32, Uppsala, Sweden.
- Ana Maria Portugal: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Von Kraemers Alle 1C, 754 32, Uppsala, Sweden.
- Lowe Wilsson: Department of Women's and Children's Health, Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Karolinska Institutet & Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden.
- Johan Lundin Kleberg: Department of Clinical Neuroscience, Centre for Psychiatry Research, Karolinska Institutet & Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden; Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm, Sweden.
- Terje Falck-Ytter: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Von Kraemers Alle 1C, 754 32, Uppsala, Sweden; Department of Women's and Children's Health, Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Karolinska Institutet & Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden.
19
Kraus F, Obleser J, Herrmann B. Pupil Size Sensitivity to Listening Demand Depends on Motivational State. eNeuro 2023; 10:ENEURO.0288-23.2023. [PMID: 37989588 PMCID: PMC10734370 DOI: 10.1523/eneuro.0288-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Revised: 10/19/2023] [Accepted: 10/22/2023] [Indexed: 11/23/2023] Open
Abstract
Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated that pupil-linked arousal is sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times on a trial-by-trial within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state.
Affiliation(s)
- Frauke Kraus: Department of Psychology, University of Lübeck, 23562 Lübeck, Germany; Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany.
- Jonas Obleser: Department of Psychology, University of Lübeck, 23562 Lübeck, Germany; Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany.
- Björn Herrmann: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto M6A 2E1, Ontario, Canada; Department of Psychology, University of Toronto, Toronto M5S 3G3, Ontario, Canada.
20
Carraturo S, McLaughlin DJ, Peelle JE, Van Engen KJ. Pupillometry reveals differences in cognitive demands of listening to face mask-attenuated speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 154:3973-3985. [PMID: 38149818 DOI: 10.1121/10.0023953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 11/29/2023] [Indexed: 12/28/2023]
Abstract
Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in response to each mask relative to the no-mask condition and differed significantly where acoustic attenuation was most prominent. These results suggest that the acoustic impact of the mask drives not only the intelligibility of speech, but also the cognitive demands of listening. Subjective effort ratings reflected the same trends as the pupil data.
Affiliation(s)
- Sita Carraturo: Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA.
- Drew J McLaughlin: Basque Center on Cognition, Brain and Language, San Sebastian, Basque Country 20009, Spain.
- Jonathan E Peelle: Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts 02115, USA.
- Kristin J Van Engen: Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA.
21
Ziereis A, Schacht A. Gender congruence and emotion effects in cross-modal associative learning: Insights from ERPs and pupillary responses. Psychophysiology 2023; 60:e14380. [PMID: 37387451 DOI: 10.1111/psyp.14380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 05/01/2023] [Accepted: 06/17/2023] [Indexed: 07/01/2023]
Abstract
Social and emotional cues from faces and voices are highly relevant and have been reliably demonstrated to attract attention involuntarily. However, there are mixed findings as to what degree associating emotional valence with faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with either positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without explicit emotion judgments of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized regarding gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was automatically processed. However, ERPs time-locked to the conditioned faces were mainly modulated by the task-relevant information, that is, the gender congruence of the face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after removing the auditory stimuli. These findings indicate successful associative learning in our paradigm, but it did not extend to the task-irrelevant dimension of emotional relevance. Therefore, cross-modal associations of emotional relevance may not be completely automatic, even though the emotion was processed in the voice.
Affiliation(s)
- Annika Ziereis: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany.
- Anne Schacht: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany.
22
Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023; 14:1232262. [PMID: 38023001 PMCID: PMC10646334 DOI: 10.3389/fpsyg.2023.1232262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 10/12/2023] [Indexed: 12/01/2023] Open
Abstract
Introduction The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder. Results No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
Affiliation(s)
- Julia S. C. Chiossi: Oticon A/S, Smørum, Denmark; Department of Special Needs Education, University of Oslo, Oslo, Norway.
- Elaine Hoi Ning Ng: Oticon A/S, Smørum, Denmark; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden.
- Björn Lyxell: Department of Special Needs Education, University of Oslo, Oslo, Norway.
23
An H, Lee J, Suh MW, Lim Y. Neural correlation of speech envelope tracking for background noise in normal hearing. Front Neurosci 2023; 17:1268591. [PMID: 37916182 PMCID: PMC10616241 DOI: 10.3389/fnins.2023.1268591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Accepted: 10/04/2023] [Indexed: 11/03/2023] Open
Abstract
Everyday speech communication often occurs in environments with background noise, and the impact of noise on speech recognition can vary depending on factors such as noise type, noise intensity, and the listener's hearing ability. However, the extent to which neural mechanisms in speech understanding are influenced by different types and levels of noise remains unknown. This study aims to investigate whether individuals exhibit distinct neural responses and attention strategies depending on noise conditions. We recorded electroencephalography (EEG) data from 20 participants with normal hearing (13 males) and evaluated both neural tracking of speech envelopes and behavioral performance in speech understanding in the presence of varying types of background noise. Participants engaged in an EEG experiment consisting of two separate sessions. The first session involved listening to a 12-min story presented binaurally without any background noise. In the second session, speech understanding scores were measured using matrix sentences presented under speech-shaped noise (SSN) and Story-noise background conditions at noise levels corresponding to the sentence recognition score (SRS). We observed differences in neural envelope correlation depending on noise type but not on its level. Interestingly, the impact of noise type on the variation in envelope tracking was more significant among participants with higher speech perception scores, while those with lower scores exhibited similarities in envelope correlation regardless of the noise condition. The findings suggest that even individuals with normal hearing could adopt different strategies to understand speech in challenging listening environments, depending on the type of noise.
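"Neural tracking of the speech envelope," as in this entry, is commonly quantified as a correlation between the speech amplitude envelope and the (lagged) neural response. The following is a minimal sketch with fully simulated signals, assuming a Hilbert-based envelope and a simple Pearson correlation; actual studies typically use regularized encoding/decoding models rather than this shortcut:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 128  # Hz

def envelope(signal, fs, cutoff=8.0):
    """Broadband amplitude envelope: Hilbert magnitude, low-pass filtered."""
    env = np.abs(hilbert(signal))
    b, a = butter(3, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

rng = np.random.default_rng(1)
# Amplitude-modulated noise as a crude stand-in for speech (~4 Hz syllable rate)
n = fs * 10
speech = rng.normal(size=n) * (1 + np.sin(2 * np.pi * 4 * np.arange(n) / fs))
env = envelope(speech, fs)

# Simulated "EEG": the envelope delayed by ~100 ms plus noise (illustrative only)
lag = int(0.1 * fs)
eeg = np.roll(env, lag) + rng.normal(scale=env.std(), size=n)

# Envelope tracking as a lag-compensated Pearson correlation
r = np.corrcoef(env[:-lag], eeg[lag:])[0, 1]
```

With the simulated delay compensated, `r` comes out well above zero, which is the basic logic behind interpreting envelope correlations as tracking strength.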
Affiliation(s)
- HyunJung An
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- JeeWon Lee
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul, Republic of Korea
- Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Yoonseob Lim
- Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Department of HY-KIST Bio-convergence, Hanyang University, Seoul, Republic of Korea

24
Aliakbaryhosseinabadi S, Keidser G, May T, Dau T, Wendt D, Rotger-Griful S. The Effects of Noise and Simulated Conductive Hearing Loss on Physiological Response Measures During Interactive Conversations. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:4009-4024. [PMID: 37625145 DOI: 10.1044/2023_jslhr-23-00063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2023]
Abstract
PURPOSE The purpose of this work was to study the effects of background noise and hearing attenuation associated with earplugs on three physiological measures, assumed to be markers of effort investment and arousal, during interactive communication. METHOD Twelve pairs of older people (average age of 63.2 years) with age-adjusted normal hearing took part in face-to-face communication to solve a Diapix task. Communication took place in different levels of babble noise (0, 60, and 70 dBA) and, in quiet, with two levels of hearing attenuation (0 and 25 dB). The physiological measures obtained included pupil size, heart rate variability, and skin conductance. In addition, subjective ratings of perceived communication success, frustration, and effort were obtained. RESULTS Ratings of perceived success, frustration, and effort confirmed that communication was more difficult in noise and with approximately 25-dB hearing attenuation and suggested that the implemented levels of noise and hearing attenuation resulted in comparable communication difficulties. Background noise at 70 dBA and hearing attenuation both led to an initial increase in pupil size (associated with effort), but only the effect of the background noise was sustained throughout the conversation. The 25-dB hearing attenuation led to a significant decrease of the high-frequency power of heart rate variability and a significant increase of skin conductance level, measured as the average z value of the electrodermal activity amplitude. CONCLUSION This study demonstrated that several physiological measures appear to be viable indicators of changing communication conditions, with pupillometric, cardiovascular, and electrodermal measures potentially being markers of communication difficulty.
Affiliation(s)
- Susan Aliakbaryhosseinabadi
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Hearing System Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Gitte Keidser
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioral Sciences and Learning, Linnaeus Center HEAD, Linköping University, Sweden
- Tobias May
- Hearing System Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Torsten Dau
- Hearing System Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Dorothea Wendt
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Hearing System Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark

25
Kuchinsky SE, Razeghi N, Pandža NB. Auditory, Lexical, and Multitasking Demands Interactively Impact Listening Effort. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:4066-4082. [PMID: 37672797 PMCID: PMC10713022 DOI: 10.1044/2023_jslhr-22-00548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 03/12/2023] [Accepted: 06/27/2023] [Indexed: 09/08/2023]
Abstract
PURPOSE This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD Using a dual-task paradigm, on each trial, participants were instructed to perform either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants were instructed to press a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the visualized number was even or odd (more complex task). RESULTS Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was only observable for words with lower lexical frequency and only when multitasking demands were relatively simpler. CONCLUSIONS This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results of this study also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.
Affiliation(s)
- Stefanie E. Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Niki Razeghi
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Nick B. Pandža
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park
- Program in Second Language Acquisition, University of Maryland, College Park
- Maryland Language Science Center, University of Maryland, College Park

26
Zekveld AA, Pielage H, Versfeld NJ, Kramer SE. The Influence of Hearing Loss on the Pupil Response to Degraded Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:4083-4099. [PMID: 37699194 DOI: 10.1044/2023_jslhr-23-00093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/14/2023]
Abstract
PURPOSE Current evidence regarding the influence of hearing loss on the pupil response elicited by speech perception is inconsistent. This might be partially due to confounding effects of age. This study aimed to compare pupil responses in age-matched groups of normal-hearing (NH) and hard of hearing (HH) listeners during listening to speech. METHOD We tested the baseline pupil size and the mean and peak pupil dilation responses of 17 NH participants (Mage = 46 years; age range: 20-62 years) and 17 HH participants (Mage = 45 years; age range: 20-63 years) who were pairwise matched on age and educational level. Participants performed three speech perception tasks at a 50% intelligibility level: noise-vocoded speech and speech masked with either stationary noise or interfering speech. They also listened to speech presented in quiet. RESULTS Hearing loss was associated with poorer speech perception, except for noise-vocoded speech. In contrast to NH participants, performance of HH participants did not improve across trials for the interfering speech condition, and it decreased for speech in stationary noise. HH participants had a smaller mean pupil dilation in degraded speech conditions compared to NH participants, but not for speech in quiet. They also had a steeper decline in the baseline pupil size across trials. The baseline pupil size was smaller for noise-vocoded speech as compared to the other conditions. The normalized data showed an additional group effect on the baseline pupil response. CONCLUSIONS Hearing loss is associated with a smaller pupil response and a steeper decline in baseline pupil size during the perception of degraded speech. This suggests that the HH participants had difficulty sustaining their effort investment and performance across the test session.
Affiliation(s)
- Adriana A Zekveld
- Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Hidde Pielage
- Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Niek J Versfeld
- Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands
- Sophia E Kramer
- Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands
- Amsterdam Public Health Research Institute, the Netherlands

27
Ren W, Huang K, Li Y, Yang Q, Wang L, Guo K, Wei P, Zhang YQ. Altered pupil responses to social and non-social stimuli in Shank3 mutant dogs. Mol Psychiatry 2023; 28:3751-3759. [PMID: 37848709 DOI: 10.1038/s41380-023-02277-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2023] [Revised: 06/21/2023] [Accepted: 09/14/2023] [Indexed: 10/19/2023]
Abstract
Pupillary response, an important process in visual perception and social and emotional cognition, has been widely studied to understand the neural mechanisms of neuropsychiatric disorders. However, there have been few studies on pupil responses to social and non-social stimuli in animal models of neurodevelopmental disorders, including autism spectrum disorder (ASD) and attention deficit hyperactivity disorder. Here, we developed a pupillometer using a robust eye feature-detection algorithm for real-time pupillometry in dogs. In a pilot study, we found that a brief light flash induced a less pronounced and slower pupil dilation response in gene-edited dogs carrying mutations in Shank3; mutations of its human ortholog have repeatedly been identified in ASD patients. We further found that an obnoxious, loud firecracker sound of 120 dB induced a stronger and longer pupil dilation response in Shank3 mutant dogs, whereas a high-reward food induced a weaker pupillary response in Shank3 mutants than in wild-type control dogs. In addition, we found that Shank3 mutants showed compromised pupillary synchrony during dog-human interaction. These findings of altered pupil responses in Shank3 mutant dogs recapitulate the altered sensory responses of ASD patients. Thus, this study demonstrates the validity and value of the pupillometer for dogs and provides an effective paradigm for studying the underlying neural mechanisms of ASD and potentially other psychiatric disorders.
Affiliation(s)
- Wei Ren
- State Key Laboratory for Molecular Developmental Biology, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, 100101, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
- Kang Huang
- Shenzhen Bayone BioTech Co. Ltd, Shenzhen, 518100, China
- Yumo Li
- State Key Laboratory for Molecular Developmental Biology, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, 100101, China
- College of Life Sciences, Beijing Normal University, Beijing, 100875, China
- Qin Yang
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Liping Wang
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Pengfei Wei
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Yong Q Zhang
- State Key Laboratory for Molecular Developmental Biology, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, 100101, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
- School of Life Sciences, Hubei University, Wuhan, 430415, China

28
Pielage H, Plain BJ, Saunders GH, Versfeld NJ, Lunner T, Kramer SE, Zekveld AA. Copresence Was Found to Be Related to Some Pupil Measures in Persons With Hearing Loss While They Performed a Speech-in-Noise Task. Ear Hear 2023; 44:1190-1201. [PMID: 37012623 PMCID: PMC10426789 DOI: 10.1097/aud.0000000000001361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 02/07/2023] [Indexed: 04/05/2023]
Abstract
OBJECTIVES To assess whether a manipulation of copresence was related to speech-in-noise task performance, arousal, and effort in persons with hearing loss. Task-related arousal and effort were measured by means of pupillometry. DESIGN Twenty-nine participants (mean age: 64.6 years) with hearing loss (4-frequency pure-tone average [4F-PTA] of 50.2 dB HL [SD = 8.9 dB] in the right ear and 51.3 dB HL [SD = 8.7 dB] in the left ear; averaged across 0.5, 1, 2, and 4 kHz) listened to and repeated spoken Danish sentences that were masked by four streams of continuous speech. Participants were presented with blocks of 20 sentences, during which copresence was manipulated by having participants do the task either alone or accompanied by two observers recruited from a similar age group. The task was presented at two difficulty levels, accomplished by fixing the signal-to-noise ratio of the speech and masker to the thresholds at which participants were estimated to correctly repeat 50% (difficult) or 80% (easy) of the sentences in a block. Performance was assessed based on whether or not sentences were repeated correctly. Measures of pupil size (baseline pupil size [BPS], peak pupil dilation [PPD], and mean pupil dilation [MPD]) were used to index arousal and effort. Participants also completed ratings of subjective effort and stress after each block of sentences and a self-efficacy-for-listening questionnaire. RESULTS Task performance was not associated with copresence but was found to be related to 4F-PTA. An increase in BPS was found for copresence conditions compared to alone conditions. Furthermore, a post hoc exploratory analysis revealed that the copresence conditions were associated with a significantly larger pupil size in the second half of the task-evoked pupil response (TEPR). No change in PPD or MPD was detected between copresence and alone conditions. Self-efficacy, 4F-PTA, and age were not found to be related to the pupil data. Subjective ratings were sensitive to task difficulty but not to copresence. CONCLUSION Copresence was not found to be related to speech-in-noise performance, PPD, or MPD in persons with hearing loss but was associated with an increase in arousal (as indicated by a larger BPS). This could be related to premobilization of effort and/or discomfort in response to the observers' presence. Furthermore, an exploratory analysis of the pupil data showed that copresence was associated with greater pupil dilations in the second half of the TEPR. This may indicate that participants invested more effort during the speech-in-noise task while in the presence of the observers, but that this increase in effort may not necessarily have been related to listening itself. Instead, other speech-in-noise task-related processes, such as preparing to respond, could have been influenced by copresence.
Affiliation(s)
- Hidde Pielage
- Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, section Ear & Hearing, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Bethany J. Plain
- Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, section Ear & Hearing, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Gabrielle H. Saunders
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Niek J. Versfeld
- Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, section Ear & Hearing, Amsterdam, the Netherlands
- Sophia E. Kramer
- Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, section Ear & Hearing, Amsterdam, the Netherlands
- Adriana A. Zekveld
- Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, section Ear & Hearing, Amsterdam, the Netherlands

29
Cui ME, Herrmann B. Eye Movements Decrease during Effortful Speech Listening. J Neurosci 2023; 43:5856-5869. [PMID: 37491313 PMCID: PMC10423048 DOI: 10.1523/jneurosci.0240-23.2023] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 06/09/2023] [Accepted: 07/18/2023] [Indexed: 07/27/2023] Open
Abstract
Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry-the most widely used approach to assess listening effort-has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants from both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for the assessment of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful. SIGNIFICANCE STATEMENT Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used approach but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.
Affiliation(s)
- M Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada

30
Villard S, Perrachione TK, Lim SJ, Alam A, Kidd G. Energetic and informational masking place dissociable demands on listening effort: Evidence from simultaneous electroencephalography and pupillometry. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 154:1152-1167. [PMID: 37610284 PMCID: PMC10449482 DOI: 10.1121/10.0020539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 07/09/2023] [Accepted: 07/14/2023] [Indexed: 08/24/2023]
Abstract
The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by specific tasks. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech maskers, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise maskers) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins.
Affiliation(s)
- Sarah Villard
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ayesha Alam
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA

31
Martohardjono G, Johns MA, Franciotti P, Castillo D, Porru I, Lowry C. Use of the first-acquired language modulates pupil size in the processing of island constraint violations. Front Psychol 2023; 14:1180989. [PMID: 37519378 PMCID: PMC10382202 DOI: 10.3389/fpsyg.2023.1180989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 06/15/2023] [Indexed: 08/01/2023] Open
Abstract
Introduction Traditional studies of the population called "heritage speakers" (HS) have treated this group as distinct from other bilingual populations, e.g., simultaneous or late bilinguals (LB), focusing on group differences in the competencies of the first-acquired language or "heritage language". While several explanations have been proposed for such differences (e.g., incomplete acquisition, attrition, differential processing mechanisms), few have taken into consideration the individual variation that must occur, due to the fluctuation of factors such as exposure and use that characterize all bilinguals. In addition, few studies have used implicit measures, e.g., psychophysiological methods (ERPs; eye-tracking), that can circumvent confounding variables such as reliance on conscious metalinguistic knowledge. Methodology This study uses pupillometry, a method that has only recently been used in psycholinguistic studies of bilingualism, to investigate pupillary responses to three syntactic island constructions in two groups of Spanish/English bilinguals: heritage speakers and late bilinguals. Data were analyzed using generalized additive mixed effects models (GAMMs), and two models were created and compared to one another: one with group (LB/HS) and the other with groups collapsed and current and historical use of Spanish as continuous variables. Results Results show that group-based models generally yield conflicting results, while models collapsing groups and using usage as a predictor yield consistent ones. In particular, current use predicts sensitivity to L1 ungrammaticality across both HS and LB populations. We conclude that individual variation, as measured by use, is a critical factor that must be taken into account in the description of the language competencies and processing of heritage and late bilinguals alike.
Affiliation(s)
- Gita Martohardjono
- Department of Linguistics and Communication Disorders, Queens College, New York, NY, United States
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Michael A. Johns
- Institute for Systems Research, University of Maryland, College Park, MD, United States
- Pamela Franciotti
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Daniela Castillo
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Ilaria Porru
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States
- Cass Lowry
- Second Language Acquisition Laboratory, Linguistics Program, The Graduate Center of the City University of New York, New York, NY, United States

32
Van Opstal AJ, Noordanus E. Towards personalized and optimized fitting of cochlear implants. Front Neurosci 2023; 17:1183126. [PMID: 37521701 PMCID: PMC10372492 DOI: 10.3389/fnins.2023.1183126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 06/21/2023] [Indexed: 08/01/2023] Open
Abstract
A cochlear implant (CI) is a neurotechnological device that restores hearing in people with total sensorineural hearing loss. It contains a sophisticated speech processor that analyzes and transforms the acoustic input. It distributes its time-enveloped spectral content to the auditory nerve as electrical pulsed stimulation trains of selected frequency channels on a multi-contact electrode that is surgically inserted in the cochlear duct. This remarkable brain interface enables the deaf to regain hearing and understand speech. However, tuning the large (>50) number of parameters of the speech processor, so-called "device fitting," is a tedious and complex process, which is mainly carried out in the clinic through "one-size-fits-all" procedures. Current fitting typically relies on limited and often subjective data that must be collected in limited time. Despite the success of the CI as a hearing-restoration device, variability in speech-recognition scores among users is still very large, and mostly unexplained. The major factors that underlie this variability span three levels: (i) variability in auditory-system malfunction of CI users, (ii) variability in the selectivity of electrode-to-auditory nerve (EL-AN) activation, and (iii) lack of objective perceptual measures to optimize the fitting. We argue that variability in speech recognition can only be alleviated by using objective patient-specific data for an individualized fitting procedure, which incorporates knowledge from all three levels. In this paper, we propose a series of experiments aimed at collecting a large amount of objective (i.e., quantitative, reproducible, and reliable) data that characterize the three processing levels of the user's auditory system.
Machine-learning algorithms that process these data will eventually enable the clinician to derive reliable and personalized characteristics of the user's auditory system, the quality of EL-AN signal transfer, and predictions of the perceptual effects of changes in the current fitting.
33
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148 PMCID: PMC10328119 DOI: 10.3389/fpsyg.2023.1188485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 06/05/2023] [Indexed: 07/11/2023] Open
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high accuracy condition (+10 dB and +6 dB, for children and adults, respectively) and a low accuracy condition (+5 dB and +2 dB, for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low accuracy condition. In the second phase (retention), only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory-processing peak dilation. These findings support the notion of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children, in order to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit: Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick: Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli: Department of Gerontology, University of Haifa, Haifa, Israel; Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead: Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel; Meuhedet Health Services, Tel Aviv, Israel

34
Sulas E, Hasan PY, Zhang Y, Patou F. Streamlining experiment design in cognitive hearing science using OpenSesame. Behav Res Methods 2023; 55:1965-1979. [PMID: 35794416] [PMCID: PMC10250502] [DOI: 10.3758/s13428-022-01886-5]
Abstract
Auditory science increasingly builds on concepts and testing paradigms that originated in behavioral psychology and cognitive neuroscience - an evolution whose resulting discipline is now known as cognitive hearing science. Experimental cognitive hearing science paradigms call for hybrid cognitive and psychobehavioral tests, such as those relating the attentional system, working memory, and executive functioning to low-level auditory acuity or speech intelligibility. Building complex multi-stimuli experiments can rapidly become time-consuming and error-prone. Platform-based experiment design can help streamline the implementation of cognitive hearing science experimental paradigms, promote the standardization of experiment design practices, and ensure reliability and control. Here, we introduce a set of features for the open-source Python-based OpenSesame platform that allows the rapid implementation of custom behavioral and cognitive hearing science tests, including complex multichannel audio stimuli, while interfacing with various synchronous inputs/outputs. Our integration includes advanced audio playback capabilities with multiple loudspeakers, an adaptive procedure, and compatibility with standard I/Os and their synchronization through an implementation of the Lab Streaming Layer protocol. We exemplify the capabilities of this extended OpenSesame platform with an implementation of the three-alternative forced choice amplitude modulation detection test and discuss reliability and performance. The new features are available free of charge from GitHub: https://github.com/elus-om/BRM_OMEXP.
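The abstract does not specify which adaptive procedure the extension implements; a common choice for forced-choice detection tasks is a transformed up-down staircase. A minimal sketch, assuming a 1-up/2-down rule (which converges on roughly 70.7% correct) with a hypothetical start level and step size:

```python
# Minimal 1-up/2-down adaptive staircase, as commonly used for
# 3-AFC amplitude-modulation detection thresholds (illustrative sketch;
# the OpenSesame extension's actual procedure may differ).

def run_staircase(respond, start=-10.0, step=2.0, n_reversals=6):
    """Track the stimulus level (e.g., modulation depth in dB) that
    yields ~70.7% correct.

    `respond(level)` must return True for a correct trial at `level`.
    Returns the mean of the recorded reversal levels as the threshold
    estimate.
    """
    level, direction = start, 0
    correct_streak, reversals = 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:        # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:        # direction change = reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                              # one wrong -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)
```

With a deterministic simulated listener, e.g. `run_staircase(lambda level: level >= -14.0)`, the track oscillates around the simulated threshold and the estimate lands between the reversal levels.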
35
Bonmassar C, Scharf F, Widmann A, Wetzel N. On the relationship of arousal and attentional distraction by emotional novel sounds. Cognition 2023; 237:105470. [PMID: 37150156] [DOI: 10.1016/j.cognition.2023.105470]
Abstract
Unexpected and task-irrelevant sounds can impair performance in a task. It has been shown that highly arousing emotional distractor sounds impair performance less than moderately arousing neutral distractor sounds. The present study tests whether these differential emotion-related distraction effects are directly related to an enhancement of arousal evoked by the processing of emotional distractor sounds. We disentangled the costs of orienting of attention and the benefits of increased arousal levels during the presentation of highly arousing emotional and moderately arousing neutral novel sounds that were embedded in a sequence of repeated standard sounds. We used sound-related pupil dilation responses as a marker of arousal and RTs as a marker of distraction in a visual categorization task in 57 healthy young adults. Multilevel analyses revealed increased RTs and increased pupil dilation in response to novel vs. standard sounds. Emotional novel sounds reduced distraction effects at the behavioral level and increased pupil dilation responses compared to neutral novel sounds. Bayes factors revealed strong evidence against an inverse proportional relationship between behavioral distraction effects and sound-related pupil dilation responses for emotional sounds. Given that the activity of the locus coeruleus has been linked to both changes in pupil diameter and arousal, it may mediate an indirect relationship as a common antecedent, releasing norepinephrine into brain networks involved in attentional control and in control of the pupil. The present study provides new insights into the relationship between changes in arousal and attentional distraction during the processing of emotional task-irrelevant novel sounds.
Affiliation(s)
- Andreas Widmann: Leibniz Institute for Neurobiology, Magdeburg, Germany; Leipzig University, Germany
- Nicole Wetzel: Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Stendal, Germany

36
Winn MB. Time Scales and Moments of Listening Effort Revealed in Pupillometry. Semin Hear 2023; 44:106-123. [PMID: 37122881] [PMCID: PMC10147502] [DOI: 10.1055/s-0043-1767741]
Abstract
This article offers a collection of observations that highlight the value of time course data in pupillometry and points out ways in which these observations create deeper understanding of listening effort. The main message is that listening effort should be considered on a moment-to-moment basis rather than as a singular amount. A review of various studies and the reanalysis of data reveal distinct signatures of effort before a stimulus, during a stimulus, in the moments after a stimulus, and changes over whole experimental testing sessions. Collectively these observations motivate questions that extend beyond the "amount" of effort, toward understanding how long the effort lasts, and how precisely someone can allocate effort at specific points in time or reduce effort at other times. Apparent disagreements between studies are reconsidered as informative lessons about stimulus selection and the nature of pupil dilation as a reflection of decision making rather than the difficulty of sensory encoding.
Affiliation(s)
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota

37
Kooijman L, Asadi H, Mohamed S, Nahavandi S. A virtual reality study investigating the train illusion. R Soc Open Sci 2023; 10:221622. [PMID: 37063997] [PMCID: PMC10090874] [DOI: 10.1098/rsos.221622]
Abstract
The feeling of self-movement that occurs in the absence of physical motion is often referred to as vection, which is commonly exemplified using the train illusion analogy (TIA). Limited research exists on whether the TIA accurately exemplifies the experience of vection in virtual environments (VEs). Few studies have complemented their vection research with participants' qualitative feedback or physiological recordings, and most studies used stimuli that contextually differed from the TIA. We investigated whether vection is experienced differently in a VE replicating the TIA compared to a VE depicting optic flow by recording subjective and physiological responses. Additionally, we explored participants' experience through an open-question survey. We expected the TIA environment to induce enhanced vection compared to the optic flow environment. Twenty-nine participants were visually and aurally immersed in VEs that either depicted optic flow or replicated the TIA. Results showed that optic flow elicited more compelling vection than the TIA environment, and no consistent physiological correlates of vection were identified. The post-experiment survey revealed discrepancies between participants' quantitative and qualitative feedback. Although the dynamic content may outweigh the ecological relevance of the stimuli, it was concluded that more qualitative research is needed to understand participants' vection experience in VEs.
Affiliation(s)
- Lars Kooijman: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Houshyar Asadi: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Shady Mohamed: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia
- Saeid Nahavandi: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, Victoria, Australia; Harvard Paulson School of Engineering and Applied Sciences, Harvard University, Allston, MA 02134, USA

38
Gosselin L, Sabourin L. Language athletes: Dual-language code-switchers exhibit inhibitory control advantages. Front Psychol 2023; 14:1150159. [PMID: 37063556] [PMCID: PMC10102468] [DOI: 10.3389/fpsyg.2023.1150159]
Abstract
Recent studies have begun to examine bilingual cognition from more nuanced, experience-based perspectives. The present study adds to this body of work by investigating the potential impact of code-switching on bilinguals' inhibitory control abilities. Crucially, our bilingual participants originated from a predominantly dual-language environment, the interactional context which is believed to require (and therefore potentially train) cognitive control processes related to goal-monitoring and inhibition. As such, 266 French Canadian bilinguals completed an online experiment wherein they were asked to complete a domain-general (Flanker) and a language-specific (bilingual Stroop) inhibitory control task, as well as extensive demographic and language background questionnaires. Stepwise multiple regressions (including various potential demographic and linguistic predictors) were conducted on the participants' Flanker and Stroop effects. The results indicated that the bilinguals' propensity to code-switch consistently yielded significant positive (but unidirectional) inhibitory control effects: dual-language bilinguals who reported more habitual French-to-English switching exhibited better goal-monitoring and inhibition abilities. For the language-specific task, the analysis also revealed that frequent unintentional code-switching may mitigate these inhibition skills. As such, the findings demonstrate that dual-language code-switchers may experience inhibitory control benefits, but only when their switching is self-reportedly deliberate. We conclude that the bilinguals' interactional context is thus of primary importance, as the dual-language context is more conducive to intentional code-switching. Overall, the current study highlights the importance of considering individual language experience when examining potential bilingual executive functioning advantages.
39
Neagu MB, Kressner AA, Relaño-Iborra H, Bækgaard P, Dau T, Wendt D. Investigating the Reliability of Pupillometry as a Measure of Individualized Listening Effort. Trends Hear 2023; 27:23312165231153288. [PMCID: PMC9947699] [DOI: 10.1177/23312165231153288]
Abstract
Recordings of the pupillary response have been used in numerous studies to assess listening effort during a speech-in-noise task. Most studies focused on averaged responses across listeners, whereas less is known about pupil dilation as an indicator of the individual's listening effort. The present study investigated the reliability of several pupil features as potential indicators of individual listening effort and the impact of different normalization procedures on that reliability. The pupil diameters of 31 normal-hearing listeners were recorded during multiple visits while performing a speech-in-noise task. The signal-to-noise ratios (SNRs) of the stimuli ranged from −12 dB to +4 dB. All listeners were measured twice at separate visits, and 11 were re-tested at a third visit. To examine the reliability of the pupil responses across visits, the intraclass correlation coefficient was applied to the peak and mean pupil dilation and to the temporal features of the pupil response, extracted using growth curve analysis. The reliability of the pupillary response was assessed in relation to SNR and different normalization procedures over multiple visits. The most reliable pupil features were the traditional mean and peak pupil dilation. The highest reliability results were obtained when the data were baseline-corrected and normalized to the individual pupil response range across all visits. Moreover, the present study results showed only a minor impact of the SNR and the number of visits on the reliability of the pupil response. Overall, the results may provide an important basis for developing a standardized test for pupillometry in the clinic.
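The study's most reliable features (mean and peak pupil dilation), after baseline correction and normalization to the individual pupil response range, can be sketched as follows. This is an illustrative reduction: the exact preprocessing pipeline, and how the per-listener range is estimated across visits, may differ from what is shown.

```python
import numpy as np

def pupil_features(trace, baseline_samples, pupil_range):
    """Baseline-correct a single-trial pupil trace, normalize it to the
    listener's individual pupil response range, and extract the two
    features reported as most reliable: mean and peak dilation.

    trace            : 1-D array of pupil diameters over the trial
    baseline_samples : number of initial samples forming the
                       pre-stimulus baseline
    pupil_range      : the individual's dilation range (a per-listener
                       constant, e.g., estimated across all visits)
    """
    trace = np.asarray(trace, dtype=float)
    baseline = trace[:baseline_samples].mean()
    # Subtractive baseline correction, then range normalization.
    corrected = (trace - baseline) / pupil_range
    return corrected.mean(), corrected.max()
```

In practice these per-trial features would then feed an intraclass correlation analysis across visits, as the study describes.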
Affiliation(s)
- Mihaela-Beatrice Neagu: Department of Health Technology, DTU Hearing Systems, Denmark
- Abigail A. Kressner: Department of Health Technology, DTU Hearing Systems, Denmark; Copenhagen Hearing and Balance Centre, Rigshospitalet, Copenhagen University Hospital, Denmark
- Helia Relaño-Iborra: Department of Health Technology, DTU Hearing Systems, Denmark; Department of Applied Mathematics and Computer Science, DTU Cognitive Systems, Denmark
- Per Bækgaard: Department of Applied Mathematics and Computer Science, DTU Cognitive Systems, Denmark
- Torsten Dau: Department of Health Technology, DTU Hearing Systems, Denmark; Copenhagen Hearing and Balance Centre, Rigshospitalet, Copenhagen University Hospital, Denmark
- Dorothea Wendt: Department of Health Technology, DTU Hearing Systems, Denmark; Eriksholm Research Centre, Denmark

40
Rutar D, Colizoli O, Selen L, Spieß L, Kwisthout J, Hunnius S. Differentiating between Bayesian parameter learning and structure learning based on behavioural and pupil measures. PLoS One 2023; 18:e0270619. [PMID: 36795714] [PMCID: PMC9934335] [DOI: 10.1371/journal.pone.0270619]
Abstract
Within predictive processing two kinds of learning can be distinguished: parameter learning and structure learning. In Bayesian parameter learning, parameters under a specific generative model are continuously being updated in light of new evidence. However, this learning mechanism cannot explain how new parameters are added to a model. Structure learning, unlike parameter learning, makes structural changes to a generative model by altering its causal connections or adding or removing parameters. Whilst these two types of learning have recently been formally differentiated, they have not been empirically distinguished. The aim of this research was to empirically differentiate between parameter learning and structure learning on the basis of how they affect pupil dilation. Participants took part in a within-subject computer-based learning experiment with two phases. In the first phase, participants had to learn the relationship between cues and target stimuli. In the second phase, they had to learn a conditional change in this relationship. Our results show that the learning dynamics were indeed qualitatively different between the two experimental phases, but in the opposite direction to what we originally expected. Participants were learning more gradually in the second phase compared to the first phase. This might imply that participants built multiple models from scratch in the first phase (structure learning) before settling on one of these models. In the second phase, participants possibly just needed to update the probability distribution over the model parameters (parameter learning).
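Bayesian parameter learning in the sense used above - updating the distribution over parameters of a fixed generative model as evidence arrives - can be illustrated with the simplest conjugate case, a Beta posterior over a binary cue-target contingency. This is a didactic sketch, not the authors' model.

```python
def beta_update(alpha, beta, observations):
    """Sequentially update a Beta(alpha, beta) prior with binary outcomes:
    each True (cue followed by target) increments alpha, each False
    increments beta. Structure learning, by contrast, would add or remove
    parameters rather than just re-weighting these two counts.
    """
    for hit in observations:
        if hit:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def posterior_mean(alpha, beta):
    """Point estimate of the cue-target contingency under the posterior."""
    return alpha / (alpha + beta)
```

Starting from a flat Beta(1, 1) prior, each observation shifts the posterior mean toward the observed contingency without changing the model's structure.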
Affiliation(s)
- Danaja Rutar: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Olympia Colizoli: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Luc Selen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Johan Kwisthout: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Sabine Hunnius: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

41
Human voices escape the auditory attentional blink: Evidence from detections and pupil responses. Brain Cogn 2023; 165:105928. [PMID: 36459865] [DOI: 10.1016/j.bandc.2022.105928]
Abstract
Attentional selection of a second target in a rapid stream of stimuli tends to be briefly impaired when two targets are presented in close temporal proximity, an effect known as the attentional blink (AB). Two target sounds (T1 and T2) were embedded in a rapid serial auditory presentation of environmental sounds with a short (Lag 3) or long (Lag 9) lag. Participants first identified T1 (bell or sine tone) and then detected T2 (present or absent). Individual stimuli had durations of either 30 or 90 ms and were presented in streams of 20 sounds. The T2 varied in category: human voice, cello, or dog sound. Previous research has introduced pupillometry as a useful marker of the intensity of cognitive processing and attentional allocation in the visual AB paradigm. Results suggest that the interplay of stimulus factors is critical for target detection accuracy and provide support for the hypothesis that the human voice is the least likely to show an auditory AB (in the 90 ms condition). For the other stimuli, accuracy for T2 was significantly worse at Lag 3 than at Lag 9 in the 90 ms condition, suggesting the presence of an auditory AB. When the AB occurred (at Lag 3), we observed smaller pupil dilations, time-locked to the onset of T2, compared to Lag 9, reflecting lower attentional processing when 'blinking' during target detection. Taken together, these findings support the conclusion that human voices escape the AB and that the pupillary changes are consistent with the so-called T2 attentional deficit. In addition, we found some indication that salient stimuli like human voices may require a less intense allocation of attention, or noradrenergic potentiation, compared to other auditory stimuli.
42
Wentzel M, Janse van Rensburg J, Terblans JJ. Radiology blues: Comparing occupational blue-light exposure to recommended safety standards. SA J Radiol 2023; 27:2522. [PMID: 36756358] [PMCID: PMC9900293] [DOI: 10.4102/sajr.v27i1.2522]
Abstract
Background The blue-light hazard is a well-documented entity addressing the detrimental health effects of high-energy visible light photons in the range of 305 nm - 450 nm. Radiologists spend long hours in front of multiple light-emitting diode (LED)-based diagnostic monitors emitting blue light, predisposing them to potentially higher blue-light dosages than other health professionals. Objectives The authors aimed to quantify the blue light that radiology registrars are exposed to in daily viewing of diagnostic monitors and compared this with international occupational safety standards. Method A limited cross-sectional observational study was conducted. Four radiology registrars at two academic hospitals in Bloemfontein from 01 October 2021 to 30 November 2021 participated. Diagnostic monitor viewing times on a standard workday were determined. Different image modalities obtained from 01 June 2019 to 30 November 2019 were assessed, and blue-light radiance was determined using a spectroscope and image analysis software. Blue-light radiance values were compared with international safety standards. Results Radiology registrars spent on average 380 min in front of a diagnostic display unit daily. Blue-light radiance from diagnostic monitors was elevated in higher-intensity images such as chest radiographs and lower for darker images like MRI brain studies. The total blue-light radiance from diagnostic display units was more than 10 000 times below the recommended threshold value for blue-light exposure. Conclusion Blue-light radiance from diagnostic displays measured well below the recommended values for occupational safety. Hence, blue-light exposure from diagnostic monitors does not significantly add to the occupational health burden of radiologists. Contribution Despite spending long hours in front of diagnostic monitors, radiologists' exposure to effective blue-light radiance from monitors was far below hazardous values. This suggests that blue-light exposure from diagnostic monitors does not increase the occupational health burden of radiologists.
Affiliation(s)
- Mari Wentzel: Department of Clinical Imaging Science, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa
- Jacques Janse van Rensburg: Department of Clinical Imaging Science, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa
- Jacobus J. Terblans: Department of Physics, Faculty of Natural and Agricultural Sciences, University of the Free State, Bloemfontein, South Africa

43
Pernia M, Kar M, Montes-Lourido P, Sadagopan S. Pupillometry to Assess Auditory Sensation in Guinea Pigs. J Vis Exp 2023:10.3791/64581. [PMID: 36688548] [PMCID: PMC9929667] [DOI: 10.3791/64581]
Abstract
Noise exposure is a leading cause of sensorineural hearing loss. Animal models of noise-induced hearing loss have generated mechanistic insight into the underlying anatomical and physiological pathologies of hearing loss. However, relating behavioral deficits observed in humans with hearing loss to behavioral deficits in animal models remains challenging. Here, pupillometry is proposed as a method that will enable the direct comparison of animal and human behavioral data. The method is based on a modified oddball paradigm - habituating the subject to the repeated presentation of a stimulus and intermittently presenting a deviant stimulus that varies in some parametric fashion from the repeated stimulus. The fundamental premise is that if the change between the repeated and deviant stimulus is detected by the subject, it will trigger a pupil dilation response that is larger than that elicited by the repeated stimulus. This approach is demonstrated using a vocalization categorization task in guinea pigs, an animal model widely used in auditory research, including in hearing loss studies. By presenting vocalizations from one vocalization category as standard stimuli and a second category as oddball stimuli embedded in noise at various signal-to-noise ratios, it is demonstrated that the magnitude of pupil dilation in response to the oddball category varies monotonically with the signal-to-noise ratio. Growth curve analyses can then be used to characterize the time course and statistical significance of these pupil dilation responses. In this protocol, detailed procedures for acclimating guinea pigs to the setup, conducting pupillometry, and evaluating/analyzing data are described. Although this technique is demonstrated in normal-hearing guinea pigs in this protocol, the method may be used to assess the sensory effects of various forms of hearing loss within each subject. These effects may then be correlated with concurrent electrophysiological measures and post-hoc anatomical observations.
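The growth curve analysis mentioned here (and in the Neagu et al. entry above) typically fits orthogonal polynomial time terms within a mixed-effects model. A stripped-down sketch that recovers only the fixed-effect shape (intercept, linear, quadratic) from a single averaged dilation trace, under the assumption that a plain least-squares quadratic fit is an acceptable simplification:

```python
import numpy as np

def growth_curve_fit(t, dilation, degree=2):
    """Fit a low-order polynomial to a pupil dilation time course.

    Full growth curve analysis estimates polynomial time terms within a
    mixed-effects model; this sketch fits one averaged trace and returns
    coefficients ordered [intercept, linear, quadratic, ...].
    """
    t = np.asarray(t, dtype=float)
    # Rescale time to [-1, 1] so coefficients are comparable across trials.
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
    return np.polynomial.polynomial.polyfit(x, dilation, degree)
```

The quadratic coefficient then summarizes the rise-and-fall shape of the dilation response, and the linear term its overall tilt, which is how such terms are usually interpreted in pupillometric growth curve analyses.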
Affiliation(s)
- Marianny Pernia: Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Center for Neural Basis of Cognition, University of Pittsburgh
- Pilar Montes-Lourido: Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Department of Transfer and Innovation, USC University Hospital Complex (CHUS), University of Santiago de Compostela
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh; Center for Neuroscience, University of Pittsburgh; Department of Bioengineering, University of Pittsburgh; Center for Neural Basis of Cognition, University of Pittsburgh; Department of Communication Science and Disorders, University of Pittsburgh

44
Dercksen TT, Widmann A, Wetzel N. Salient omissions-pupil dilation in response to unexpected omissions of sound and touch. Front Psychiatry 2023; 14:1143931. [PMID: 37032955] [PMCID: PMC10077953] [DOI: 10.3389/fpsyt.2023.1143931]
Abstract
Introduction Recent theories describe perception as an inferential process based on internal predictive models that are adjusted by means of prediction violations (prediction error). To study and demonstrate predictive processing in the brain, the use of unexpected stimulus omissions has been suggested as a promising approach, since the evoked brain responses are uncontaminated by responses to stimuli. Here, we aimed to investigate the pupil's response to unexpected stimulus omissions in order to better understand the surprise and orienting of attention resulting from prediction violation. So far, only a few studies have used omissions in pupillometry research, and results have been inconsistent. Methods This study adapted an EEG paradigm that has been shown to elicit omission responses in the auditory and somatosensory modalities. Healthy adults pressed a button at their own pace, which resulted in the presentation of sounds or tactile stimuli in either 88%, 50%, or 0% (motor-control) of cases. Pupil size was recorded continuously and averaged to analyze the pupil dilation response associated with each condition. Results Omission responses were observed in both modalities in the 88% condition compared to motor-control. Similar pupil omission responses were observed between modalities, suggesting modality-unspecific activation of the underlying brain circuits. Discussion In combination with previous omission studies using EEG, the findings demonstrate predictive models in brain processing and point to the involvement of subcortical structures in the omission response. Our pupillometry approach is especially suitable for studying sensory prediction in vulnerable populations within the psychiatric field.
Affiliation(s)
- Tjerk T. Dercksen: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Andreas Widmann: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Nicole Wetzel: Research Group Neurocognitive Development, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Stendal, Germany

45
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. [PMID: 37941413] [PMCID: PMC10637154] [DOI: 10.1177/23312165231205107]
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel

46
O’Leary RM, Neukam J, Hansen TA, Kinney AJ, Capach N, Svirsky MA, Wingfield A. Strategic Pauses Relieve Listeners from the Effort of Listening to Fast Speech: Data Limited and Resource Limited Processes in Narrative Recall by Adult Users of Cochlear Implants. Trends Hear 2023; 27:23312165231203514. [PMID: 37941344] [PMCID: PMC10637151] [DOI: 10.1177/23312165231203514]
Abstract
Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.
Affiliation(s)
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Jonathan Neukam
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Nicole Capach
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA

47
Książek P, Zekveld AA, Fiedler L, Kramer SE, Wendt D. Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise. Trends Hear 2023; 27:23312165231153280. [PMID: 36938784] [PMCID: PMC10028670] [DOI: 10.1177/23312165231153280]
Abstract
Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, and repeat the sentence-final word after presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No), on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, SNR (-4 dB versus +1 dB SNR) increased the middle and late component values. Increasing memory demands (Recall) progressively increased trial baseline and steepened decrease of the late component's values. Trial baseline increased most steeply in the condition of +1 dB SNR with recall. The findings suggest that adding a recall to the auditory task alters effort allocation for listening. Listeners are dynamically re-allocating effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.
Affiliation(s)
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark

48
Zhang Y, Malaval F, Lehmann A, Deroche MLD. Luminance effects on pupil dilation in speech-in-noise recognition. PLoS One 2022; 17:e0278506. [PMID: 36459511] [PMCID: PMC9718387] [DOI: 10.1371/journal.pone.0278506]
Abstract
There is an increasing interest in the field of audiology and speech communication to measure the effort that it takes to listen in noisy environments, with obvious implications for populations suffering from hearing loss. Pupillometry offers one avenue to make progress in this enterprise but important methodological questions remain to be addressed before such tools can serve practical applications. Typically, cocktail-party situations may occur in less-than-ideal lighting conditions, e.g. a pub or a restaurant, and it is unclear how robust pupil dynamics are to luminance changes. In this study, we first used a well-known paradigm where sentences were presented at different signal-to-noise ratios (SNR), all conducive of good intelligibility. This enabled us to replicate findings, e.g. a larger and later peak pupil dilation (PPD) at adverse SNR, or when the sentences were misunderstood, and to investigate the dependency of the PPD on sentence duration. A second experiment reiterated two of the SNR levels, 0 and +14 dB, but measured at 0, 75, and 220 lux. The results showed that the impact of luminance on the SNR effect was non-monotonic (sub-optimal in darkness or in bright light), and as such, there is no trivial way to derive pupillary metrics that are robust to differences in background light, posing considerable constraints for applications of pupillometry in daily life. Our findings raise an under-examined but crucial issue when designing and understanding listening effort studies using pupillometry, and offer important insights to future clinical application of pupillometry across sites.
Affiliation(s)
- Yue Zhang
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Florian Malaval
- Department of Otolaryngology, McGill University, Montreal, Canada
- Alexandre Lehmann
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Mickael L. D. Deroche
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Department of Psychology, Concordia University, Montreal, Canada

49
Relaño-Iborra H, Wendt D, Neagu MB, Kressner AA, Dau T, Bækgaard P. Baseline pupil size encodes task-related information and modulates the task-evoked response in a speech-in-noise task. Trends Hear 2022; 26:23312165221134003. [PMID: 36426573] [PMCID: PMC9703509] [DOI: 10.1177/23312165221134003]
Abstract
Pupillometry data are commonly reported relative to a baseline value recorded in a controlled pre-task condition. In this study, the influence of the experimental design and the preparatory processing related to task difficulty on the baseline pupil size was investigated during a speech intelligibility in noise paradigm. Furthermore, the relationship between the baseline pupil size and the temporal dynamics of the pupil response was assessed. The analysis revealed strong effects of block presentation order, within-block sentence order and task difficulty on the baseline values. An interaction between signal-to-noise ratio and block order was found, indicating that baseline values reflect listener expectations arising from the order in which the different blocks were presented. Furthermore, the baseline pupil size was found to affect the slope, delay and curvature of the pupillary response as well as the peak pupil dilation. This suggests that baseline correction might be sufficient when reporting pupillometry results in terms of mean pupil dilation only, but not when a more complex characterization of the temporal dynamics of the response is considered. By clarifying which factors affect baseline pupil size and how baseline values interact with the task-evoked response, the results from the present study can contribute to a better interpretation of the pupillary response as a marker of cognitive processing.
Affiliation(s)
- Helia Relaño-Iborra
- Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Dorothea Wendt
- Eriksholm Research Center, Oticon, 3070 Snekkersten, Denmark
- Mihaela Beatrice Neagu
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Abigail Anne Kressner
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Copenhagen Hearing and Balance Center, Rigshospitalet, 2100 Copenhagen, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Per Bækgaard
- Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

50
Micula A, Rönnberg J, Książek P, Murmu Nielsen R, Wendt D, Fiedler L, Ng EHN. A Glimpse of Memory Through the Eyes: Pupillary Responses Measured During Encoding Reflect the Likelihood of Subsequent Memory Recall in an Auditory Free Recall Test. Trends Hear 2022; 26:23312165221130581. [PMID: 36305085] [PMCID: PMC9620000] [DOI: 10.1177/23312165221130581]
Abstract
The aim of the current study was to investigate whether task-evoked pupillary responses measured during encoding, individual working memory capacity and noise reduction in hearing aids were associated with the likelihood of subsequently recalling an item in an auditory free recall test combined with pupillometry. Participants with mild to moderately severe symmetrical sensorineural hearing loss (n = 21) were included. The Sentence-final Word Identification and Recall (SWIR) test was administered in a background noise composed of sixteen talkers with noise reduction in hearing aids activated and deactivated. The task-evoked peak pupil dilation (PPD) was measured. The Reading Span (RS) test was used as a measure of individual working memory capacity. Larger PPD at a single trial level was significantly associated with higher likelihood of subsequently recalling a word, presumably reflecting the intensity of attention devoted during encoding. There was no clear evidence of a significant relationship between working memory capacity and subsequent memory recall, which may be attributed to the SWIR test and RS test being administered in different modalities, as well as differences in task characteristics. Noise reduction did not have a significant effect on subsequent memory recall. This may be due to the background noise not having a detrimental effect on attentional processing at the favorable signal-to-noise ratio levels at which the test was conducted.
Affiliation(s)
- Andreea Micula
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Oticon A/S, Smørum, Denmark