1. de Gee JW, Mridha Z, Hudson M, Shi Y, Ramsaywak H, Smith S, Karediya N, Thompson M, Jaspe K, Jiang H, Zhang W, McGinley MJ. Strategic stabilization of arousal boosts sustained attention. Curr Biol 2024:S0960-9822(24)01009-1. [PMID: 39151432] [DOI: 10.1016/j.cub.2024.07.070]
Abstract
Arousal and motivation interact to profoundly influence behavior. For example, experience tells us that we have some capacity to control our arousal when appropriately motivated, such as staying awake while driving a motor vehicle. However, little is known about how arousal and motivation jointly influence decision computations, including whether and how animals, such as rodents, adapt their arousal state to their needs. Here, we developed and show results from an auditory, feature-based, sustained-attention task with intermittently shifting task utility. We use pupil size to estimate arousal across a wide range of states and apply tailored signal-detection theoretic, hazard function, and accumulation-to-bound modeling approaches in a large cohort of mice. We find that pupil-linked arousal and task utility both have major impacts on multiple aspects of task performance. Although substantial arousal fluctuations persist across utility conditions, mice partially stabilize their arousal near an intermediate and optimal level when task utility is high. Behavioral analyses show that multiple elements of behavior improve during high task utility and that arousal influences some, but not all, of them. Specifically, arousal influences the likelihood and timescale of sensory evidence accumulation but not the quantity of evidence accumulated per time step while attending. In sum, the results establish specific decision-computational signatures of arousal, motivation, and their interaction in attention. In so doing, we provide an experimental and analysis framework for studying arousal self-regulation in neurotypical brains and in diseases such as attention-deficit/hyperactivity disorder.
Affiliation(s)
- Jan Willem de Gee: Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA; Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital, 1250 Moursund Street, Houston, TX 77030, USA; Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, the Netherlands; Research Priority Area Brain and Cognition, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, the Netherlands
- Zakir Mridha, Marissa Hudson, Yanchen Shi, Hannah Ramsaywak, Spencer Smith, Nishad Karediya, Matthew Thompson, Kit Jaspe, Hong Jiang, Wenhao Zhang: Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA; Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital, 1250 Moursund Street, Houston, TX 77030, USA
- Matthew J McGinley: Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA; Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital, 1250 Moursund Street, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
2. Xu S, Zhang H, Fan J, Jiang X, Zhang M, Guan J, Ding H, Zhang Y. Auditory Challenges and Listening Effort in School-Age Children With Autism: Insights From Pupillary Dynamics During Speech-in-Noise Perception. J Speech Lang Hear Res 2024;67:2410-2453. [PMID: 38861391] [DOI: 10.1044/2024_jslhr-23-00553]
Abstract
PURPOSE This study aimed to investigate challenges in speech-in-noise (SiN) processing faced by school-age children with autism spectrum conditions (ASCs) and their impact on listening effort. METHOD Participants, including 23 Mandarin-speaking children with ASCs and 19 age-matched neurotypical (NT) peers, underwent sentence recognition tests in both quiet and noisy conditions, with a speech-shaped steady-state noise masker presented at 0-dB signal-to-noise ratio in the noisy condition. Recognition accuracy rates and task-evoked pupil responses were compared to assess behavioral performance and listening effort during auditory tasks. RESULTS No main effect of group was found on accuracy rates. Instead, significant effects emerged for autistic trait scores, listening conditions, and their interaction, indicating that higher trait scores were associated with poorer performance in noise. Pupillometric data revealed significantly larger and earlier peak dilations, along with more varied pupillary dynamics in the ASC group relative to the NT group, especially under noisy conditions. Importantly, the ASC group's peak dilation in quiet mirrored that of the NT group in noise. However, the ASC group consistently exhibited smaller mean dilations than the NT group. CONCLUSIONS Pupillary responses suggest a different resource allocation pattern in ASCs: An initial sharper and larger dilation may signal an intense, narrowed resource allocation, likely linked to heightened arousal, engagement, and cognitive load, whereas a subsequent faster tail-off may indicate a greater decrease in resource availability and engagement, or a quicker release of arousal and cognitive load. The presence of noise further accentuates this pattern. This highlights the unique SiN processing challenges children with ASCs may face, underscoring the importance of a nuanced, individual-centric approach for interventions and support.
Affiliation(s)
- Suyun Xu: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China; National Research Centre for Language and Well-Being, Shanghai, China
- Hua Zhang, Juan Fan: Department of Child and Adolescent Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
- Xiaoming Jiang: Institute of Linguistics, Shanghai International Studies University, China
- Minyue Zhang, Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China; National Research Centre for Language and Well-Being, Shanghai, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
3. Keur-Huizinga L, Kramer SE, de Geus EJC, Zekveld AA. A Multimodal Approach to Measuring Listening Effort: A Systematic Review on the Effects of Auditory Task Demand on Physiological Measures and Their Relationship. Ear Hear 2024:00003446-990000000-00297. [PMID: 38880960] [PMCID: PMC11325958] [DOI: 10.1097/aud.0000000000001508]
Abstract
OBJECTIVES Listening effort involves the mental effort required to perceive an auditory stimulus, for example in noisy environments. Prolonged increased listening effort, for example due to impaired hearing ability, may increase the risk of health complications. It is therefore important to identify valid and sensitive measures of listening effort. Physiological measures have been shown to be sensitive to auditory task demand manipulations and are considered to reflect changes in listening effort. Such measures include pupil dilation, alpha power, skin conductance level, and heart rate variability. The aim of the current systematic review was to provide an overview of listening effort studies that used multiple physiological measures. The two main questions were: (1) what is the effect of changes in auditory task demand on simultaneously acquired physiological measures from various modalities? and (2) what is the relationship between the responses in these physiological measures? DESIGN Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, relevant articles were sought in PubMed, PsycInfo, and Web of Science and by examining the references of included articles. Search iterations with different combinations of psychophysiological measures were performed in conjunction with listening effort-related search terms. Quality was assessed using the Appraisal Tool for Cross-Sectional Studies. RESULTS A total of 297 articles were identified from three databases, of which 27 were included. One additional article was identified from reference lists. Of the 28 included articles, 16 included an analysis of the relationship between the physiological measures. The overall quality of the included studies was reasonable. CONCLUSIONS The included studies showed that most of the physiological measures either show no effect of auditory task demand manipulations or a consistent effect in the expected direction. For example, pupil dilation increased, pre-ejection period decreased, and skin conductance level increased with increasing auditory task demand. Most of the relationships between the responses of these physiological measures were nonsignificant or weak. The physiological measures varied in their sensitivity to auditory task demand manipulations. One identified knowledge gap was that the included studies mostly used tasks with high performance levels, resulting in an underrepresentation of physiological changes at lower performance levels. This makes it difficult to capture how the physiological responses behave across the full psychometric curve. Our results support the Framework for Understanding Effortful Listening and the need for a multimodal approach to listening effort. We furthermore discuss focus points for future studies.
Affiliation(s)
- Laura Keur-Huizinga: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands; Department of Biological Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sophia E Kramer: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Eco J C de Geus: Department of Biological Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Adriana A Zekveld: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
4. Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024;36:1325-1340. [PMID: 38683698] [DOI: 10.1162/jocn_a_02172]
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measurement of pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult but lower for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech continued to be reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, different from arousal-related systems regulating the pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
Affiliation(s)
- Björn Herrmann, Jennifer D Ryan: Rotman Research Institute, North York, Ontario, Canada; University of Toronto, Ontario, Canada
5. Baror S, Baumgarten TJ, He BJ. Neural Mechanisms Determining the Duration of Task-free, Self-paced Visual Perception. J Cogn Neurosci 2024;36:756-775. [PMID: 38357932] [DOI: 10.1162/jocn_a_02131]
Abstract
Humans spend hours each day spontaneously engaging with visual content, free from specific tasks and at their own pace. Currently, the brain mechanisms determining the duration of self-paced perceptual behavior remain largely unknown. Here, participants viewed naturalistic images under task-free settings and self-paced each image's viewing duration while undergoing EEG and pupillometry recordings. Across two independent data sets, we observed large inter- and intra-individual variability in viewing duration. However, beyond an image's presentation order and category, specific image content had no consistent effects on spontaneous viewing duration across participants. Overall, longer viewing durations were associated with sustained enhanced posterior positivity and anterior negativity in the ERPs. Individual-specific variations in the spontaneous viewing duration were consistently correlated with evoked EEG activity amplitudes and pupil size changes. By contrast, presentation order was selectively correlated with baseline alpha power and baseline pupil size. Critically, spontaneous viewing duration was strongly predicted by the temporal stability in neural activity patterns starting as early as 350 msec after image onset, suggesting that early neural stability is a key predictor for sustained perceptual engagement. Interestingly, neither bottom-up nor top-down predictions about image category influenced spontaneous viewing duration. Overall, these results suggest that individual-specific factors can influence perceptual processing at a surprisingly early time point and influence the multifaceted ebb and flow of spontaneous human perceptual behavior in naturalistic settings.
Affiliation(s)
- Shira Baror: New York University Grossman School of Medicine; Hebrew University of Jerusalem
- Thomas J Baumgarten: New York University Grossman School of Medicine; Heinrich Heine University, Düsseldorf
- Biyu J He: New York University Grossman School of Medicine
6. Cody P, Kumar M, Tzounopoulos T. Cortical Zinc Signaling Is Necessary for Changes in Mouse Pupil Diameter That Are Evoked by Background Sounds with Different Contrasts. J Neurosci 2024;44:e0939232024. [PMID: 38242698] [PMCID: PMC10941062] [DOI: 10.1523/jneurosci.0939-23.2024]
Abstract
Luminance-independent changes in pupil diameter (PD) during wakefulness influence and are influenced by neuromodulatory, neuronal, and behavioral responses. However, it is unclear whether changes in neuromodulatory activity in a specific brain area are necessary for the associated changes in PD or whether some different mechanisms cause parallel fluctuations in both PD and neuromodulation. To answer this question, we simultaneously recorded PD and cortical neuronal activity in male and female mice. Namely, we measured PD and neuronal activity during adaptation to sound contrast, which is a well-described adaptation conserved in many species and brain areas. In the primary auditory cortex (A1), increases in the variability of sound level (contrast) induce a decrease in the slope of the neuronal input-output relationship, neuronal gain, which depends on cortical neuromodulatory zinc signaling. We found a previously unknown modulation of PD by changes in background sensory context: high stimulus contrast sounds evoke larger increases in evoked PD compared with low-contrast sounds. To explore whether these changes in evoked PD are controlled by cortical neuromodulatory zinc signaling, we imaged single-cell neural activity in A1, manipulated zinc signaling in the cortex, and assessed PD in the same awake mouse. We found that cortical synaptic zinc signaling is necessary for increases in PD during high-contrast background sounds compared with low-contrast sounds. This finding advances our knowledge about how cortical neuromodulatory activity affects PD changes and thus advances our understanding of the brain states, circuits, and neuromodulatory mechanisms that can be inferred from pupil size fluctuations.
Affiliation(s)
- Patrick Cody: Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Manoj Kumar: Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Thanos Tzounopoulos: Department of Otolaryngology, Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15261; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
7. Ershaid H, Lizarazu M, McLaughlin D, Cooke M, Simantiraki O, Koutsogiannaki M, Lallier M. Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions. Cortex 2024;172:54-71. [PMID: 38215511] [DOI: 10.1016/j.cortex.2023.11.018]
Abstract
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and in distinct adverse listening conditions. Forty-nine normal-hearing adults listened to sentences produced casually, presented in quiet and in two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were recorded with electroencephalography, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of the adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, reading and phonological skills of participants were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition. Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Affiliation(s)
- Hadeel Ershaid, Mikel Lizarazu, Drew McLaughlin: Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Martin Cooke: Ikerbasque, Basque Science Foundation, Bilbao, Spain
- Marie Lallier: Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Science Foundation, Bilbao, Spain
8. Giuliani NP, Venkitakrishnan S, Wu YH. Input-related demands: vocoded sentences evoke different pupillometrics and subjective listening effort than sentences in speech-shaped noise. Int J Audiol 2024;63:199-206. [PMID: 36519812] [PMCID: PMC10947987] [DOI: 10.1080/14992027.2022.2150901]
Abstract
OBJECTIVES The Framework for Understanding Effortful Listening (FUEL) suggests five input-related demands can alter listening effort: source, transmission, listener, message and context factors. We hypothesised that vocoded sentences represented a source factor degradation and sentences in speech-shaped noise represented a transmission factor degradation. We used pupillometry and a subjective scale to examine our hypothesis. DESIGN Participants listened to vocoded sentences and sentences in speech-shaped noise at several difficulty levels designed to produce similar word recognition abilities; they also listened to unprocessed sentences. Within-participant pupillometrics and subjective listening effort were analysed. Post-hoc analyses were performed to examine if word recognition accuracy differentially influenced pupil responses. STUDY SAMPLE Twenty young adults with normal hearing. RESULTS Baseline pupil diameter was significantly smaller, peak pupil dilation was significantly larger, peak pupil dilation latency was significantly shorter, and subjective listening effort was significantly greater for the vocoded sentences than the sentences in noise. Word recognition ability also affected pupillometrics, but only for the vocoded sentences. CONCLUSIONS Our findings suggest that source factor degradations result in greater listening effort than transmission factor degradations. Future research should address how clinical interventions tailored towards different input-related demands may lead to reduced listening effort and improve patient outcomes.
Affiliation(s)
- Nicholas P. Giuliani: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Soumya Venkitakrishnan: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Yu-Hsiang Wu: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
9. Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024;15:1345619. [PMID: 38375107] [PMCID: PMC10875108] [DOI: 10.3389/fpsyg.2024.1345619]
Abstract
Introduction Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. Discussion These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
Affiliation(s)
- Laura P. Fitzgerald: Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Gayle DeDe: Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Jing Shen: Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
10. Yang T, He Y, Wu L, Wang H, Wang X, Li Y, Guo Y, Wu S, Liu X. The effects of object size on spatial orientation: an eye movement study. Front Neurosci 2023;17:1197618. [PMID: 38027477] [PMCID: PMC10668018] [DOI: 10.3389/fnins.2023.1197618]
Abstract
Introduction The processing of visual information in the human brain is divided into two streams, namely, the dorsal and ventral streams, object identification is related to the ventral stream and motion processing is related to the dorsal stream. Object identification is interconnected with motion processing, object size was found to affect the information processing of motion characteristics in uniform linear motion. However, whether the object size affects the spatial orientation is still unknown. Methods Thirty-eight college students were recruited to participate in an experiment based on the spatial visualization dynamic test. Eyelink 1,000 Plus was used to collect eye movement data. The final direction difference (the difference between the final moving direction of the target and the final direction of the moving target pointing to the destination point), rotation angle (the rotation angle of the knob from the start of the target movement to the moment of key pressing) and eye movement indices under conditions of different object sizes and motion velocities were compared. Results The final direction difference and rotation angle under the condition of a 2.29°-diameter moving target and a 0.76°-diameter destination point were significantly smaller than those under the other conditions (a 0.76°-diameter moving target and a 0.76°-diameter destination point; a 0.76°-diameter moving target and a 2.29°-diameter destination point). The average pupil size under the condition of a 2.29°-diameter moving target and a 0.76°-diameter destination point was significantly larger than the average pupil size under other conditions (a 0.76°-diameter moving target and a 0.76°-diameter destination point; a 0.76°-diameter moving target and a 2.29°-diameter destination point). 
Discussion A relatively large moving target can resist the landmark attraction effect in spatial orientation, and the influence of object size on spatial orientation may originate from differences in cognitive resource consumption. The present study enriches the interaction theory of the processing of object characteristics and motion characteristics and provides new ideas for the application of eye movement technology in the examination of spatial orientation ability.
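The "final direction difference" described above is an angular error between the target's final heading and the bearing from the target's final position to the destination point. A minimal Python sketch of that computation (illustrative only; the function name and coordinate conventions are assumptions, not the authors' analysis code):

```python
import math

def final_direction_difference(heading_deg, target_pos, destination):
    """Angular difference (degrees) between the target's final moving
    direction and the bearing from the target to the destination point."""
    dx = destination[0] - target_pos[0]
    dy = destination[1] - target_pos[1]
    bearing_deg = math.degrees(math.atan2(dy, dx))
    # Wrap the signed difference into [-180, 180], then take its magnitude.
    diff = (heading_deg - bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff)
```

For example, a target at the origin heading 45° toward a destination due east yields a 45° difference, while a heading that points straight at the destination yields 0°.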
Affiliation(s)
- Tianqi Yang, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Yang He, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Lin Wu, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Hui Wang, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Xiuchao Wang, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Yahong Li, Central Theater Command Air Force Hospital of PLA, Datong, China
- Yaning Guo, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Shengjun Wu, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China
- Xufeng Liu, Department of Military Medical Psychology, Air Force Medical University, Xi’an, China

11
Zekveld AA, Pielage H, Versfeld NJ, Kramer SE. The Influence of Hearing Loss on the Pupil Response to Degraded Speech. J Speech Lang Hear Res 2023; 66:4083-4099. [PMID: 37699194] [DOI: 10.1044/2023_jslhr-23-00093]
Abstract
PURPOSE Current evidence regarding the influence of hearing loss on the pupil response elicited by speech perception is inconsistent. This might be partially due to confounding effects of age. This study aimed to compare pupil responses in age-matched groups of normal-hearing (NH) and hard of hearing (HH) listeners during listening to speech. METHOD We tested the baseline pupil size and the mean and peak pupil dilation response of 17 NH participants (mean age = 46 years; range: 20-62 years) and 17 HH participants (mean age = 45 years; range: 20-63 years) who were pairwise matched on age and educational level. Participants performed three speech perception tasks at a 50% intelligibility level: noise-vocoded speech and speech masked with either stationary noise or interfering speech. They also listened to speech presented in quiet. RESULTS Hearing loss was associated with poorer speech perception, except for noise-vocoded speech. In contrast to NH participants, performance of HH participants did not improve across trials in the interfering speech condition, and it decreased for speech in stationary noise. HH participants had a smaller mean pupil dilation in the degraded speech conditions than NH participants, but not for speech in quiet. They also had a steeper decline in baseline pupil size across trials. The baseline pupil size was smaller for noise-vocoded speech than for the other conditions. The normalized data showed an additional group effect on the baseline pupil response. CONCLUSIONS Hearing loss is associated with a smaller pupil response and a steeper decline in baseline pupil size during the perception of degraded speech. This suggests that HH participants had difficulty sustaining their effort investment and performance across the test session.
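The three pupil measures compared in this study (baseline pupil size, mean pupil dilation, peak pupil dilation) are typically derived per trial from a pre-stimulus baseline window and a baseline-corrected trace. A schematic stdlib-Python version (a sketch only; the study's exact windows, sampling rate, and normalization differ):

```python
from statistics import mean

def pupil_metrics(trace, baseline_samples):
    """Baseline pupil size plus baseline-corrected mean and peak dilation.

    trace            -- pupil-diameter samples for one trial
    baseline_samples -- number of pre-stimulus samples used as the baseline
    """
    baseline = mean(trace[:baseline_samples])
    corrected = [s - baseline for s in trace[baseline_samples:]]
    return {
        "baseline": baseline,
        "mean_dilation": mean(corrected),
        "peak_dilation": max(corrected),
    }
```

For a trace [2, 2, 2, 2, 2.5, 3.0, 2.5] with a 4-sample baseline, the baseline is 2.0 and the peak dilation 1.0.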
Affiliation(s)
- Adriana A Zekveld, Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
- Hidde Pielage, Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
- Niek J Versfeld, Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
- Sophia E Kramer, Ear & Hearing Section, Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VU University medical center Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands

12
Cui ME, Herrmann B. Eye Movements Decrease during Effortful Speech Listening. J Neurosci 2023; 43:5856-5869. [PMID: 37491313] [PMCID: PMC10423048] [DOI: 10.1523/jneurosci.0240-23.2023]
Abstract
Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assessing listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants from both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for assessing listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful.
SIGNIFICANCE STATEMENT Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used measure but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.
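One common way to quantify the "spatial gaze dispersion" reported above is the root-mean-square distance of fixation positions from their centroid; the paper's exact metric may differ, so treat this as an illustrative sketch:

```python
import math

def gaze_dispersion(fixations):
    """RMS distance of fixation positions (x, y) from their centroid.

    A smaller value means gaze stayed closer to one spot, i.e. lower
    spatial dispersion.
    """
    xs = [f[0] for f in fixations]
    ys = [f[1] for f in fixations]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in fixations) / len(fixations)
    )
```

Two fixations at (0, 0) and (2, 0) give a dispersion of 1.0; under the paper's hypothesis, this value would shrink as speech masking increases.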
Affiliation(s)
- M Eric Cui, Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Björn Herrmann, Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada

13
Lanzilotti C, Andéol G, Micheyl C, Scannella S. Cocktail party training induces increased speech intelligibility and decreased cortical activity in bilateral inferior frontal gyri. A functional near-infrared study. PLoS One 2022; 17:e0277801. [PMID: 36454948] [PMCID: PMC9714910] [DOI: 10.1371/journal.pone.0277801]
Abstract
The human brain networks responsible for selectively listening to a voice amid other talkers remain to be clarified. The present study aimed to investigate relationships between cortical activity and performance in a speech-in-speech task, before (Experiment I) and after training-induced improvements (Experiment II). In Experiment I, 74 participants performed a speech-in-speech task while their cortical activity was measured using a functional near-infrared spectroscopy (fNIRS) device. One target talker and one masker talker were simultaneously presented at three target-to-masker ratios (TMRs): adverse, intermediate and favorable. Behavioral results show that performance increased monotonically with TMR for some participants, while for others it failed to decrease, or even improved, in the adverse-TMR condition. On the neural level, an extensive brain network including frontal (left prefrontal cortex, right dorsolateral prefrontal cortex and bilateral inferior frontal gyri) and temporal (bilateral auditory cortex) regions was more solicited by the intermediate condition than by the other two. Additionally, activity in the bilateral frontal gyri and left auditory cortex was positively correlated with behavioral performance in the adverse-TMR condition. In Experiment II, 27 participants whose performance was poorest in the adverse-TMR condition of Experiment I were trained to improve performance in that condition. Results show significant performance improvements along with decreased activity in the bilateral inferior frontal gyri, the right dorsolateral prefrontal cortex, the left inferior parietal cortex and the right auditory cortex in the adverse-TMR condition after training. Arguably, the lower neural activity reflects more efficient masker inhibition after speech-in-speech training.
Because speech-in-noise tasks also engage frontal and temporal regions, we suggest that, regardless of the type of masking (speech or noise), the complexity of the task prompts the recruitment of a similar brain network. Furthermore, the initially substantial cognitive recruitment is reduced following training, yielding an economy of cognitive resources.
Affiliation(s)
- Cosima Lanzilotti, Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France; ISAE-SUPAERO, Université de Toulouse, Toulouse, France; Thales SIX GTS France, Gennevilliers, France
- Guillaume Andéol, Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France

14
Shen J, Fitzgerald LP, Kulick ER. Interactions between acoustic challenges and processing depth in speech perception as measured by task-evoked pupil response. Front Psychol 2022; 13:959638. [PMID: 36389464] [PMCID: PMC9641013] [DOI: 10.3389/fpsyg.2022.959638]
Abstract
Speech perception under adverse conditions is a multistage process involving a dynamic interplay among acoustic, cognitive, and linguistic factors. Nevertheless, prior research has primarily examined the factors in this complex system in isolation. The primary goal of the present study was to examine the interaction between processing depth and the acoustic challenge of noise and its effect on processing effort during speech perception in noise. Two tasks were used to represent different depths of processing. The speech recognition task involved repeating back a sentence after auditory presentation (higher-level processing), while the tiredness judgment task entailed a subjective judgment of whether the speaker sounded tired (lower-level processing). The secondary goal of the study was to investigate whether the pupil response to alteration of dynamic pitch cues stems from difficult linguistic processing of speech content in noise or from a perceptual novelty effect due to the unnatural pitch contours. Task-evoked peak pupil response from two groups of younger adult participants with typical hearing was measured in two experiments. Both tasks (speech recognition and tiredness judgment) were implemented in both experiments, and stimuli were presented with background noise in Experiment 1 and without noise in Experiment 2. Increased peak pupil dilation was associated with deeper processing (i.e., the speech recognition task), particularly in the presence of background noise. Importantly, there was a non-additive interaction between noise and task, as demonstrated by the heightened peak pupil dilation to noise in the speech recognition task compared with the tiredness judgment task. Additionally, the peak pupil dilation data suggest that dynamic pitch alteration induced an increased perceptual novelty effect rather than reflecting effortful linguistic processing of the speech content in noise.
These findings extend current theories of speech perception under adverse conditions by demonstrating that the level of processing effort expended by a listener is influenced by the interaction between acoustic challenges and depth of linguistic processing. The study also provides a foundation for future work to investigate the effects of this complex interaction in clinical populations who experience both hearing and cognitive challenges.
Affiliation(s)
- Jing Shen, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Laura P. Fitzgerald, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Erin R. Kulick, Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA, United States

15
Zhou X, Burg E, Kan A, Litovsky RY. Investigating effortful speech perception using fNIRS and pupillometry measures. Curr Res Neurobiol 2022; 3:100052. [PMID: 36518346] [PMCID: PMC9743070] [DOI: 10.1016/j.crneur.2022.100052]
Abstract
The current study examined the neural mechanisms of mental effort and their relation to speech perception using functional near-infrared spectroscopy (fNIRS) in listeners with normal hearing (NH). Data were collected while participants listened and responded to unprocessed and degraded sentences, in which words were presented in grammatically correct or shuffled order. Effortful listening and task difficulty due to the stimulus manipulations were confirmed using a subjective questionnaire and a well-established objective measure of mental effort, pupillometry. fNIRS measures focused on cortical responses in two a priori regions of interest, the left auditory cortex (AC) and lateral frontal cortex (LFC), which are closely related to auditory speech perception and listening effort, respectively. We examined the relations between the two objective measures and behavioral measures of speech perception (task performance) and task difficulty. Results demonstrated that changes in pupil dilation were positively correlated with self-reported task difficulty and negatively correlated with task performance scores: as perceived task demands increased and task performance scores decreased, pupils dilated more. A significant negative correlation between the two behavioral measures was also found. fNIRS measures (cerebral oxygenation) in the left AC and LFC were both negatively correlated with self-reported task difficulty and positively correlated with task performance scores. These results suggest that pupillometry can index task demands and listening effort, whereas fNIRS measures in a similar paradigm seem to reflect speech processing, but not effort.
Affiliation(s)
- Xin Zhou, Waisman Center, University of Wisconsin Madison, WI, USA
- Emily Burg, Waisman Center, University of Wisconsin Madison, WI, USA; Department of Communication Science and Disorders, University of Wisconsin Madison, WI, USA
- Alan Kan, School of Engineering, Macquarie University, Sydney, NSW, Australia
- Ruth Y Litovsky, Waisman Center, University of Wisconsin Madison, WI, USA; Department of Communication Science and Disorders, University of Wisconsin Madison, WI, USA

16
Comparing methods of analysis in pupillometry: application to the assessment of listening effort in hearing-impaired patients. Heliyon 2022; 8:e09631. [PMID: 35734572] [PMCID: PMC9207619] [DOI: 10.1016/j.heliyon.2022.e09631]
17
Grant AM, Kousaie S, Coulter K, Gilbert AC, Baum SR, Gracco V, Titone D, Klein D, Phillips NA. Age of Acquisition Modulates Alpha Power During Bilingual Speech Comprehension in Noise. Front Psychol 2022; 13:865857. [PMID: 35548507] [PMCID: PMC9083356] [DOI: 10.3389/fpsyg.2022.865857]
Abstract
Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram (EEG) studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language (L2) proficiency and who varied in terms of age of L2 acquisition (AoA) from 0 (simultaneous bilinguals) to 15 years completed a speech perception in noise task. Participants were required to identify the final word of high and low semantically constrained auditory sentences such as "Stir your coffee with a spoon" vs. "Bob could have known about the spoon" in both of their languages and in both noise (multi-talker babble) and quiet during electrophysiological recording. We examined the effects of language, AoA, semantic constraint, and listening condition on participants' induced alpha power during speech comprehension. Our results show an increase in alpha power when participants were listening in their L2, suggesting that listening in an L2 requires additional attentional control compared to the first language, particularly early in processing during word identification. Additionally, despite similar proficiency across participants, our results suggest that under difficult processing demands, AoA modulates the amount of attention required to process the second language.
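The "induced alpha power" analyzed above is spectral power in roughly the 8-12 Hz band of the EEG. As a toy illustration of band power only (real EEG pipelines use multitaper or wavelet estimators on epoched, artifact-cleaned data, not this naive DFT), a stdlib-Python sketch:

```python
import math

def band_power(signal, fs, f_lo=8.0, f_hi=12.0):
    """Average spectral power across DFT bins inside [f_lo, f_hi] Hz.

    Naive O(n^2) DFT for illustration; fs is the sampling rate in Hz.
    """
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n ** 2  # normalized bin power
            count += 1
    return total / count if count else 0.0
```

A pure 10 Hz sinusoid sampled at 100 Hz concentrates its power inside the alpha band, whereas a 20 Hz sinusoid contributes essentially nothing there; an alpha-power increase under noisy listening would show up as a larger band value.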
Affiliation(s)
- Angela M Grant, Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Shanna Kousaie, Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Psychology, University of Ottawa, Ottawa, ON, Canada; Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Kristina Coulter, Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Annie C Gilbert, Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Shari R Baum, Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Vincent Gracco, School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, United States
- Debra Titone, Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Department of Psychology, McGill University, Montreal, QC, Canada
- Denise Klein, Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Natalie A Phillips, Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research and Jewish General Hospital, McGill University Memory Clinic, Jewish General Hospital, Montreal, QC, Canada

18
Zhou X, Sobczak GS, McKay CM, Litovsky RY. Effects of degraded speech processing and binaural unmasking investigated using functional near-infrared spectroscopy (fNIRS). PLoS One 2022; 17:e0267588. [PMID: 35468160] [PMCID: PMC9037936] [DOI: 10.1371/journal.pone.0267588]
Abstract
The present study aimed to investigate the effects of degraded speech perception and binaural unmasking using functional near-infrared spectroscopy (fNIRS). Normal hearing listeners were tested when attending to unprocessed or vocoded speech, presented to the left ear at two speech-to-noise ratios (SNRs). Additionally, by comparing monaural versus diotic masker noise, we measured binaural unmasking. Our primary research question was whether the prefrontal cortex and temporal cortex responded differently to varying listening configurations. Our a priori regions of interest (ROIs) were located at the left dorsolateral prefrontal cortex (DLPFC) and auditory cortex (AC). The left DLPFC has been reported to be involved in attentional processes when listening to degraded speech and in spatial hearing processing, while the AC has been reported to be sensitive to speech intelligibility. Comparisons of cortical activity between these two ROIs revealed significantly different fNIRS response patterns. Further, we showed a significant and positive correlation between self-reported task difficulty levels and fNIRS responses in the DLPFC, with a negative but non-significant correlation for the left AC, suggesting that the two ROIs played different roles in effortful speech perception. Our secondary question was whether activity within three sub-regions of the lateral PFC (LPFC) including the DLPFC was differentially affected by varying speech-noise configurations. We found significant effects of spectral degradation and SNR, and significant differences in fNIRS response amplitudes between the three regions, but no significant interaction between ROI and speech type, or between ROI and SNR. When attending to speech with monaural and diotic noises, participants reported the latter conditions being easier; however, no significant main effect of masker condition on cortical activity was observed. 
For cortical responses in the LPFC, a significant interaction between SNR and masker condition was observed. These findings suggest that binaural unmasking affects cortical activity through improving speech reception threshold in noise, rather than by reducing effort exerted.
Affiliation(s)
- Xin Zhou, Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Gabriel S. Sobczak, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, United States of America
- Colette M. McKay, The Bionics Institute of Australia, Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Ruth Y. Litovsky, Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America; Department of Communication Science and Disorders, University of Wisconsin-Madison, Madison, WI, United States of America; Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, United States of America

19
Avivi-Reich M, Sran RK, Schneider BA. Do Age and Linguistic Status Alter the Effect of Sound Source Diffuseness on Speech Recognition in Noise? Front Psychol 2022; 13:838576. [PMID: 35369266] [PMCID: PMC8965325] [DOI: 10.3389/fpsyg.2022.838576]
Abstract
One aspect of auditory scenes that has received very little attention is the level of diffuseness of sound sources. This aspect is of growing importance due to the increasing use of amplification systems. When an auditory stimulus is amplified and presented over multiple, spatially separated loudspeakers, the signal's timbre is altered due to comb filtering. In a previous study we examined how increasing the diffuseness of the sound sources might affect listeners' ability to recognize speech presented in different types of background noise. Listeners performed similarly when the target and the masker were presented via a similar number of loudspeakers. However, performance improved when the target was presented over a single loudspeaker (compact) and the masker over three spatially separated loudspeakers (diffuse), but worsened when the target was diffuse and the masker was compact. In the current study, we extended this research to examine whether the effects of timbre change with age and linguistic experience. Twenty-four older adults whose first language was English (Old-EFLs) and 24 younger adults whose second language was English (Young-ESLs) were asked to repeat nonsense sentences masked by Noise, Babble, or Speech, and their results were compared with those of the Young-EFLs previously tested. Participants were divided into two experimental groups: (1) a Compact-Target group, where the target sentences were presented over a single loudspeaker while the masker was presented over either three loudspeakers or a single loudspeaker; (2) a Diffuse-Target group, where the target sentences were diffuse while the masker was either compact or diffuse. The results indicate that target timbre has a negligible effect on thresholds when it matches the masker timbre in all three groups.
When there is a timbre contrast between target and masker, thresholds are significantly lower when the target is compact than when it is diffuse for all three listening groups in a Noise background. However, while this difference is maintained for the Young and Old-EFLs when the masker is Babble or Speech, speech reception thresholds in the Young-ESL group tend to be equivalent for all four combinations of target and masker timbre.
Affiliation(s)
- Meital Avivi-Reich, Department of Communication Arts, Sciences and Disorders, Brooklyn College, City University of New York, Brooklyn, NY, United States
- Rupinder Kaur Sran, Human Communication Lab, Department of Psychology, University of Toronto Mississauga, Toronto, ON, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Bruce A. Schneider, Human Communication Lab, Department of Psychology, University of Toronto Mississauga, Toronto, ON, Canada

20
Fluid intelligence and the locus coeruleus-norepinephrine system. Proc Natl Acad Sci U S A 2021; 118:2110630118. [PMID: 34764223] [DOI: 10.1073/pnas.2110630118]
Abstract
The last decade has seen significant progress identifying genetic and brain differences related to intelligence. However, there remain considerable gaps in our understanding of how the cognitive mechanisms that underpin intelligence map onto various brain functions. In this article, we argue that the locus coeruleus-norepinephrine system is essential for understanding the biological basis of intelligence. We review evidence suggesting that the locus coeruleus-norepinephrine system plays a central role at all levels of brain function, from metabolic processes to the organization of large-scale brain networks. We connect this evidence with our executive attention view of working-memory capacity and fluid intelligence and present analyses of baseline pupil size, an indicator of locus coeruleus activity. Using a latent variable approach, our analyses showed that a common executive attention factor predicted baseline pupil size. Additionally, the executive attention function of disengagement, not maintenance, uniquely predicted baseline pupil size. These findings suggest that the ability to control attention may be important for understanding how the cognitive mechanisms of fluid intelligence map onto the locus coeruleus-norepinephrine system. We discuss how further research is needed to better understand the relationships between fluid intelligence, the locus coeruleus-norepinephrine system, and functionally organized brain networks.
21
Stenbäck V, Marsja E, Hällgren M, Lyxell B, Larsby B. The Contribution of Age, Working Memory Capacity, and Inhibitory Control on Speech Recognition in Noise in Young and Older Adult Listeners. J Speech Lang Hear Res 2021; 64:4513-4523. [DOI: 10.1044/2021_jslhr-20-00251]
Abstract
Purpose The study aimed to investigate the relationship between speech recognition in noise, age, hearing ability, self-rated listening effort, inhibitory control (measured with the Swedish Hayling task), and working memory capacity (WMC; measured with the Reading Span test). Two speech materials were used: the Hagerman test with low semantic context and Hearing in Noise Test sentences with high semantic context, masked with either energetic or informational maskers. Method A mixed design was used. Twenty-four young normally hearing individuals (mean age = 25.6 years) and 24 older individuals with normal hearing for their age (mean age = 60.6 years) participated in the study. Speech recognition in noise for both speech materials and self-rated effort in all four background maskers were correlated with inhibitory control and WMC. A linear mixed-effects model was set up to assess differences between the two speech materials and the four maskers, and to test whether age and hearing ability affected performance in the speech materials or the various background noises. Results Results showed that high WMC was related to lower self-rated listening effort for informational maskers, as well as better speech recognition in noise when informational maskers were used. The linear mixed-effects model revealed differences in performance between the low-context and high-context speech materials and between the various maskers. Lastly, inhibitory control had some impact on performance in the low-context speech material when masked with an informational masker. Conclusion Different background noises, especially informational maskers, affect speech recognition and self-rated listening effort differently depending on age, hearing ability, and individual variation in WMC and inhibitory control.
Affiliation(s)
- Victoria Stenbäck
- Linnaeus Centre HEAD, Linköping University, Sweden
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Erik Marsja
- Linnaeus Centre HEAD, Linköping University, Sweden
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mathias Hällgren
- Department of Otorhinolaryngology in Östergötland, Linköping University, Sweden
- Department of Biomedical and Clinical Sciences, Linköping University, Sweden
- Björn Lyxell
- Linnaeus Centre HEAD, Linköping University, Sweden
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Birgitta Larsby
- Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Sweden
22
Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. [PMID: 34256138 PMCID: PMC8503862 DOI: 10.1016/j.neuroimage.2021.118385] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 10/27/2022] Open
Abstract
In this study, we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which speech is distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., untrained listeners of vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States.
- Samuel Forbes
- Psychology, University of East Anglia, Norwich, England.
- Mark Hedrick
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States.
- Patrick Plyler
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States.
- Aaron T Buss
- Psychology, University of Tennessee, Knoxville, TN, United States.
23
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Understanding speech in noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear whether decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than for those high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
24
Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021; 409:108333. [PMID: 34425347 PMCID: PMC8424701 DOI: 10.1016/j.heares.2021.108333] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 07/17/2021] [Accepted: 08/04/2021] [Indexed: 01/13/2023]
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess whether performance on auditory tasks relying on temporal envelope processing reveals age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested on a series of psychophysical, electrophysiological, and speech-perception measures with stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores from nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separations, while age was a significant contributor.
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
- Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
25
Colby S, McMurray B. Cognitive and Physiological Measures of Listening Effort During Degraded Speech Perception: Relating Dual-Task and Pupillometry Paradigms. J Speech Lang Hear Res 2021; 64:3627-3652. [PMID: 34491779 PMCID: PMC8642090 DOI: 10.1044/2021_jslhr-20-00583] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 04/01/2021] [Accepted: 05/21/2021] [Indexed: 06/13/2023]
Abstract
Purpose Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of subjects on both tasks in Experiment 3 (n = 30). Results In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks and neither task predicted speech accuracy. Conclusions Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort. Supplemental Material https://doi.org/10.23641/asha.16455900.
Affiliation(s)
- Sarah Colby
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
- Bob McMurray
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
26
Yuen NH, Tam F, Churchill NW, Schweizer TA, Graham SJ. Driving With Distraction: Measuring Brain Activity and Oculomotor Behavior Using fMRI and Eye-Tracking. Front Hum Neurosci 2021; 15:659040. [PMID: 34483861 PMCID: PMC8415783 DOI: 10.3389/fnhum.2021.659040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 06/29/2021] [Indexed: 11/13/2022] Open
Abstract
Introduction Driving motor vehicles is a complex task that depends heavily on how visual stimuli are received and subsequently processed by the brain. The potential impact of distraction on driving performance is well known and poses a safety concern - especially for individuals with cognitive impairments who may be clinically unfit to drive. The present study is the first to combine functional magnetic resonance imaging (fMRI) and eye-tracking during simulated driving with distraction, providing oculomotor metrics to enhance scientific understanding of the brain activity that supports driving performance. Materials and Methods As initial work, twelve healthy young, right-handed participants performed turns ranging in complexity, including simple right and left turns without oncoming traffic, and left turns with oncoming traffic. Distraction was introduced as an auditory task during straight driving, and during left turns with oncoming traffic. Eye-tracking data were recorded during fMRI to characterize fixations, saccades, pupil diameter and blink rate. Results Brain activation maps for right turns, left turns without oncoming traffic, left turns with oncoming traffic, and the distraction conditions were largely consistent with previous literature reporting the neural correlates of simulated driving. When the effects of distraction were evaluated for left turns with oncoming traffic, increased activation was observed in areas involved in executive function (e.g., middle and inferior frontal gyri) as well as decreased activation in the posterior brain (e.g., middle and superior occipital gyri). Whereas driving performance remained mostly unchanged (e.g., turn speed, time to turn, collisions), the oculomotor measures showed that distraction resulted in more consistent gaze at oncoming traffic in a small area of the visual scene; less time spent gazing at off-road targets (e.g., speedometer, rear-view mirror); more time spent performing saccadic eye movements; and decreased blink rate. Conclusion Oculomotor behavior modulated with driving task complexity and distraction in a manner consistent with the brain activation features revealed by fMRI. The results suggest that eye-tracking technology should be included in future fMRI studies of simulated driving behavior in targeted populations, such as the elderly and individuals with cognitive complaints - ultimately toward developing better technology to assess and enhance fitness to drive.
Affiliation(s)
- Nicole H Yuen
- Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, Canada
- Fred Tam
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, Canada
- Nathan W Churchill
- Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada
- Tom A Schweizer
- Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada; Division of Neurosurgery, St. Michael's Hospital, Toronto, ON, Canada
- Simon J Graham
- Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, Canada
27
Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021; 142:296-316. [PMID: 34332197 DOI: 10.1016/j.cortex.2021.06.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 04/02/2021] [Accepted: 06/10/2021] [Indexed: 11/24/2022]
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Affiliation(s)
- Brennan R Payne
- Department of Psychology, University of Utah, USA; Interdepartmental Neuroscience Program, University of Utah, USA
28
Pauquet J, Thiel CM, Mathys C, Rosemann S. Relationship between Memory Load and Listening Demands in Age-Related Hearing Impairment. Neural Plast 2021; 2021:8840452. [PMID: 34188676 PMCID: PMC8195652 DOI: 10.1155/2021/8840452] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 04/27/2021] [Accepted: 05/24/2021] [Indexed: 01/10/2023] Open
Abstract
Age-related hearing loss has been associated with increased recruitment of frontal brain areas during speech perception to compensate for the decline in auditory input. This additional recruitment may bind resources otherwise needed for understanding speech. However, it is unknown how increased listening demands interact with increasing cognitive demands when processing speech in age-related hearing loss. The current study used a full-sentence working memory task manipulating demands on working memory and listening, and studied participants with untreated mild to moderate hearing loss (n = 20) and age-matched normal-hearing participants (n = 19) with functional MRI. On the behavioral level, we found a significant interaction of memory load and listening condition; this was, however, similar for both groups. Under low, but not high, memory load, listening condition significantly influenced task performance. Similarly, under easy, but not difficult, listening conditions, memory load had a significant effect on task performance. On the neural level, as measured by the BOLD response, we found increased responses under high compared with low memory load in the left supramarginal gyrus, left middle frontal gyrus, and left supplementary motor cortex, regardless of hearing ability. Furthermore, we found increased responses in the bilateral superior temporal gyri under easy compared with difficult listening conditions. We found no group differences and no interactions of group with memory load or listening condition. This suggests that memory load and listening condition interacted on the behavioral level; however, only increased memory load was reflected in increased BOLD responses in frontal and parietal brain regions. Hence, when evaluating listening abilities in elderly participants, memory load should be considered, as it might interfere with the assessed performance. We found no further evidence that BOLD responses for the different memory and listening conditions are affected by mild to moderate age-related hearing loss.
Affiliation(s)
- Julia Pauquet
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Christiane M. Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Christian Mathys
- Institute of Radiology and Neuroradiology, Evangelisches Krankenhaus, Carl von Ossietzky Universität Oldenburg, 26122 Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
29
Kadem M, Herrmann B, Rodd JM, Johnsrude IS. Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation. Trends Hear 2021; 24:2331216520964068. [PMID: 33124518 PMCID: PMC7607724 DOI: 10.1177/2331216520964068] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation—a physiological index of cognitive demand—while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at –2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces intelligibility of speech heard with a noisy background.
Affiliation(s)
- Mason Kadem
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Biomedical Engineering, McMaster University, Hamilton, Ontario, Canada
- Björn Herrmann
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Jennifer M Rodd
- Department of Experimental Psychology, University College London, London, United Kingdom
- Ingrid S Johnsrude
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, Ontario, Canada
30
Abstract
OBJECTIVES The aim of this study was to modify a speech perception in noise test to assess whether the presence of another individual (copresence), relative to being alone, affected listening performance and effort expenditure. Furthermore, this study assessed whether the effect of the other individual's presence on listening effort was influenced by the difficulty of the task and whether participants had to repeat the sentences they listened to or not. DESIGN Thirty-four young, normal-hearing participants (mean age: 24.7 years) listened to spoken Dutch sentences that were masked with a stationary noise masker and presented through a loudspeaker. The participants alternated between repeating sentences (active condition) and not repeating sentences (passive condition). They did this either alone or together with another participant in the booth. When together, participants took turns repeating sentences. The speech-in-noise test was performed adaptively at three intelligibility levels (20%, 50%, and 80% sentences correct) in a block-wise fashion. During testing, pupil size was recorded as an objective outcome measure of listening effort. RESULTS Lower speech intelligibility levels were associated with larger peak pupil dilations (PPDs), and doing the task in the presence of another individual (compared with doing it alone) significantly increased PPD. No interaction effect between intelligibility and copresence on PPD was found. The results suggested that the difference in PPD between doing the task alone and together was especially apparent for people who started the experiment in the presence of another individual. Furthermore, PPD was significantly lower during passive listening compared with active listening. Finally, performance seemed unaffected by copresence. CONCLUSION The increased PPDs during listening in the presence of another participant suggest that more effort was invested during the task. However, this additional effort did not result in a change in performance. This study showed that at least one aspect of the social context of a listening situation (in this case, copresence) can affect listening effort, indicating that social context might be important to consider in future cognitive hearing research.
31
Endestad T, Godøy RI, Sneve MH, Hagen T, Bochynska A, Laeng B. Mental Effort When Playing, Listening, and Imagining Music in One Pianist's Eyes and Brain. Front Hum Neurosci 2020; 14:576888. [PMID: 33192407 PMCID: PMC7593683 DOI: 10.3389/fnhum.2020.576888] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Accepted: 09/07/2020] [Indexed: 01/17/2023] Open
Abstract
We investigated "musical effort" with an internationally renowned classical pianist while playing, listening to, and imagining music. We used pupillometry as an objective measure of mental effort and fMRI as an exploratory measure of effort with the same musical pieces. We also compared a group of non-professional pianists and non-musicians using pupillometry, and a small group of non-musicians using fMRI. This combined approach of psychophysiology and neuroimaging revealed the cognitive work during different musical activities. We found that pupil diameters were largest when "playing" (regardless of whether sound was produced or not) compared with conditions involving no movement (i.e., "listening" and "imagery"). We found positive correlations between the professional pianist's pupil diameters during different conditions with the same piano piece (i.e., normal playing, silenced playing, listening, imagining), which might indicate similar loads on cognitive resources as well as an intimate link between the motor imagery of sound-producing body motions and gestures. We also confirmed that musical imagery had a strong commonality with music listening in both pianists and musically naïve individuals. Neuroimaging provided evidence for a relationship between noradrenergic (NE) activity and mental workload or attentional intensity within the domain of music cognition. We found effort-related activity in the superior part of the locus coeruleus (LC) and, similarly to the pupil, listening and imagery engaged the LC-NE network less than the motor condition did. The pianists attended more intensively to the most difficult piece than the non-musicians, since they showed larger pupils for the most difficult piece. Non-musicians were the most engaged by the music listening task, suggesting that the amount of attention allocated to the same task may follow a hierarchy of expertise, demanding less attentional effort in experts or performers than in novices. In the professional pianist, we found only weak evidence for a commonality between subjective effort (as rated measure-by-measure) and the objective effort gauged with pupil diameter during listening. We suggest that psychophysiological methods like pupillometry can index mental effort in a manner that is not available to subjective awareness or introspection.
Affiliation(s)
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Helgelandssykehuset, Mosjøen, Norway
- Rolf Inge Godøy
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Thomas Hagen
- Department of Psychology, University of Oslo, Oslo, Norway
- Agata Bochynska
- Department of Psychology, University of Oslo, Oslo, Norway
- Department of Psychology, New York University, New York, NY, United States
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
32
Wisniewski MG, Zakrzewski AC. Effects of auditory training on low-pass filtered speech perception and listening-related cognitive load. J Acoust Soc Am 2020; 148:EL394. [PMID: 33138495 PMCID: PMC7599074 DOI: 10.1121/10.0001742] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 07/20/2020] [Accepted: 07/24/2020] [Indexed: 06/11/2023]
Abstract
Studies supporting learning-induced reductions in listening-related cognitive load have lacked procedural-learning controls, making it difficult to determine the extent to which effects arise from perceptual or procedural learning. Here, listeners were trained in the coordinate response measure (CRM) task under unfiltered (UT) or degraded, low-pass filtered (FT) conditions. Improvements in low-pass filtered CRM performance were larger for FT. Both conditions showed training-related reductions in cognitive load as indexed by a secondary working memory task. However, only the FT condition showed a correlation between CRM improvement and secondary task performance, suggesting that such effects can be driven by both perceptual and procedural learning.
Affiliation(s)
- Matthew G Wisniewski
- Department of Psychological Sciences, Kansas State University, 1114 Mid-Campus Drive North ,
| | - Alexandria C Zakrzewski
- Department of Psychological Sciences, Kansas State University, 1114 Mid-Campus Drive North ,
33
Abstract
Like all human activities, verbal communication is fraught with errors. It is estimated that humans produce around 16,000 words per day, but the word that is selected for production is not always correct, nor is the articulation always flawless. However, to facilitate communication, it is important to limit the number of errors. This is accomplished via the verbal monitoring mechanism. A body of research over the last century has uncovered a number of properties of the mechanisms at work during verbal monitoring. Over a dozen routes for verbal monitoring have been postulated. However, to date no complete account of verbal monitoring exists. In the current paper we first outline the properties of verbal monitoring that have been empirically demonstrated. This is followed by a discussion of current verbal monitoring models: the perceptual loop theory, conflict monitoring, the hierarchical state feedback control model, and the forward model theory. Each of these models is evaluated in light of empirical findings and theoretical considerations. We then outline lacunae of current theories, which we address with a proposal for a new model of verbal monitoring for production and perception, based on conflict monitoring models. Additionally, this novel model suggests a mechanism for how a detected error leads to a correction. The error resolution mechanism proposed in our new model is then tested in a computational model. Finally, we outline the advances and predictions of the model.
34
Zekveld AA, van Scheepen JAM, Versfeld NJ, Kramer SE, van Steenbergen H. The Influence of Hearing Loss on Cognitive Control in an Auditory Conflict Task: Behavioral and Pupillometry Findings. J Speech Lang Hear Res 2020; 63:2483-2492. [PMID: 32610026 DOI: 10.1044/2020_jslhr-20-00107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The pupil dilation response is sensitive not only to auditory task demand but also to cognitive conflict. Conflict is induced by incompatible trials in auditory Stroop tasks in which participants have to identify the presentation location (left or right ear) of the words "left" or "right." Previous studies demonstrated that the compatibility effect is reduced if the trial is preceded by another incompatible trial (conflict adaptation). Here, we investigated the influence of hearing status on cognitive conflict and conflict adaptation in an auditory Stroop task. Method Two age-matched groups consisting of 32 normal-hearing participants (M age = 52 years, age range: 25-67 years) and 28 participants with hearing impairment (M age = 52 years, age range: 23-64 years) performed an auditory Stroop task. We assessed the effects of hearing status and stimulus compatibility on reaction times (RTs) and pupil dilation responses. We furthermore analyzed the Pearson correlation coefficients between age, degree of hearing loss, and the compatibility effects on the RT and pupil response data across all participants. Results As expected, the RTs were longer and pupil dilation was larger for incompatible relative to compatible trials. Furthermore, these effects were reduced for trials following incompatible (as compared to compatible) trials (conflict adaptation). No general effect of hearing status was observed, but the correlations suggested that higher age and a larger degree of hearing loss were associated with more interference of current incompatibility on RTs. Conclusions Conflict processing and adaptation effects were observed on the RTs and pupil dilation responses in an auditory Stroop task. No general effects of hearing status were observed, but the correlations suggested that higher age and a greater degree of hearing loss were related to reduced conflict processing ability. The current study underlines the relevance of taking into account cognitive control and conflict adaptation processes.
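The conflict-adaptation measure described above, a compatibility effect that shrinks when the preceding trial was incompatible, can be illustrated with a minimal sketch (the reaction times below are invented for illustration and are not taken from the study):

```python
# Hypothetical mean reaction times (ms), keyed by
# (previous-trial compatibility, current-trial compatibility).
rt = {
    ("compatible", "compatible"): 520,
    ("compatible", "incompatible"): 600,
    ("incompatible", "compatible"): 535,
    ("incompatible", "incompatible"): 585,
}

# Stroop (compatibility) effect after each type of preceding trial.
effect_after_compatible = rt[("compatible", "incompatible")] - rt[("compatible", "compatible")]
effect_after_incompatible = rt[("incompatible", "incompatible")] - rt[("incompatible", "compatible")]

# Conflict adaptation: the compatibility effect is reduced after a conflict trial.
conflict_adaptation = effect_after_compatible - effect_after_incompatible
assert conflict_adaptation > 0  # positive value indicates adaptation
```

With these illustrative values the compatibility effect drops from 80 ms to 50 ms after an incompatible trial, yielding a 30 ms adaptation score.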
Affiliation(s)
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- J A M van Scheepen
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Henk van Steenbergen
- Cognitive Psychology Unit, Institute of Psychology, University of Leiden, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands
35
Reilly J, Zuckerman B, Kelly A, Flurie M, Rao S. Neuromodulation of cursing in American English: A combined tDCS and pupillometry study. Brain Lang 2020; 206:104791. [PMID: 32339951 DOI: 10.1016/j.bandl.2020.104791] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 02/03/2020] [Accepted: 03/20/2020] [Indexed: 06/11/2023]
Abstract
Many neurological disorders are associated with excessive and/or uncontrolled cursing. The right prefrontal cortex has long been implicated in a diverse range of cognitive processes that underlie the propensity for cursing, including non-propositional language representation, emotion regulation, theory of mind, and affective arousal. Neurogenic cursing often poses significant negative social consequences, and there is no known behavioral intervention for this communicative disorder. We examined whether right vs. left lateralized prefrontal neurostimulation via tDCS could modulate taboo word production in neurotypical adults. We employed a pre/post design with a bilateral frontal electrode montage. Half the participants received left anodal and right cathodal stimulation; the remainder received the opposite polarity stimulation at the same anatomical loci. We employed physiological (pupillometry) and behavioral (reaction time) dependent measures as participants read aloud taboo and non-taboo words. Pupillary responses demonstrated a crossover interaction, suggestive of modulation of phasic arousal during cursing. Participants in the right anodal condition showed elevated pupil responses for taboo words post stimulation. In contrast, participants in the right cathodal condition showed relative dampening of pupil responses for taboo words post stimulation. We observed no effects of stimulation on response times. We interpret these findings as supporting modulation of right hemisphere affective arousal that disproportionately impacts taboo word processing. We discuss alternate accounts of the data and future applications to neurological disorders.
Affiliation(s)
- Jamie Reilly
- Eleanor M. Saffran Center for Cognitive Neuroscience, USA; Department of Communication Sciences and Disorders, Temple University, Philadelphia, PA, USA.
- Bonnie Zuckerman
- Eleanor M. Saffran Center for Cognitive Neuroscience, USA; Department of Communication Sciences and Disorders, Temple University, Philadelphia, PA, USA
- Alexandra Kelly
- Department of Psychology, Drexel University, Philadelphia, PA, USA
- Maurice Flurie
- Eleanor M. Saffran Center for Cognitive Neuroscience, USA; Department of Communication Sciences and Disorders, Temple University, Philadelphia, PA, USA
- Sagar Rao
- Swarthmore College, Swarthmore, PA, USA
36
Decruy L, Lesenfants D, Vanthornhout J, Francart T. Top-down modulation of neural envelope tracking: The interplay with behavioral, self-report and neural measures of listening effort. Eur J Neurosci 2020; 52:3375-3393. [PMID: 32306466 DOI: 10.1111/ejn.14753] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Revised: 04/09/2020] [Accepted: 04/11/2020] [Indexed: 11/27/2022]
Abstract
When listening to natural speech, our brain activity tracks the slow amplitude modulations of speech, also called the speech envelope. Moreover, recent research has demonstrated that this neural envelope tracking can be affected by top-down processes. The present study was designed to examine whether neural envelope tracking is modulated by the effort that a person expends during listening. Five measures were included to quantify listening effort: two behavioral measures based on a novel dual-task paradigm, a self-report effort measure and two neural measures related to phase synchronization and alpha power. Electroencephalography responses to sentences, presented at a wide range of subject-specific signal-to-noise ratios, were recorded in thirteen young, normal-hearing adults. A comparison of the five measures revealed different effects of listening effort as a function of speech understanding. Reaction times on the primary task and self-reported effort decreased with increasing speech understanding. In contrast, reaction times on the secondary task and alpha power showed a peak-shaped behavior with highest effort at intermediate speech understanding levels. With regard to neural envelope tracking, we found that the reaction times on the secondary task and self-reported effort explained a small part of the variability in theta-band envelope tracking. Speech understanding was found to strongly modulate neural envelope tracking. More specifically, our results demonstrated a robust increase in envelope tracking with increasing speech understanding. The present study provides new insights into the relations among different effort measures and highlights the potential of neural envelope tracking to objectively measure speech understanding in young, normal-hearing adults.
Affiliation(s)
- Lien Decruy
- Department of Neurosciences Research, Group Experimental Oto-rhino-laryngology (ExpORL), KU Leuven, Leuven, Belgium
- Damien Lesenfants
- Department of Neurosciences Research, Group Experimental Oto-rhino-laryngology (ExpORL), KU Leuven, Leuven, Belgium
- Jonas Vanthornhout
- Department of Neurosciences Research, Group Experimental Oto-rhino-laryngology (ExpORL), KU Leuven, Leuven, Belgium
- Tom Francart
- Department of Neurosciences Research, Group Experimental Oto-rhino-laryngology (ExpORL), KU Leuven, Leuven, Belgium
37
Parthasarathy A, Hancock KE, Bennett K, DeGruttola V, Polley DB. Bottom-up and top-down neural signatures of disordered multi-talker speech perception in adults with normal hearing. eLife 2020; 9:e51419. [PMID: 31961322 PMCID: PMC6974362 DOI: 10.7554/elife.51419] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Accepted: 12/15/2019] [Indexed: 12/16/2022] Open
Abstract
In social settings, speech waveforms from nearby speakers mix together in our ear canals. Normally, the brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Of >100,000 patient records, ~10% of adults visited our clinic because of reduced hearing, only to learn that their hearing was clinically normal and should not cause communication difficulties. We found that multi-talker speech intelligibility thresholds varied widely in normal hearing adults, but could be predicted from neural phase-locking to frequency modulation (FM) cues measured with ear canal EEG recordings. Combining neural temporal fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted for 78% of the variability in multi-talker speech intelligibility. The disordered bottom-up and top-down markers of poor multi-talker speech perception identified here could inform the design of next-generation clinical tests for hidden hearing disorders.
Affiliation(s)
- Aravindakshan Parthasarathy
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, United States
- Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, United States
- Kara Bennett
- Bennett Statistical Consulting Inc, Ballston, United States
- Victor DeGruttola
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, United States
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, United States
38
Rosemann S, Thiel CM. Neural Signatures of Working Memory in Age-related Hearing Loss. Neuroscience 2020; 429:134-142. [PMID: 31935488 DOI: 10.1016/j.neuroscience.2019.12.046] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Revised: 12/20/2019] [Accepted: 12/29/2019] [Indexed: 11/17/2022]
Abstract
Age-related hearing loss affects the ability to hear high frequencies and therefore leads to difficulties in understanding speech, particularly under adverse listening conditions. This decrease in hearing can be partly compensated by the recruitment of executive functions, such as working memory. The compensatory effort may, however, lead to a decrease in available neural resources compromising cognitive abilities. We here aim to investigate whether mild to moderate hearing loss impacts prefrontal functions and related executive processes and whether these are related to speech-in-noise perception abilities. Nineteen hard of hearing and nineteen age-matched normal-hearing participants performed a working memory task to drive prefrontal activity, which was gauged with functional magnetic resonance imaging. In addition, speech-in-noise understanding, cognitive flexibility and inhibition control were assessed. Our results showed no differences in frontoparietal activation patterns and working memory performance between normal-hearing and hard of hearing participants. The behavioral assessment of further executive functions, however, provided evidence of lower cognitive flexibility in hard of hearing participants. Cognitive flexibility and hearing abilities further predicted speech-in-noise perception. We conclude that neural and behavioral signatures of working memory are intact in mild to moderate hearing loss. Moreover, cognitive flexibility seems to be closely related to hearing impairment and speech-in-noise perception and should, therefore, be investigated in future studies assessing age-related hearing loss and its implications on prefrontal functions.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany.
- Christiane M Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
39
Ayasse ND, Wingfield A. Anticipatory Baseline Pupil Diameter Is Sensitive to Differences in Hearing Thresholds. Front Psychol 2020; 10:2947. [PMID: 31998196 PMCID: PMC6965006 DOI: 10.3389/fpsyg.2019.02947] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Accepted: 12/12/2019] [Indexed: 12/23/2022] Open
Abstract
Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual’s arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching those of the better hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply that BPD reflects a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success. These data taken with others suggest that the baseline pupillary response may not reflect a single construct.
Affiliation(s)
- Nicolai D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
40
Human Auditory Detection and Discrimination Measured with the Pupil Dilation Response. J Assoc Res Otolaryngol 2019; 21:43-59. [PMID: 31792632 PMCID: PMC7062948 DOI: 10.1007/s10162-019-00739-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 11/01/2019] [Indexed: 12/04/2022] Open
Abstract
In the standard Hughson-Westlake hearing tests (Carhart and Jerger 1959), patient responses like a button press, raised hand, or verbal response are used to assess detection of brief test signals such as tones of varying pitch and level. Because of its reliance on voluntary responses, Hughson-Westlake audiometry is not suitable for patients who cannot follow instructions reliably, such as pre-lingual infants (Northern and Downs 2002). As an alternative approach, we explored the use of the pupillary dilation response (PDR), a short-latency component of the orienting response evoked by novel stimuli, as an indicator of sound detection. The pupils of 31 adult participants (median age 24 years) were monitored with an infrared video camera during a standard hearing test in which they indicated by button press whether or not they heard narrowband noises centered at 1, 2, 4, and 8 kHz. Tests were conducted in a quiet, carpeted office. Pupil size was summed over the first 1750 ms after stimulus delivery, excluding later dilations linked to expenditure of cognitive effort (Kahneman and Beatty 1966; Kahneman et al. 1969). The PDR yielded thresholds comparable to the standard test at all center frequencies tested, suggesting that the PDR is as sensitive as traditional methods of assessing detection. We also tested the effects of repeating a stimulus on the habituation of the PDR. Results showed that habituation can be minimized by operating at near-threshold stimulus levels. At sound levels well above threshold, the PDR habituated but could be recovered by changing the frequency or sound level, suggesting that the PDR can also be used to test stimulus discrimination. Given these features, the PDR may be useful as an audiometric tool or as a means of assessing auditory discrimination in those who cannot produce a reliable voluntary response.
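The detection statistic described above, summing pupil size over the first 1750 ms after stimulus delivery, can be sketched as follows (the trace, sampling rate, and function name are illustrative assumptions, not the authors' code):

```python
import numpy as np

def pdr_score(pupil_trace, fs, window_s=1.75):
    """Sum pupil size over the first window_s seconds after stimulus onset
    (the trace is assumed to be baseline-corrected and to start at onset)."""
    n_samples = int(window_s * fs)
    return float(np.sum(np.asarray(pupil_trace[:n_samples], dtype=float)))

# Illustrative traces sampled at 60 Hz: no response vs. a transient dilation.
fs = 60
t = np.arange(0, 1.75, 1.0 / fs)
no_response = np.zeros_like(t)
dilation = np.exp(-((t - 0.9) ** 2) / 0.1)  # brief dilation peaking near 0.9 s

assert pdr_score(dilation, fs) > pdr_score(no_response, fs)
```

Restricting the sum to this early window is what excludes the later, effort-linked dilations the authors mention.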
41
Giannakos MN, Sharma K, Pappas IO, Kostakos V, Velloso E. Multimodal data as a means to understand the learning experience. Int J Inf Manag 2019. [DOI: 10.1016/j.ijinfomgt.2019.02.003] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
42
Zhao S, Chait M, Dick F, Dayan P, Furukawa S, Liao HI. Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences. Nat Commun 2019; 10:4030. [PMID: 31492881 PMCID: PMC6731273 DOI: 10.1038/s41467-019-12048-1] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Accepted: 08/19/2019] [Indexed: 11/09/2022] Open
Abstract
The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty - events which deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal, initiating model-resetting. However, existing evidence comes from paradigms in which participants actively monitored stimulus statistics. To determine whether norepinephrine routinely reports the statistical structure of our surroundings, even when it is not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern changes associated with abrupt structural violations vs. the emergence of regular structure. Phasic pupil dilation responses (PDR) were monitored as an index of norepinephrine activity. We reveal a remarkable specificity: when not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that norepinephrine tracks unexpected uncertainty on rapid time scales relevant to sensory signals.
Affiliation(s)
- Sijia Zhao
- Ear Institute, University College London, London, WC1X 8EE, UK
- Maria Chait
- Ear Institute, University College London, London, WC1X 8EE, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck College, London, WC1E 7HX, UK
- Department of Experimental Psychology, University College London, London, WC1H 0DS, UK
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Shigeto Furukawa
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, 243-0198, Japan
- Hsin-I Liao
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, 243-0198, Japan
43
Zhang M, Siegle GJ, McNeil MR, Pratt SR, Palmer C. The role of reward and task demand in value-based strategic allocation of auditory comprehension effort. Hear Res 2019; 381:107775. [DOI: 10.1016/j.heares.2019.107775] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 07/30/2019] [Accepted: 07/31/2019] [Indexed: 12/19/2022]
44
Zekveld AA, van Scheepen JA, Versfeld NJ, Veerman EC, Kramer SE. Please try harder! The influence of hearing status and evaluative feedback during listening on the pupil dilation response, saliva-cortisol and saliva alpha-amylase levels. Hear Res 2019; 381:107768. [DOI: 10.1016/j.heares.2019.07.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Revised: 07/09/2019] [Accepted: 07/10/2019] [Indexed: 10/26/2022]
45
Zekveld AA, Kramer SE, Rönnberg J, Rudner M. In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics. Ear Hear 2019; 40:272-286. [PMID: 29923867 PMCID: PMC6400496 DOI: 10.1097/aud.0000000000000612] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Accepted: 04/10/2018] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.
Affiliation(s)
- Adriana A. Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute, VU University Medical Center, Amsterdam, The Netherlands
- Sophia E. Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute, VU University Medical Center, Amsterdam, The Netherlands
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
46
The human task-evoked pupillary response function is linear: Implications for baseline response scaling in pupillometry. Behav Res Methods 2019; 51:865-878. [PMID: 30264368 DOI: 10.3758/s13428-018-1134-4] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
The human task-evoked pupillary response provides a sensitive physiological index of the intensity and online resource demands of numerous cognitive processes (e.g., memory retrieval, problem solving, or target detection). Cognitive pupillometry is a well-established technique that relies upon precise measurement of these subtle response functions. Baseline variability of pupil diameter is a complex artifact that typically necessitates mathematical correction. A methodological paradox within pupillometry is that linear and nonlinear forms of baseline scaling both remain accepted baseline correction techniques, despite yielding highly disparate results. The task-evoked pupillary response (TEPR) could potentially scale nonlinearly, similar to autonomic functions such as heart rate, in which the amplitude of an evoked response diminishes as the baseline rises. Alternatively, the TEPR could scale similarly to the cortical hemodynamic response, as a linear function that is independent of its baseline. However, the TEPR cannot scale both linearly and nonlinearly. Our aim was to adjudicate between linear and nonlinear scaling of human TEPR. We manipulated baseline pupil size by modulating the illuminance in the testing room as participants heard abrupt pure-tone transitions (Exp. 1) or visually monitored word lists (Exp. 2). Phasic pupillary responses scaled according to a linear function across all lighting (dark, mid, bright) and task (tones, words) conditions, demonstrating that the TEPR is independent of its baseline amplitude. We discuss methodological implications and identify a need to reevaluate past pupillometry studies.
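The distinction the abstract draws between linear (subtractive) and nonlinear (divisive) baseline scaling can be made concrete with a small numerical sketch (the values and the helper names `subtractive_correction` and `divisive_correction` are hypothetical, not taken from the study):

```python
import math

def subtractive_correction(peak, baseline):
    # Linear model: the evoked amplitude is independent of baseline level.
    return peak - baseline

def divisive_correction(peak, baseline):
    # Nonlinear model: the evoked amplitude is scaled by baseline level.
    return (peak - baseline) / baseline

# The same absolute 0.3 mm dilation on a small vs. a large baseline diameter.
small, large = 3.0, 6.0
assert math.isclose(subtractive_correction(small + 0.3, small),
                    subtractive_correction(large + 0.3, large))
assert math.isclose(divisive_correction(large + 0.3, large),
                    divisive_correction(small + 0.3, small) / 2)
```

Under subtractive correction the two trials yield identical evoked amplitudes, whereas divisive correction halves the apparent response at the doubled baseline, which is why the two conventions produce highly disparate results from the same data.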
47
Cagiltay NE, Menekse Dalveren GG. Are Left- and Right-Eye Pupil Sizes Always Equal? J Eye Mov Res 2019; 12:10.16910/jemr.12.2.1. [PMID: 33828724 PMCID: PMC7881883 DOI: 10.16910/jemr.12.2.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Eye movements provide critical information about the cognitive load and behaviors of human beings. Earlier studies report that, under normal conditions, the left- and right-eye pupil sizes are equal. For this reason, most eye-movement analyses consider the pupil size of only a single eye, or take the average of both pupils. This study investigates whether there are differences between the left- and right-eye pupil sizes of right-handed surgical residents performing surgical tasks in a computer-based simulation environment under different conditions (left hand, right hand, and both hands). According to the results, in many cases the participants' right-eye pupil sizes were larger than their left-eye pupil sizes while performing tasks under the right-hand and both-hands conditions. However, no significant difference was found for tasks performed under the left-hand condition in any scenario. These results help shed further light on the cognitive load of surgical residents by analyzing their left-eye and right-eye pupil sizes separately. Further research is required to investigate how the difficulty level of each scenario, its appropriateness for the skill level of the participants, and handedness affect the differences between the left- and right-eye pupil sizes.
Affiliation(s)
- Nergiz Ercil Cagiltay
- Atilim University, Faculty of Engineering, Department of Software Engineering, Ankara, Turkey
48
Peinkhofer C, Knudsen GM, Moretti R, Kondziella D. Cortical modulation of pupillary function: systematic review. PeerJ 2019;7:e6882. [PMID: 31119083; PMCID: PMC6510220; DOI: 10.7717/peerj.6882]
Abstract
BACKGROUND The pupillary light reflex is the main mechanism that regulates the pupillary diameter; it is controlled by the autonomic system and mediated by subcortical pathways. In addition, cognitive and emotional processes influence pupillary function via cortical input, but the exact circuits remain poorly understood. We performed a systematic review to evaluate the mechanisms behind pupillary changes associated with cognitive effort and the processing of emotions, and to investigate the cerebral areas involved in cortical modulation of the pupillary light reflex. METHODOLOGY We searched multiple databases until November 2018 for studies on cortical modulation of pupillary function in humans and non-human primates. Of 8,809 papers screened, 258 studies were included. RESULTS Most investigators focused on pupillary dilatation and/or constriction as an index of cognitive and emotional processing, evaluating how changes in pupillary diameter reflect levels of attention and arousal. Only a few attempted to correlate specific cerebral areas with pupillary changes, using either cortical activation models (employing micro-stimulation of cortical structures in non-human primates) or cortical lesion models (e.g., investigating patients with stroke and damage to salient cortical and/or subcortical areas). Results suggest the involvement of several cortical regions, including the insular cortex (Brodmann areas 13 and 16), the frontal eye field (Brodmann area 8), and the prefrontal cortex (Brodmann areas 11 and 25), and of subcortical structures such as the locus coeruleus and the superior colliculus. CONCLUSIONS Pupillary dilatation occurs with many kinds of mental or emotional processes, following sympathetic activation or parasympathetic inhibition. Conversely, pupillary constriction may occur in anticipation of a bright stimulus (even in its absence) and relies on parasympathetic activation. All these reactions are controlled by subcortical and cortical structures that are directly or indirectly connected to the brainstem pupillary innervation system.
Affiliation(s)
- Costanza Peinkhofer
- Department of Neurology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Medical Faculty, University of Trieste, Trieste, Italy
- Gitte M. Knudsen
- Department of Neurology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Neurobiology Research Unit, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Faculty of Health and Medical Science, University of Copenhagen, Copenhagen, Denmark
- Rita Moretti
- Medical Faculty, University of Trieste, Trieste, Italy
- Department of Medical, Surgical and Health Sciences, Neurological Unit, Trieste University Hospital, Cattinara, Trieste, Italy
- Daniel Kondziella
- Department of Neurology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Faculty of Health and Medical Science, University of Copenhagen, Copenhagen, Denmark
- Department of Neuroscience, Norwegian University of Technology and Science, Trondheim, Norway
49
Winn MB, Moore AN. Pupillometry Reveals That Context Benefit in Speech Perception Can Be Disrupted by Later-Occurring Sounds, Especially in Listeners With Cochlear Implants. Trends Hear 2019;22:2331216518808962. [PMID: 30375282; PMCID: PMC6207967; DOI: 10.1177/2331216518808962]
Abstract
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits). Participants were listeners with cochlear implants or with normal hearing using a 12-channel noise vocoder. Context-related differences in pupil dilation were greater for normal-hearing than for cochlear-implant listeners, even when scaled for differences in pupil reactivity. The benefit of context was systematically reduced for both groups by the presence of the later-occurring sounds, including virtually complete negation when sentences were followed by another attended utterance. These results challenge how we interpret the benefit of context in experiments that present just one utterance at a time. If a listener uses context to "repair" part of a sentence, and later-occurring auditory stimuli interfere with that repair process, the benefit of context might not survive outside the idealized laboratory or clinical environment. Elevated listening effort in hearing-impaired listeners might therefore result not just from poor auditory encoding but also from inefficient use of context and from prolonged processing of misperceived utterances competing with perception of incoming speech.
Affiliation(s)
- Matthew B Winn
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Ashley N Moore
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
50
Müller JA, Wendt D, Kollmeier B, Debener S, Brand T. Effect of Speech Rate on Neural Tracking of Speech. Front Psychol 2019;10:449. [PMID: 30906273; PMCID: PMC6418035; DOI: 10.3389/fpsyg.2019.00449]
Abstract
Speech comprehension requires effort in demanding listening situations. Selective attention may be required for focusing on a specific talker in a multi-talker environment, may enhance effort by requiring additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener's neural response. The aim of the study was to investigate the relation of listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and using two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants' task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate showed a strong influence on subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence both measures differently. Consequently, the amount of listening effort is not clearly represented in the neural tracking.
Affiliation(s)
- Jana Annina Müller
- Cluster of Excellence ‘Hearing4all’, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Medizinische Physik, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Dorothea Wendt
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Eriksholm Research Centre, Snekkersten, Denmark
- Birger Kollmeier
- Cluster of Excellence ‘Hearing4all’, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Medizinische Physik, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stefan Debener
- Cluster of Excellence ‘Hearing4all’, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Thomas Brand
- Cluster of Excellence ‘Hearing4all’, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Medizinische Physik, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany