1. Herrmann B, Ryan JD. Pupil Size and Eye Movements Differently Index Effort in Both Younger and Older Adults. J Cogn Neurosci 2024;36:1325-1340. PMID: 38683698. DOI: 10.1162/jocn_a_02172.
Abstract
The assessment of mental effort is increasingly relevant in neurocognitive and life span domains. Pupillometry, the measurement of pupil size, is often used to assess effort but has disadvantages. Analysis of eye movements may provide an alternative, but research has been limited to easy and difficult task demands in younger adults. An effort measure must be sensitive to the whole effort profile, including "giving up" effort investment, and capture effort in different age groups. The current study comprised three experiments in which younger (n = 66) and older (n = 44) adults listened to speech masked by background babble at different signal-to-noise ratios associated with easy, difficult, and impossible speech comprehension. We expected individuals to invest little effort for easy and impossible speech (giving up) but to exert effort for difficult speech. Indeed, pupil size was largest for difficult speech but smaller for easy and impossible speech. In contrast, gaze dispersion decreased with increasing speech masking in both age groups. Critically, gaze dispersion during difficult speech returned to levels similar to easy speech after sentence offset, when acoustic stimulation was similar across conditions, whereas gaze dispersion during impossible speech continued to be reduced. These findings show that a reduction in eye movements is not a byproduct of acoustic factors, but instead suggest that neurocognitive processes, distinct from the arousal-related systems regulating pupil size, drive reduced eye movements during high task demands. The current data thus show that effort in one sensory domain (audition) differentially impacts distinct functional properties in another sensory domain (vision).
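The abstract does not define how gaze dispersion was computed; as a rough illustration (an assumption for exposition, not necessarily the authors' exact metric), it can be operationalized as the root-mean-square distance of gaze samples from their centroid:

```python
import numpy as np

def gaze_dispersion(x, y):
    """RMS distance of gaze samples (x, y) from their centroid.

    Lower values mean gaze stays clustered near one point,
    consistent with reduced eye movements under high demand.
    Hypothetical helper; the published metric may differ.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sqrt(np.mean(dx**2 + dy**2))

# Tightly clustered gaze vs. widely dispersed gaze
tight = gaze_dispersion([0, 1, 0, 1], [0, 0, 1, 1])
wide = gaze_dispersion([0, 10, 0, 10], [0, 0, 10, 10])
assert wide > tight
```

On this definition, the reported condition effect would appear as smaller per-trial dispersion values in the harder masking conditions.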
Affiliation(s)
- Björn Herrmann: Rotman Research Institute, North York, Ontario, Canada; University of Toronto, Ontario, Canada
- Jennifer D Ryan: Rotman Research Institute, North York, Ontario, Canada; University of Toronto, Ontario, Canada
2. Hu J, Vetter P. How the eyes respond to sounds. Ann N Y Acad Sci 2024;1532:18-36. PMID: 38152040. DOI: 10.1111/nyas.15093.
Abstract
Eye movements have been extensively studied with respect to visual stimulation. However, we live in a multisensory world, and how the eyes are driven by other senses has been explored much less. Here, we review the evidence on how audition can trigger and drive different eye responses and which cortical and subcortical neural correlates are involved. We provide an overview of how different types of sounds, from simple tones and noise bursts to spatially localized sounds and complex linguistic stimuli, influence saccades, microsaccades, smooth pursuit, pupil dilation, and eye blinks. The reviewed evidence reveals how the auditory system interacts with the oculomotor system, both behaviorally and neurally, and how this differs from visually driven eye responses. Some evidence points to multisensory interaction, and potential multisensory integration, but the underlying computational and neural mechanisms are still unclear. While there are marked differences in how the eyes respond to auditory compared to visual stimuli, many aspects of auditory-evoked eye responses remain underexplored, and we summarize the key open questions for future research.
Affiliation(s)
- Junchao Hu: Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Petra Vetter: Visual and Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg, Switzerland
3. Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023;14:1232262. PMID: 38023001. PMCID: PMC10646334. DOI: 10.3389/fpsyg.2023.1232262.
Abstract
Introduction: The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception.
Methods: Pupillometric traces were recorded from a sample of 22 Danish-speaking adults with self-reported normal hearing while they performed two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input processed with a vocoder.
Results: No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence than for sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli but not for the isolated nonwords, although performance decreased similarly in both tasks.
Conclusion: Our findings demonstrate the complexity of pupil dynamics under acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected pupil dilation.
Affiliation(s)
- Julia S. C. Chiossi: Oticon A/S, Smørum, Denmark; Department of Special Needs Education, University of Oslo, Oslo, Norway
- Elaine Hoi Ning Ng: Oticon A/S, Smørum, Denmark; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lyxell: Department of Special Needs Education, University of Oslo, Oslo, Norway
4. Thye M, Hoffman P, Mirman D. The words that little by little revealed everything: Neural response to lexical-semantic content during narrative comprehension. Neuroimage 2023;276:120204. PMID: 37257674. DOI: 10.1016/j.neuroimage.2023.120204.
Abstract
The ease with which narratives are understood belies the complexity of the information being conveyed and the cognitive processes that support comprehension. The meanings of the words must be rapidly accessed and integrated with the reader's mental representation of the overarching, unfolding scenario. A broad, bilateral brain network is engaged by this process, but it is not clear how words that vary on specific semantic dimensions, such as ambiguity, emotion, or socialness, engage the semantic, semantic control, or social cognition systems. In the present study, data from 48 participants who listened to The Little Prince audiobook during MRI scanning were selected from the Le Petit Prince dataset. The lexical and semantic content within the narrative was quantified from the transcript words with factor scores capturing Word Length, Semantic Flexibility, Emotional Strength, and Social Impact. These scores, along with word quantity variables, were used to investigate where these predictors co-vary with activation across the brain. In contrast to studies of isolated word processing, large networks were found to co-vary with the lexical and semantic content within the narrative. An increase in semantic content engaged the ventral portion of ventrolateral ATL, consistent with its role as a semantic hub. Decreased semantic content engaged temporal pole and inferior parietal lobule, which may reflect semantic integration. The semantic control network was engaged by words with low Semantic Flexibility, perhaps due to the demand required to process infrequent, less semantically diverse language. Activation in ATL co-varied with an increase in Social Impact, which is consistent with the claim that social knowledge is housed within the neural architecture of the semantic system. These results suggest that current models of language processing may present an impoverished estimate of the neural systems that coordinate to support narrative comprehension, and, by extension, real-world language processing.
Affiliation(s)
- Melissa Thye: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
- Paul Hoffman: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
- Daniel Mirman: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
5. Haro J, López-Cortés N, Ferré P. Pupillometric and behavioural evidence shows no differences between polyseme and homonym processing. Acta Psychol (Amst) 2023;238:103985. PMID: 37453281. DOI: 10.1016/j.actpsy.2023.103985.
Abstract
Ambiguous words can have related meanings (polysemes, e.g., newspaper) or unrelated meanings (homonyms, e.g., bat). Here we examined the processing of both types of ambiguous words (as well as unambiguous words) in tasks with increasing levels of semantic engagement. Four experiments were conducted in which the degree of semantic engagement of the task was manipulated: a lexical decision task (LDT; Experiments 1 and 2), a semantic categorization task (Experiment 3), and a number-of-meanings task (Experiment 4). RTs and pupillary responses were recorded. To our knowledge, pupillary response had never before been used to study the processing of ambiguous words in isolation. Results showed faster RTs for ambiguous words than for unambiguous words in the LDT, and larger pupil dilation for ambiguous words than for unambiguous ones in the number-of-meanings task. However, differences between polysemes and homonyms were not observed in any task. These results provide no evidence that polysemes and homonyms are processed differently.
Affiliation(s)
- Juan Haro: Universitat Rovira i Virgili, Department of Psychology, Research Center for Behaviour Assessment (CRAMC), Tarragona, Spain
- Pilar Ferré: Universitat Rovira i Virgili, Department of Psychology, Research Center for Behaviour Assessment (CRAMC), Tarragona, Spain
6. Winn MB. Time Scales and Moments of Listening Effort Revealed in Pupillometry. Semin Hear 2023;44:106-123. PMID: 37122881. PMCID: PMC10147502. DOI: 10.1055/s-0043-1767741.
Abstract
This article offers a collection of observations that highlight the value of time course data in pupillometry and points out ways in which these observations create deeper understanding of listening effort. The main message is that listening effort should be considered on a moment-to-moment basis rather than as a singular amount. A review of various studies and the reanalysis of data reveal distinct signatures of effort before a stimulus, during a stimulus, in the moments after a stimulus, and changes over whole experimental testing sessions. Collectively these observations motivate questions that extend beyond the "amount" of effort, toward understanding how long the effort lasts, and how precisely someone can allocate effort at specific points in time or reduce effort at other times. Apparent disagreements between studies are reconsidered as informative lessons about stimulus selection and the nature of pupil dilation as a reflection of decision making rather than the difficulty of sensory encoding.
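The moment-to-moment framing above can be made concrete with a minimal time-course sketch (the window boundaries, baseline interval, and synthetic trace below are illustrative assumptions, not the article's analysis): each trial's pupil trace is baseline-corrected and then summarized in separate pre-, during-, and post-stimulus windows rather than as a single peak value.

```python
import numpy as np

def windowed_effort(trace, t, baseline=(-0.5, 0.0), windows=None):
    """Baseline-correct one pupil trace and summarize it per time window.

    trace : pupil-size samples; t : sample times in seconds
    (0 = stimulus onset). Window boundaries are illustrative only.
    """
    trace, t = np.asarray(trace, float), np.asarray(t, float)
    base = trace[(t >= baseline[0]) & (t < baseline[1])].mean()
    corrected = trace - base
    if windows is None:
        windows = {"pre": (-0.5, 0.0), "during": (0.0, 2.0), "post": (2.0, 4.0)}
    return {name: corrected[(t >= lo) & (t < hi)].mean()
            for name, (lo, hi) in windows.items()}

# Synthetic trace: flat baseline, dilation during the stimulus, partial decay after
t = np.arange(-0.5, 4.0, 0.1)
trace = 3.0 + 0.2 * ((t >= 0) & (t < 2)) + 0.1 * (t >= 2)
print(windowed_effort(trace, t))
```

Reporting the three window means separately is what lets the "amount" question give way to questions about when effort is deployed and when it is released.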
Affiliation(s)
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota
7. Niikuni K, Wang M, Makuuchi M, Koizumi M, Kiyama S. Pupil Dilation Reflects Emotional Arousal Via Poetic Language. Percept Mot Skills 2022;129:1691-1708. PMID: 36151717. PMCID: PMC9947723. DOI: 10.1177/00315125221126778.
Abstract
We investigated pupillary responses to the world's shortest fixed verses, Japanese haiku as aesthetic poetry (AP) and senryu as comic poetry (CP), in comparison with non-poetry control stimuli (NP) composed of slogans that had the same rhythm patterns. Native Japanese speakers without literary training listened to these stimuli while we recorded their pupil diameters. We found that participants' pupils were significantly dilated for CP compared to NP in an early time window. While AP also evoked larger dilations than NP, the latency for AP-related pupil dilation was relatively long. Thus, lay people experience quick and intense arousal in response to funny and humorous words, while aesthetic properties of words may also elicit intense but slower changes in listeners' arousal levels, presumably because they evoke more implicit and subtle emotional effects. This study is the first to provide evidence that poetic language elicits human pupillary dilation. A better understanding of the cognitive and neural substrates for the sensitive awareness of pleasures expressed via poetic language will provide insights for improving mental and physical health. Hence, pupillometry can act as a useful and convenient measure to delineate the sympathetic activation of emotional contexts via language.
Affiliation(s)
- Keiyu Niikuni: Department of Clinical Psychology, Niigata Seiryo University, 1-5939 Suidocho, Chuo-ku, Niigata 951-8121, Japan
- Ming Wang: Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
- Michiru Makuuchi: Section of Neuropsychology, National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
- Masatoshi Koizumi: Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan; National Institute for Japanese Language and Linguistics, Tokyo, Japan
- Sachiko Kiyama: Department of Linguistics, Graduate School of Arts and Letters, Tohoku University, Sendai, Japan
8. Blott LM, Hartopp O, Nation K, Rodd JM. Learning about the meanings of ambiguous words: evidence from a word-meaning priming paradigm with short narratives. PeerJ 2022;10:e14070. PMID: 36281360. PMCID: PMC9587715. DOI: 10.7717/peerj.14070.
Abstract
Fluent language comprehension requires people to rapidly activate and integrate context-appropriate word meanings. This process is challenging for meanings of ambiguous words that are comparatively lower in frequency (e.g., the "bird" meaning of "crane"). Priming experiments have shown that recent experience makes such subordinate (less frequent) word meanings more readily available at the next encounter. These experiments used lists of unconnected sentences in which each ambiguity was disambiguated locally by neighbouring words. In natural language, however, disambiguation may occur via more distant contextual cues, embedded in longer, connected communicative contexts. In the present experiment, participants (N = 51) listened to 3-sentence narratives that ended in an ambiguous prime. Cues to disambiguation were relatively distant from the prime; the first sentence of each narrative established a situational context congruent with the subordinate meaning of the prime, but the remainder of the narrative did not provide disambiguating information. Following a short delay, primed subordinate meanings were more readily available (compared with an unprimed control), as assessed by responses in a word association task related to the primed meaning. This work confirms that listeners reliably disambiguate spoken ambiguous words on the basis of cues from wider narrative contexts, and that they retain information about the outcome of these disambiguation processes to inform subsequent encounters of the same word form.
Affiliation(s)
- Lena M. Blott: Division of Psychology and Language Sciences, University College London, University of London, London, United Kingdom
- Oliver Hartopp: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Kate Nation: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Jennifer M. Rodd: Division of Psychology and Language Sciences, University College London, University of London, London, United Kingdom
9. Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021;150:4315. PMID: 34972310. PMCID: PMC8674009. DOI: 10.1121/10.0008899.
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
10. An implicit representation of stimulus ambiguity in pupil size. Proc Natl Acad Sci U S A 2021;118:2107997118. PMID: 34819369. DOI: 10.1073/pnas.2107997118.
Abstract
To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
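The entropy index described above can be sketched as the Shannon entropy of each listener's response distribution per stimulus (a generic illustration inferred from the abstract; the function name and two-alternative setup are assumptions):

```python
import math

def response_entropy(responses):
    """Shannon entropy (bits) of a list of categorical responses.

    0 bits when the listener always answers the same way
    (unambiguous stimulus); 1 bit when "up"/"down" responses
    split 50/50 across trials (maximally ambiguous).
    """
    n = len(responses)
    counts = {}
    for r in responses:
        counts[r] = counts.get(r, 0) + 1
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(response_entropy(["up"] * 10))                # 0.0
print(response_entropy(["up"] * 5 + ["down"] * 5))  # 1.0
```

A per-stimulus entropy computed this way can then enter a regression as a predictor of trial-wise pupil size, which is the role it plays in the study's analysis.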
11.
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks in which specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size in 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, the position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Affiliation(s)
- Matthew B. Winn: University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States