1. Fry R, Tanaka J, Cohan S, Wilmer J, Germine LT, DeGutis J. Effects of age on face perception: Reduced eye region discrimination ability but intact holistic processing. Psychol Aging 2023;38:548-561. PMID: 37589691; PMCID: PMC10521214; DOI: 10.1037/pag0000759.
Abstract
While age-related decline in face recognition memory is well-established, the degree of decline in face perceptual abilities across the lifespan and the underlying mechanisms are incompletely characterized. In the present study, we used the part-whole task to examine lifespan changes in holistic and featural processing. After studying an intact face, participants are tested for memory of a face part (eyes, nose, mouth) with the target and foil part presented either in isolation or in the context of the whole face. To the extent that parts are encoded into a holistic face representation, an advantage is expected for part recognition when tested in the whole face condition. The task therefore provides measures of holistic processing (whole-over-isolated-part trial advantage) and featural processing for each part when tested in isolation. Using a large sample of 3,341 online participants aged 18-69 years, we found that while discrimination of the eye region decreased beginning in the 50s, both mouth discrimination accuracy and the holistic advantage of whole versus part trial discrimination were stable with age. In separate analyses by gender, we found that age-related declines in eye region accuracy were more pronounced in males than females. We discuss potential mechanistic explanations for this eye region-specific decline with age, including age-related hearing loss directing attention toward the mouth. Further, we discuss how this could be related to the age-related positivity effect, which is associated with reduced sensitivity to eye-related emotions (e.g., anger) but preserved mouth-related emotion sensitivity (e.g., happiness).
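To make the two part-whole measures concrete, here is a minimal sketch of how featural accuracy and the holistic whole-over-part advantage could be computed per feature. The per-trial accuracy arrays and dictionary layout are assumptions for illustration, not the authors' analysis code:

```python
import numpy as np

# Hypothetical per-trial accuracies (1 = correct), keyed by feature and
# test condition; real data would come from the part-whole task itself.
results = {
    ("eyes", "isolated"):  np.array([1, 0, 1, 1]),
    ("eyes", "whole"):     np.array([1, 1, 1, 1]),
    ("mouth", "isolated"): np.array([0, 1, 1, 0]),
    ("mouth", "whole"):    np.array([1, 1, 0, 1]),
}

for feature in ("eyes", "mouth"):
    part = results[(feature, "isolated")].mean()   # featural accuracy
    whole = results[(feature, "whole")].mean()
    # Holistic advantage: benefit of whole-face context for the same part.
    print(f"{feature}: featural={part:.2f}, holistic advantage={whole - part:.2f}")
```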
Affiliation(s)
- Regan Fry: Boston Attention and Learning Laboratory, VA Boston Healthcare System, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- James Tanaka: Department of Psychology, University of Victoria, Victoria, British Columbia, Canada
- Sarah Cohan: Vision Sciences Laboratory, Department of Psychology, Harvard University, USA; Division of Chronic Disease Research Across the Lifecourse, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
- Jeremy Wilmer: Department of Psychology, Wellesley College, Wellesley, MA, USA
- Laura T. Germine: Department of Psychiatry, Harvard Medical School, Boston, MA, USA; Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA
- Joseph DeGutis: Boston Attention and Learning Laboratory, VA Boston Healthcare System, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA
2. Xiu B, Paul BT, Chen JM, Le TN, Lin VY, Dimitrijevic A. Neural responses to naturalistic audiovisual speech are related to listening demand in cochlear implant users. Front Hum Neurosci 2022;16:1043499. DOI: 10.3389/fnhum.2022.1043499.
Abstract
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences between clinical and “real-world” listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and generally occurs in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine whether brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalography (EEG) while CI users listened to and watched a naturalistic stimulus (the television show “The Office”). We used continuous EEG to quantify “speech neural tracking” (via temporal response functions, TRFs) of the show’s soundtrack and 8–12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise was presented at three signal-to-noise ratios (SNRs: +5, +10, and +15 dB) to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree to which they felt they understood words and conversations. Fifteen CI users reported progressively higher listening demand and lower word and conversation understanding with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those in the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at the group level, in addition to eliciting strong individual differences. Mixed-effects modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs. In the high-noise condition, greater listening demand was negatively correlated with parietal alpha power; that is, higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha measures and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures such as self-perceived listening demand.
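For readers unfamiliar with speech neural tracking, the sketch below shows one common way a TRF can be estimated: time-lagged ridge regression from the speech envelope to a single EEG channel. This is a generic illustration under assumed inputs (sampling rate, envelope, channel data), not the pipeline used in the paper; the function and parameter names are hypothetical:

```python
import numpy as np

def estimate_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, ridge=1e3):
    """Estimate a temporal response function (TRF) from a speech envelope
    to one EEG channel via time-lagged ridge regression."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Lagged design matrix: one column per lag (np.roll wraps at the
    # edges; a real pipeline would trim or zero-pad instead).
    X = np.column_stack([np.roll(envelope, lag) for lag in lags])
    # Regularized least squares: w = (X'X + aI)^(-1) X'y.
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w  # lag times (s) and TRF weights
```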
3. Wang Q, Ma M, Huang Y, Wang X, Wang T. Impact of home literacy environment on literacy development of children with hearing loss: A mediation model. Front Psychol 2022;13:895342. DOI: 10.3389/fpsyg.2022.895342.
Abstract
Reading presents an unsolved difficulty for children with hearing loss, and research on factors influencing their literacy development is very limited. This work aimed to study the influence of the home literacy environment (HLE) on the literacy development of children with hearing loss and to explore possible mediating effects of reading interest and the parent-child relationship. A total of 112 Chinese children with hearing loss were surveyed with scales measuring HLE, literacy development, reading interest, and the parent-child relationship. Analyses showed that HLE significantly predicted the literacy development of children with hearing loss, and that this effect was no longer significant once reading interest and the parent-child relationship were included as mediators. Further, HLE significantly predicted reading interest and the parent-child relationship, each of which predicted literacy development and played a significant mediating role in HLE’s influence on literacy development. These findings provide educational guidance for families of children with hearing loss.
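The mediation logic reported here (HLE → reading interest / parent-child relationship → literacy) is commonly tested with a bootstrapped indirect effect. Below is a minimal single-mediator sketch, assuming plain column vectors for predictor, mediator, and outcome; it is not the authors' analysis, which likely used dedicated SEM/mediation software:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect for one mediator: a is the slope of m on x;
    b is the slope of y on m controlling for x (plain OLS via lstsq)."""
    ones = np.ones_like(x, dtype=float)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    # The indirect effect is deemed significant if the CI excludes zero.
    return np.percentile(estimates, [2.5, 97.5])
```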
4. Audio-visual integration in noise: Influence of auditory and visual stimulus degradation on eye movements and perception of the McGurk effect. Atten Percept Psychophys 2020;82:3544-3557. PMID: 32533526; PMCID: PMC7788022; DOI: 10.3758/s13414-020-02042-x.
Abstract
Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signals affects AV integration. Where people look on the face in this context is also of interest; Buchan, Paré, and Munhall (Brain Research, 1242, 162–171, 2008) found that fixations on the mouth increased in the presence of auditory noise, whilst Wilson, Alsius, Paré, and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found that mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech; in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials in which the McGurk effect was perceived. Adding auditory noise led people to fixate the mouth more, while visual degradation led people to fixate the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables vary according to the quality of the information available.
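As background for the vocoded-speech condition, the sketch below illustrates the general idea of a noise vocoder used to simulate cochlear implant input: the signal is split into frequency bands, each band's envelope is extracted, and the envelopes modulate band-limited noise. The band count, filter order, and edge frequencies here are arbitrary assumptions, not the parameters used in the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_bands=8, lo=100.0, hi=8000.0):
    """Crude noise vocoder: split speech into log-spaced bands, extract
    each band's amplitude envelope, and use it to modulate noise
    confined to the same band."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    out = np.zeros(len(signal), dtype=float)
    noise = np.random.randn(len(signal))
    for lo_f, hi_f in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo_f, hi_f], btype="band", fs=fs)
        band = filtfilt(b, a, signal)
        env = np.abs(hilbert(band))      # amplitude envelope of the band
        carrier = filtfilt(b, a, noise)  # noise limited to the same band
        out += env * carrier
    return out / np.max(np.abs(out))     # normalize to avoid clipping
```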
5. Schotter ER, Johnson E, Lieberman AM. The sign superiority effect: Lexical status facilitates peripheral handshape identification for deaf signers. J Exp Psychol Hum Percept Perform 2020;46:1397-1410. PMID: 32940493; PMCID: PMC7887614; DOI: 10.1037/xhp0000862.
Abstract
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static American Sign Language (ASL) fingerspelled letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Affiliation(s)
- Emily Johnson: Department of Psychology, University of South Florida
- Amy M Lieberman: Wheelock College of Education and Human Development, Boston University
6. Wang J, Zhu Y, Chen Y, Mamat A, Yu M, Zhang J, Dang J. An eye-tracking study on audiovisual speech perception strategies adopted by normal-hearing and deaf adults under different language familiarities. J Speech Lang Hear Res 2020;63:2245-2254. PMID: 32579867; DOI: 10.1044/2020_jslhr-19-00223.
Abstract
Purpose: The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies owing to differences in early sensory experience, limitations of assistive devices, gaps in language development, and other factors. Method: Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur-Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded with eye-tracking technology. Results: Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing and deaf participants mainly gazed at the speaker's eyes and mouth, respectively; moreover, while the normal-hearing participants stared longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions: Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
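Eyes-versus-mouth comparisons in eye-tracking studies of this kind typically reduce to the proportion of fixation time falling in each area of interest (AOI). Here is a minimal sketch with hypothetical rectangular AOIs and fixation arrays, not the authors' analysis code:

```python
import numpy as np

def dwell_proportions(fix_x, fix_y, durations, aois):
    """Proportion of total fixation time spent in each rectangular AOI.
    aois: e.g. {"eyes": (x0, y0, x1, y1), "mouth": (x0, y0, x1, y1)}
    in screen coordinates; fix_x/fix_y/durations are per-fixation arrays."""
    total = durations.sum()
    props = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
        props[name] = durations[inside].sum() / total
    return props
```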
Affiliation(s)
- Jianrong Wang: Tianjin Key Laboratory of Cognitive Computing and Application, China; College of Intelligence and Computing, Tianjin University, China
- Yumeng Zhu: Tianjin Key Laboratory of Cognitive Computing and Application, China; College of Intelligence and Computing, Tianjin University, China
- Yu Chen: Tianjin Key Laboratory of Cognitive Computing and Application, China; Technical College for the Deaf, Tianjin University of Technology, China
- Abdilbar Mamat: Institute of Physical Education, Hotan Teacher's College, China
- Mei Yu: Tianjin Key Laboratory of Cognitive Computing and Application, China; College of Intelligence and Computing, Tianjin University, China
- Ju Zhang: Tianjin Key Laboratory of Cognitive Computing and Application, China; College of Intelligence and Computing, Tianjin University, China
- Jianwu Dang: Tianjin Key Laboratory of Cognitive Computing and Application, China; College of Intelligence and Computing, Tianjin University, China
7. Bosworth R, Stone A, Hwang SO. Effects of video reversal on gaze patterns during signed narrative comprehension. J Deaf Stud Deaf Educ 2020;25:283-297. PMID: 32427289; PMCID: PMC7260695; DOI: 10.1093/deafed/enaa007.
Abstract
Language knowledge, age of acquisition (AoA), and stimulus intelligibility all affect gaze behavior for reading print, but it is unknown how these factors affect "sign-watching" among signers. This study investigated how these factors affect gaze behavior during sign language comprehension in 52 adult signers who acquired American Sign Language (ASL) at different ages. We examined gaze patterns and story comprehension in four subject groups who differ in hearing status and when they learned ASL (i.e., Deaf Early, Deaf Late, Hearing Late, and Hearing Novice). Participants watched signed stories in normal (high intelligibility) and video-reversed (low intelligibility) conditions. This video manipulation was used because it distorts word order and thus disrupts the syntax and semantic content of narratives, while preserving most surface phonological features of individual signs. Video reversal decreased story comprehension accuracy, and this effect was greater for those who learned ASL later in life. Reversal also was associated with more dispersed gaze behavior. Although each subject group had unique gaze patterns, the effect of video reversal on gaze measures was similar across all groups. Among fluent signers, gaze behavior was not correlated with AoA, suggesting that "efficient" sign watching can be quickly learnt even among signers exposed to signed language later in life.
Affiliation(s)
- Rain Bosworth: National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, New York
- Adam Stone: Department of Psychology, University of California, San Diego
- So-One Hwang: Center for Research in Language, University of California, San Diego
8. Sign language experience redistributes attentional resources to the inferior visual field. Cognition 2019;191:103957. PMID: 31255921; DOI: 10.1016/j.cognition.2019.04.026.
Abstract
While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural: they are produced using the body and perceived via the human visual system. Signers fixate upon the face of interlocutors and do not look directly at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to the inferior and superior visual fields in signers, both deaf and hearing, in a visual search task. Using a Bayesian hierarchical drift diffusion model, we estimated decision-making parameters for the superior and inferior visual fields in deaf signers, hearing signers, and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggestive of either a role for extent of exposure or greater plasticity of the visual system in the deaf. The data provide support for a process by which the demands of linguistic processing can influence the human attentional system.
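For context, the drift diffusion model treats a decision as noisy evidence accumulation toward one of two boundaries. The paper fits a Bayesian hierarchical version (e.g., with a package such as HDDM); the sketch below only simulates the basic generative process, with an unbiased starting point and arbitrary parameter values assumed for illustration:

```python
import numpy as np

def simulate_ddm(v, a, t0, n_trials=1000, dt=0.001, noise=1.0, seed=0):
    """Simulate a basic drift diffusion model: evidence starts at a/2 and
    drifts at rate v until hitting 0 or the boundary a. Returns response
    times (including non-decision time t0) and choices (1 = upper, 0 = lower)."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            # Euler step of the diffusion: drift plus Gaussian noise.
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t0 + t)
        choices.append(int(x >= a))
    return np.array(rts), np.array(choices)
```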
9. Mastrantuono E, Saldaña D, Rodríguez-Ortiz IR. Inferencing in deaf adolescents during sign-supported speech comprehension. Discourse Process 2019. DOI: 10.1080/0163853x.2018.1490133.
Affiliation(s)
- Eliana Mastrantuono: Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Sevilla, Spain
- David Saldaña: Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Sevilla, Spain
10. Deaf signers outperform hearing non-signers in recognizing happy facial expressions. Psychol Res 2019;84:1485-1494. DOI: 10.1007/s00426-019-01160-y.
11. Facial perception of infants with cleft lip and palate with/without the NAM appliance. J Orofac Orthop 2018;79:380-388. DOI: 10.1007/s00056-018-0157-x.