1
Stosic MD, Helwig S, Ruben MA. More Than Meets the Eyes: Bringing Attention to the Eyes Increases First Impressions of Warmth and Competence. Pers Soc Psychol Bull 2024; 50:253-269. [PMID: 36259443] [DOI: 10.1177/01461672221128114]
Abstract
The present research examined how face masks alter first impressions of warmth and competence for different racial groups. Participants were randomly assigned to view photographs of White, Black, and Asian targets with or without masks. Across four separate studies (total N = 1,012), masked targets were rated significantly higher in warmth and competence compared with unmasked targets, regardless of their race. However, Asian targets benefited the least from being seen masked compared with Black or White targets. Studies 3 and 4 demonstrate that the positive effect of masks is likely due to masks redirecting attention toward the eyes of the wearer. Participants viewing faces cropped to the eyes (Study 3), or instructed to gaze into the eyes of faces (Study 4), rated these targets similarly to masked targets, and higher than unmasked targets. Neither political affiliation, belief in mask effectiveness, nor explicit racial prejudice moderated any hypothesized effects.
Affiliation(s)
- Shelby Helwig
- The University of Maine, Orono, ME, USA
- Husson University, Bangor, ME, USA
- Mollie A Ruben
- The University of Maine, Orono, ME, USA
- The University of Rhode Island, Kingston, RI, USA
2
Arioli M, Segatta C, Papagno C, Tettamanti M, Cattaneo Z. Social perception in deaf individuals: A meta-analysis of neuroimaging studies. Hum Brain Mapp 2023; 44:5402-5415. [PMID: 37609693] [PMCID: PMC10543108] [DOI: 10.1002/hbm.26444]
Abstract
Deaf individuals may report difficulties in social interactions. However, whether these difficulties depend on deafness affecting social brain circuits is controversial. Here, we report the first meta-analysis comparing brain activations of hearing and (prelingually) deaf individuals during social perception. Our findings showed that deafness does not impact the functional mechanisms supporting social perception. Indeed, both deaf and hearing control participants recruited regions of the action observation network during performance of different social tasks employing visual stimuli, including biological motion perception; face identification; action observation; viewing, identification, and memory for signs; and lip reading. Moreover, we found increased recruitment of the superior-middle temporal cortex in deaf individuals compared with hearing participants, suggesting a preserved and augmented function during social communication based on signs and lip movements. Overall, our meta-analysis suggests that social difficulties experienced by deaf individuals are unlikely to be associated with brain alterations but may rather depend on non-supportive environments.
Affiliation(s)
- Maria Arioli
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Cecilia Segatta
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Costanza Papagno
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Zaira Cattaneo
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- IRCCS Mondino Foundation, Pavia, Italy
3
Yang J, Chen Z, Qiu G, Li X, Li C, Yang K, Chen Z, Gao L, Lu S. Exploring the relationship between children's facial emotion processing characteristics and speech communication ability using deep learning on eye tracking and speech performance measures. Comput Speech Lang 2022. [DOI: 10.1016/j.csl.2022.101389]
4
Lau WK, Chalupny J, Grote K, Huckauf A. How sign language expertise can influence the effects of face masks on non-linguistic characteristics. Cogn Res Princ Implic 2022; 7:53. [PMID: 35737184] [PMCID: PMC9219384] [DOI: 10.1186/s41235-022-00405-6]
Abstract
Face masks occlude parts of the face, which hinders social communication and emotion recognition. Since sign language users are known to process facial information not only perceptually but also linguistically, examining face processing in deaf signers may reveal how linguistic aspects add to perceptual information. In general, signers could be born deaf or acquire hearing loss later in life. For this study, we focused on signers who were born deaf. Specifically, we analyzed data from a sample of 59 signers who were born deaf and investigated the impacts of face masks on non-linguistic characteristics of the face. Signers rated still-image faces with and without face masks for the following characteristics: arousal and valence of three facial expressions (happy, neutral, sad), invariant characteristics (sex, age), and trait-like characteristics (attractiveness, trustworthiness, approachability). Results indicated that, compared with masked faces, signers rated unmasked faces with stronger valence intensity across all expressions. Masked faces also appeared older, albeit with a tendency to look more approachable. This experiment replicated a previous study conducted with hearing participants, and a post hoc comparison was performed to assess rating differences between signers and hearing people. In this comparison, signers showed a larger tendency to rate facial expressions more intensely than hearing people did. This suggests that deaf people perceive more intense information from facial expressions, and that face masks are more inhibiting for deaf people than for hearing people. We speculate that deaf people found masked faces more approachable due to societal norms when interacting with people wearing masks. Other factors, such as age and the legitimacy of the face database, are discussed.
Affiliation(s)
- Wee Kiat Lau
- General Psychology, Institute of Psychology and Pedagogics, Ulm University, Albert-Einstein-Allee 47, 89081 Ulm, Germany
- Jana Chalupny
- Regionalstelle Bad Nauheim, Autismus-Therapieinstitut Langen, Karlstraße 57-59, 61231 Bad Nauheim, Germany
- Klaudia Grote
- Competence Centre for Sign Language and Gesture (SignGes), RWTH Aachen, Theaterplatz 14, 52062 Aachen, Germany
- Anke Huckauf
- General Psychology, Institute of Psychology and Pedagogics, Ulm University, Albert-Einstein-Allee 47, 89081 Ulm, Germany
5
Tsou YT, Li B, Kret ME, Frijns JHM, Rieffe C. Hearing Status Affects Children's Emotion Understanding in Dynamic Social Situations: An Eye-Tracking Study. Ear Hear 2021; 42:1024-1033. [PMID: 33369943] [PMCID: PMC8221710] [DOI: 10.1097/aud.0000000000000994]
Abstract
OBJECTIVES For children to understand the emotional behavior of others, the first two steps involve emotion encoding and emotion interpreting, according to the Social Information Processing model. Access to daily social interactions is a prerequisite for a child to acquire these skills, and barriers to communication such as hearing loss impede this access. Therefore, it could be challenging for children with hearing loss to develop these two skills. The present study aimed to understand the effect of prelingual hearing loss on children's emotion understanding, by examining how they encode and interpret nonverbal emotional cues in dynamic social situations. DESIGN Sixty deaf or hard-of-hearing (DHH) children and 71 typically hearing (TH) children (3-10 years old, mean age 6.2 years, 54% girls) watched videos of prototypical social interactions between a target person and an interaction partner. At the end of each video, the target person did not face the camera, rendering their facial expressions out of view to participants. Afterward, participants were asked to interpret the emotion they thought the target person felt at the end of the video. As participants watched the videos, their encoding patterns were examined by an eye tracker, which measured the amount of time participants spent looking at the target person's head and body and at the interaction partner's head and body. These regions were preselected for analyses because they had been found to provide cues for interpreting people's emotions and intentions. RESULTS When encoding emotional cues, both the DHH and TH children spent more time looking at the head of the target person and at the head of the interaction partner than they spent looking at the body or actions of either person. Yet, compared with the TH children, the DHH children looked at the target person's head for a shorter time (b = -0.03, p = 0.030), and at the target person's body (b = 0.04, p = 0.006) and at the interaction partner's head (b = 0.03, p = 0.048) for a longer time. The DHH children were also less accurate when interpreting emotions than their TH peers (b = -0.13, p = 0.005), and their lower scores were associated with their distinctive encoding pattern. CONCLUSIONS The findings suggest that children with limited auditory access to the social environment tend to collect visually observable information to compensate for ambiguous emotional cues in social situations. These children may have developed this strategy to support their daily communication. Yet, to fully benefit from such a strategy, these children may need extra support for gaining better social-emotional knowledge.
Affiliation(s)
- Yung-Ting Tsou
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Boya Li
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Mariska E. Kret
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Johan H. M. Frijns
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Carolien Rieffe
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Department of Psychology and Human Development, Institute of Education, University College London, London, UK
6
The signing body: extensive sign language practice shapes the size of hands and face. Exp Brain Res 2021; 239:2233-2249. [PMID: 34028597] [PMCID: PMC8282562] [DOI: 10.1007/s00221-021-06121-9]
Abstract
The representation of the metrics of the hands is distorted, but is malleable through expert dexterity (magicians) and long-term tool use (baseball players). However, it remains unclear whether modulation leads to a stable representation of the hand that is adopted in every circumstance, or whether the modulation is closely linked to the spatial context where the expertise occurs. To this aim, a group of 10 experienced Sign Language (SL) interpreters was recruited to study the selective influence of expertise and space localisation on the metric representation of the hands. Experiment 1 explored differences in hand size representation between the SL interpreters and 10 age-matched controls in near-reaching (Condition 1) and far-reaching (Condition 2) space, using a localisation task. SL interpreters presented reduced hand size in the near-reaching condition, with characteristic underestimation of finger lengths, and reduced overestimation of hand and wrist widths in comparison with controls. This difference was lost in far-reaching space, confirming that the effect of expertise on hand representations is closely linked to the spatial context where an action is performed. As SL interpreters are also experts in the use of their face for communicative purposes, the effects of expertise on the metrics of the face were also studied (Experiment 2). SL interpreters were more accurate than controls, with an overall reduction of width overestimation. Overall, expertise modifies the representation of relevant body parts in a specific and context-dependent manner. Hence, different representations of the same body part can coexist simultaneously.
7
Lee KR, Groesbeck E, Gwinn OS, Webster MA, Jiang F. Enhanced peripheral face processing in deaf individuals. J Percept Imaging 2021; 5:jpi0140. [PMID: 35434528] [PMCID: PMC9007248] [DOI: 10.2352/j.percept.imaging.2022.5.000401]
Abstract
Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American Sign Language (ASL) and not necessarily by hearing loss. This, combined with the fact that face processing abilities typically decline with eccentricity, suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, we examined whether deaf individuals' enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Our results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.
Affiliation(s)
- O Scott Gwinn
- College of Education, Psychology, and Social Work, Flinders University, Adelaide, Australia
- Fang Jiang
- Department of Psychology, University of Nevada, Reno
8
Shalev T, Schwartz S, Miller P, Hadad BS. Do deaf individuals have better visual skills in the periphery? Evidence from processing facial attributes. Vis Cogn 2020. [DOI: 10.1080/13506285.2020.1770390]
Affiliation(s)
- Tal Shalev
- Department of Special Education, University of Haifa, Haifa, Israel
- Sivan Schwartz
- Department of Psychology, University of Haifa, Haifa, Israel
- Paul Miller
- Department of Special Education, University of Haifa, Haifa, Israel
- Bat-Sheva Hadad
- Department of Special Education, University of Haifa, Haifa, Israel
- Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
9
Krejtz I, Krejtz K, Wisiecka K, Abramczyk M, Olszanowski M, Duchowski AT. Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals. J Deaf Stud Deaf Educ 2020; 25:10-21. [PMID: 31665493] [DOI: 10.1093/deafed/enz036]
Abstract
The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient-focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that morphed, over 10-s animations, from a neutral expression to one of happiness, sadness, or anger. The task was to recognize the emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were recognized more accurately and more quickly than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
Affiliation(s)
- Izabela Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Krzysztof Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Michał Olszanowski
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
10
Zeni S, Laudanna I, Baruffaldi F, Heimler B, Melcher D, Pavani F. Increased overt attention to objects in early deaf adults: An eye-tracking study of complex naturalistic scenes. Cognition 2019; 194:104061. [PMID: 31514103] [DOI: 10.1016/j.cognition.2019.104061]
Abstract
The study of selective attention in people with profound deafness has repeatedly documented enhanced attention to the peripheral regions of the visual field compared to hearing controls. This finding emerged from covert attention studies (i.e., without eye movements) involving extremely simplified visual scenes comprising only a few visual items. In this study, we aimed to test whether this key finding extends to overt attention, using a more ecologically valid experimental context in which complex naturalistic images were presented for 3 s. In Experiment 1 (N = 35), all images contained a single central object superimposed on a congruent naturalistic background (e.g., a tiger in the woods). At the end of the visual exploration phase, an incidental memory task probed the participants' recollection of the seen central objects and image backgrounds. Results showed that hearing controls explored and remembered the image backgrounds more than deaf participants, who lingered on the central object to a greater extent. In Experiment 2 we aimed to disentangle whether this behaviour of deaf participants reflected a bias in overt space-based attention toward the centre of the image or, instead, enhanced object-centred attention. We tested new participants (N = 42) in the visual exploration task, adding images with lateralized objects, as well as images with multiple objects or without any object. Results confirmed increased exploration of objects in deaf participants. Taken together, our novel findings show limitations of the well-known peripheral attention bias of deaf people and suggest that visual object-centred attention may also change after prolonged auditory deprivation.
Affiliation(s)
- Silvia Zeni
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; School of Psychology, University of Nottingham, UK
- Irene Laudanna
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Science, University of Trento, Italy
- Benedetta Heimler
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- David Melcher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Science, University of Trento, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Science, University of Trento, Italy; Integrative Multisensory Perception Action & Cognition Team, CRNL, France
11
Mastrantuono E, Burigo M, Rodríguez-Ortiz IR, Saldaña D. The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing. J Speech Lang Hear Res 2019; 62:1625-1656. [PMID: 31095442] [DOI: 10.1044/2019_jslhr-s-17-0433]
Abstract
Purpose The use of sign-supported speech (SSS) in the education of deaf students has recently been discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying the perceptual accessibility of lip movements (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions All participants, even those with residual hearing, rely on signs when attending to SSS, either peripherally or through overt attention, depending on the perceptual conditions. Supplemental Material https://doi.org/10.23641/asha.8121191.
Affiliation(s)
- Eliana Mastrantuono
- Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Spain
- Michele Burigo
- Cognitive Interaction Technology, University of Bielefeld, Germany
- David Saldaña
- Departamento de Psicología Evolutiva y de la Educación, Universidad de Sevilla, Spain
12
Deaf signers outperform hearing non-signers in recognizing happy facial expressions. Psychol Res 2019; 84:1485-1494. [DOI: 10.1007/s00426-019-01160-y]
13
Ferrari C, Papagno C, Todorov A, Cattaneo Z. Differences in Emotion Recognition From Body and Face Cues Between Deaf and Hearing Individuals. Multisens Res 2019; 32:499-519. [PMID: 31117046] [DOI: 10.1163/22134808-20191353]
Abstract
Deaf individuals may compensate for the lack of auditory input by showing enhanced capacities in certain visual tasks. Here we assessed whether this also applies to recognition of emotions expressed by bodily and facial cues. In Experiment 1, we compared deaf participants and hearing controls in a task measuring recognition of the six basic emotions expressed by actors in a series of video clips in which either the face, the body, or both the face and body were visible. In Experiment 2, we measured the weight of body and face cues in conveying emotional information when intense genuine emotions are expressed, a situation in which facial expressions alone may have ambiguous valence. We found that deaf individuals were better at identifying disgust and fear from body cues (Experiment 1) and at integrating face and body cues in the case of intense genuine negative emotions (Experiment 2). Our findings support the capacity of deaf individuals to compensate for the lack of auditory input by enhancing perceptual and attentional capacities in the spared modalities, and show that this capacity extends to the affective domain.
Affiliation(s)
- Chiara Ferrari
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy
- Costanza Papagno
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy; CeRiN and CIMeC, University of Trento, Rovereto 38068, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy; IRCCS Mondino Foundation, Pavia, Italy
14
Kawakami K, Friesen J, Vingilis-Jaremko L. Visual attention to members of own and other groups: Preferences, determinants, and consequences. Soc Personal Psychol Compass 2018. [DOI: 10.1111/spc3.12380]
15
Stoll C, Palluel-Germain R, Caldara R, Lao J, Dye MWG, Aptel F, Pascalis O. Face Recognition is Shaped by the Use of Sign Language. J Deaf Stud Deaf Educ 2018; 23:62-70. [PMID: 28977622] [DOI: 10.1093/deafed/enx034]
Abstract
Previous research has suggested that early deaf signers differ in face processing. However, which aspects of face processing are changed, and the role that sign language may have played in that change, remain unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed-accuracy trade-off in face recognition (but not in face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.
Affiliation(s)
- Matthew W G Dye
- National Technical Institute for the Deaf, Rochester Institute of Technology
16
Wang Y, Zhou W, Cheng Y, Bian X. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children. Front Psychol 2017; 8:2281. [PMID: 29312104] [PMCID: PMC5743909] [DOI: 10.3389/fpsyg.2017.02281]
Abstract
This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye contact pattern with others and difficulty matching visual and vocal cues in emotion perception. The negative causes and effects of these gaze patterns should be addressed in early rehabilitation for hearing-impaired children with assistive devices.
Affiliation(s)
- Yifang Wang
- School of Psychology, Capital Normal University, Beijing, China
- Wei Zhou
- School of Psychology, Capital Normal University, Beijing, China
- Xiaoying Bian
- School of Psychology, Capital Normal University, Beijing, China
17
Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness. Hear Res 2017; 354:64-72. [DOI: 10.1016/j.heares.2017.08.007]
18
Dole M, Méary D, Pascalis O. Modifications of Visual Field Asymmetries for Face Categorization in Early Deaf Adults: A Study With Chimeric Faces. Front Psychol 2017; 8:30. [PMID: 28163692] [PMCID: PMC5247456] [DOI: 10.3389/fpsyg.2017.00030]
Abstract
Right hemisphere lateralization for face processing is well documented in typical populations. At the behavioral level, this right hemisphere bias is often related to a left visual field (LVF) bias. A conventional means of studying this phenomenon is to use chimeric faces composed of the left and right halves of two different faces. In this paradigm, participants generally use the left part of the chimeric face, mostly processed through the right optic tract, to determine its identity, gender, or age. To assess the impact of early auditory deprivation on face processing abilities, we tested the LVF bias in a group of early deaf participants and hearing controls. In two experiments, deaf and hearing participants performed a gender categorization task with chimeric and normal average faces. Over the two experiments, the results confirmed the presence of an LVF bias, which was less frequent in deaf participants. This result suggests modifications of hemispheric lateralization for face processing in deaf participants. In Experiment 2 we also recorded eye movements to examine whether the LVF bias could be related to face scanning behavior. In this second study, participants performed a similar task while we recorded eye movements using an eye tracking system. Using areas-of-interest analysis, we observed that the proportion of fixations on the mouth, relative to the other areas, was increased in deaf participants in comparison with the hearing group. This was associated with a decrease in the proportion of fixations on the eyes. In addition, these measures were correlated with the LVF bias, suggesting a relationship between the LVF bias and patterns of facial exploration. Taken together, these results suggest that early auditory deprivation results in plasticity phenomena affecting the perception of static faces through modifications of hemispheric lateralization and gaze behavior.
Affiliation(s)
- Marjorie Dole
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France; Gipsa-Lab, Département Parole et Cognition, CNRS UMR 5216, Université Grenoble-Alpes, Grenoble, France
- David Méary
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France
- Olivier Pascalis
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France
19
Mitchell TV. Category selectivity of the N170 and the role of expertise in deaf signers. Hear Res 2016; 343:150-161. [PMID: 27770622] [DOI: 10.1016/j.heares.2016.10.010]
Abstract
Deafness is known to affect the processing of visual motion and of information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms, including upright faces, inverted faces, and hands, were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry, while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggest that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
Affiliation(s)
- Teresa V Mitchell
- Eunice Kennedy Shriver Center, University of Massachusetts Medical School, Worcester, MA, USA; Brandeis University, Waltham, MA, USA
20
He H, Xu B, Tanaka J. Investigating the face inversion effect in a deaf population using the Dimensions Tasks. Vis Cogn 2016. [DOI: 10.1080/13506285.2016.1221488]
21
Campbell J, Sharma A. Visual Cross-Modal Re-Organization in Children with Cochlear Implants. PLoS One 2016; 11:e0147793. [PMID: 26807850] [PMCID: PMC4726603] [DOI: 10.1371/journal.pone.0147793]
Abstract
BACKGROUND Visual cross-modal re-organization is a neurophysiological process that occurs in deafness. The intact sensory modality of vision recruits cortical areas from the deprived sensory modality of audition. Such compensatory plasticity is documented in deaf adults and animals, and is related to deficits in speech perception performance in cochlear-implanted adults. However, it is unclear whether visual cross-modal re-organization takes place in cochlear-implanted children and whether it may be a source of variability contributing to speech and language outcomes. Thus, the aim of this study was to determine whether visual cross-modal re-organization occurs in cochlear-implanted children, and whether it is related to deficits in speech perception performance. METHODS Visual evoked potentials (VEPs) were recorded via high-density EEG in 41 normal hearing children and 14 cochlear-implanted children, aged 5-15 years, in response to apparent motion and form change. Comparisons of VEP amplitude and latency, as well as source localization results, were conducted between the groups in order to assess evidence of visual cross-modal re-organization. Finally, speech perception in background noise performance was correlated with the visual response in the implanted children. RESULTS Distinct VEP morphological patterns were observed in both the normal hearing and cochlear-implanted children. However, the cochlear-implanted children demonstrated larger VEP amplitudes and earlier latencies, concurrent with activation of right temporal cortex including auditory regions, suggestive of visual cross-modal re-organization. The VEP N1 latency was negatively related to speech perception in background noise for children with cochlear implants. CONCLUSION Our results are among the first to describe cross-modal re-organization of auditory cortex by the visual modality in deaf children fitted with cochlear implants. Our findings suggest that, as a group, children with cochlear implants show evidence of visual cross-modal recruitment, which may be a contributing source of variability in speech perception outcomes with their implant.
Affiliation(s)
- Julia Campbell
- Brain and Behavior Laboratory, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Institute of Cognitive Science, University of Colorado at Boulder, 344 UCB, Boulder, Colorado, 80309, United States of America
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Anu Sharma
- Brain and Behavior Laboratory, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
- Institute of Cognitive Science, University of Colorado at Boulder, 344 UCB, Boulder, Colorado, 80309, United States of America
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Road, Boulder, Colorado, 80309, United States of America
22
Dye MWG, Seymour JL, Hauser PC. Response bias reveals enhanced attention to inferior visual field in signers of American Sign Language. Exp Brain Res 2015; 234:1067-1076. [PMID: 26708522] [DOI: 10.1007/s00221-015-4530-3]
Abstract
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Affiliation(s)
- Matthew W G Dye
- Department of Liberal Studies, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623, USA
- Jenessa L Seymour
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Peter C Hauser
- Department of American Sign Language and Interpreter Education, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY 14623, USA
23
Mestre JM, Larrán C, Herrero J, Guil R, de la Torre GG. PERVALE-S: a new cognitive task to assess deaf people's ability to perceive basic and social emotions. Front Psychol 2015; 6:1148. [PMID: 26300828] [PMCID: PMC4528103] [DOI: 10.3389/fpsyg.2015.01148]
Abstract
UNLABELLED A poorly understood aspect of deaf people (DP) is how they process emotional information. Verbal ability is key to improving emotional knowledge. Nevertheless, DP are unable to distinguish the intonation, intensity, and rhythm of language due to lack of hearing. Some DP have acquired both lip-reading abilities and sign language, but others have developed only sign language. PERVALE-S was developed to assess the ability of DP to perceive both social and basic emotions. PERVALE-S presents different sets of visual images of a real deaf person expressing both basic and social emotions, according to the normative standard of emotional expressions in Spanish Sign Language. Emotional expression stimuli were presented at two different levels of intensity (1: low; 2: high) because DP do not distinguish an object in the same way as hearing people (HP) do. Participants then had to click on the most suitable emotional expression. PERVALE-S contains video instructions (given by a sign language interpreter) to improve DP's understanding of how to use the software. DP had to watch the videos before answering the items. To test PERVALE-S, a sample of 56 individuals was recruited (18 signers, 8 lip-readers, and 30 HP). Participants also performed a personality test (an adapted High School Personality Questionnaire) and a fluid intelligence (Gf) measure (RAPM). Moreover, all deaf participants were rated by four teachers of the deaf. RESULTS There were no significant differences between deaf and hearing participants in performance on PERVALE-S. Confusion matrices revealed that embarrassment, envy, and jealousy were the least accurately perceived. Age was related only to social-emotional tasks (but not to basic emotion tasks). Emotional perception ability was related mainly to warmth and consciousness, but negatively related to tension. Meanwhile, Gf was related only to social-emotional tasks. There were no gender differences.
Affiliation(s)
- José M. Mestre
- Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Cádiz, Spain
- Cristina Larrán
- Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Cádiz, Spain
- Joaquín Herrero
- Centro de Educación Especial para Sordos, Junta de Andalucía, Jerez de la Frontera, Spain
- Rocío Guil
- Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Cádiz, Spain
- Gabriel G. de la Torre
- Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Cádiz, Spain
24
Denmark T, Atkinson J, Campbell R, Swettenham J. How do typically developing deaf children and deaf children with autism spectrum disorder use the face when comprehending emotional facial expressions in British Sign Language? J Autism Dev Disord 2015; 44:2584-2592. [PMID: 24803370] [PMCID: PMC4167441] [DOI: 10.1007/s10803-014-2130-x]
Abstract
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113–118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.’s (Sign Lang Stud 75:113–118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.
Affiliation(s)
- Tanya Denmark
- Division of Psychology and Language Science, Department of Developmental Science, University College London, London, UK
25
Heimler B, van Zoest W, Baruffaldi F, Donk M, Rinaldi P, Caselli MC, Pavani F. Finding the balance between capture and control: Oculomotor selection in early deaf adults. Brain Cogn 2015; 96:12-27. [DOI: 10.1016/j.bandc.2015.03.001]
26
Campbell J, Sharma A. Cross-modal re-organization in adults with early stage hearing loss. PLoS One 2014; 9:e90594. [PMID: 24587400] [PMCID: PMC3938766] [DOI: 10.1371/journal.pone.0090594]
Abstract
Cortical cross-modal re-organization, or recruitment of auditory cortical areas for visual processing, has been well-documented in deafness. However, the degree of sensory deprivation necessary to induce such cortical plasticity remains unclear. We recorded visual evoked potentials (VEP) using high-density electroencephalography in nine persons with adult-onset mild-moderate hearing loss and eight normal hearing control subjects. Behavioral auditory performance was quantified using a clinical measure of speech perception-in-noise. Relative to normal hearing controls, adults with hearing loss showed significantly larger P1, N1, and P2 VEP amplitudes, decreased N1 latency, and a novel positive component (P2') following the P2 VEP. Current source density reconstruction of VEPs revealed a shift toward ventral stream processing including activation of auditory temporal cortex in hearing-impaired adults. The hearing loss group showed worse than normal speech perception performance in noise, which was strongly correlated with a decrease in the N1 VEP latency. Overall, our findings provide the first evidence that visual cross-modal re-organization not only begins in the early stages of hearing impairment, but may also be an important factor in determining behavioral outcomes for listeners with hearing loss, a finding which demands further investigation.
Affiliation(s)
- Julia Campbell
- University of Colorado at Boulder, Department of Speech, Language and Hearing Sciences, Boulder, Colorado, United States of America
- Anu Sharma
- University of Colorado at Boulder, Department of Speech, Language and Hearing Sciences, Boulder, Colorado, United States of America
- University of Colorado at Boulder, Institute of Cognitive Science, Boulder, Colorado, United States of America
27
Campbell J, Sharma A. Compensatory changes in cortical resource allocation in adults with hearing loss. Front Syst Neurosci 2013; 7:71. [PMID: 24478637] [PMCID: PMC3905471] [DOI: 10.3389/fnsys.2013.00071]
Abstract
Hearing loss has been linked to many types of cognitive decline in adults, including an association between hearing loss severity and dementia. However, it remains unclear whether cortical re-organization associated with hearing loss occurs in early stages of hearing decline and in early stages of auditory processing. In this study, we examined compensatory plasticity in adults with mild-moderate hearing loss using obligatory, passively-elicited, cortical auditory evoked potentials (CAEP). High-density EEG elicited by speech stimuli was recorded in adults with hearing loss and age-matched normal hearing controls. Latency, amplitude and source localization of the P1, N1, P2 components of the CAEP were analyzed. Adults with mild-moderate hearing loss showed increases in latency and amplitude of the P2 CAEP relative to control subjects. Current density reconstructions revealed decreased activation in temporal cortex and increased activation in frontal cortical areas for hearing-impaired listeners relative to normal hearing listeners. Participants' behavioral performance on a clinical test of speech perception in noise was significantly correlated with the increases in P2 latency. Our results indicate that changes in cortical resource allocation are apparent in early stages of adult hearing loss, and that these passively-elicited cortical changes are related to behavioral speech perception outcome.
Affiliation(s)
- Julia Campbell
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, USA
- Anu Sharma
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, USA; Institute of Cognitive Science, University of Colorado at Boulder, Boulder, CO, USA
28
Mitchell TV, Letourneau SM, Maslin MCT. Behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers. Restor Neurol Neurosci 2013; 31:125-139. [PMID: 23142816] [DOI: 10.3233/rnn-120233]
Abstract
PURPOSE This study examined the effects of deafness and sign language use on the distribution of attention across the top and bottom halves of faces. METHODS In a composite face task, congenitally deaf signers and typically hearing controls made same/different judgments of the top or bottom halves of faces presented with the halves aligned or spatially misaligned, while event-related potentials (ERPs) were recorded. RESULTS Both groups were more accurate when judging misaligned than aligned faces, which indicates holistic face processing. Misalignment affected all ERP components examined, with effects on the N170 resembling those of face inversion. Hearing adults were similarly accurate when judging the top and bottom halves of the faces, but deaf signers were more accurate when attending to the bottom than the top. Attending to the top elicited faster P1 and N170 latencies for both groups; within the deaf group, this effect was greatest for individuals who produced the highest accuracies when attending to the top. CONCLUSIONS These findings dovetail with previous research by providing behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers, and by documenting that these effects generalize to a speeded task, in the absence of gaze shifts, with neutral facial expressions.
Affiliation(s)
- Teresa V Mitchell
- Eunice Kennedy Shriver Center, University of Massachusetts Medical School, Waltham, MA 02452, USA
29
Letourneau SM, Mitchell TV. Visual field bias in hearing and deaf adults during judgments of facial expression and identity. Front Psychol 2013; 4:319. [PMID: 23761774] [PMCID: PMC3674475] [DOI: 10.3389/fpsyg.2013.00319]
Abstract
The dominance of the right hemisphere during face perception is associated with more accurate judgments of faces presented in the left visual field (LVF) rather than the right visual field (RVF). Previous research suggests that the LVF bias typically observed during face perception tasks is reduced in deaf adults who use sign language, for whom facial expressions convey important linguistic information. The current study examined whether visual field biases were altered in deaf adults whenever they viewed expressive faces, or only when attention was explicitly directed to expression. Twelve hearing adults and 12 deaf signers were trained to recognize a set of novel faces posing various emotional expressions. They then judged the familiarity or emotion of faces presented in the LVF, the RVF, or both visual fields simultaneously. The same familiar and unfamiliar faces posing neutral and happy expressions were presented in the two tasks. Both groups were most accurate when faces were presented in both visual fields. Across tasks, the hearing group demonstrated a bias toward the LVF. In contrast, the deaf group showed a bias toward the LVF during identity judgments that shifted marginally toward the RVF during emotion judgments. Two secondary conditions tested whether these effects generalized to angry faces and famous faces, and similar effects were observed. These results suggest that attention to facial expression, not merely the presence of emotional expression, reduces the typical LVF bias for face processing in deaf signers.
Affiliation(s)
- Susan M Letourneau
- Department of Psychology, Brandeis University, Waltham, MA, USA; Consortium for Research and Evaluation of Advanced Technologies in Education, Department of Educational Communication and Technology, New York University, New York, NY, USA; Department of Educational Psychology, The Graduate Center, City University of New York, New York, NY, USA
30
de Heering A, Aljuhanay A, Rossion B, Pascalis O. Early deafness increases the face inversion effect but does not modulate the composite face effect. Front Psychol 2012; 3:124. [PMID: 22539929] [PMCID: PMC3336184] [DOI: 10.3389/fpsyg.2012.00124]
Abstract
Early deprivation in audition can have striking effects on the development of visual processing. Here we investigated whether early deafness induces changes in holistic/configural face processing. To this end, we compared the results of a group of early deaf participants to those of a group of hearing participants in an inversion-matching task (Experiment 1) and a composite face task (Experiment 2). We hypothesized that deaf individuals would show an enhanced inversion effect and/or an increased composite face effect compared to hearing controls in the case of enhanced holistic/configural face processing. Conversely, these effects would be reduced if deaf individuals relied more on facial features than hearing controls do. We found that deaf individuals showed an increased inversion effect for faces, but not for non-face objects. They were also significantly slower than hearing controls at matching inverted faces. However, the two populations did not differ in the overall size of their composite face effect. Altogether, these results suggest that early deafness does not enhance or reduce the amount of holistic/configural processing devoted to faces but may increase the dependency on this mode of processing.
Affiliation(s)
- Adélaïde de Heering
- Face Categorization Lab, Faculté de Psychologie et des Sciences de l'Education, Université Catholique de Louvain, Louvain-la-Neuve, Belgium