1. Richoz AR, Stacchi L, Schaller P, Lao J, Papinutto M, Ticcinelli V, Caldara R. Recognizing facial expressions of emotion amid noise: A dynamic advantage. J Vis 2024; 24:7. PMID: 38197738. PMCID: PMC10790674. DOI: 10.1167/jov.24.1.7.
Abstract
Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults benefit only minimally from the richer dynamic over static information, whereas children, the elderly, and clinical populations benefit very strongly (Richoz, Jack, Garrod, Schyns, & Caldara, 2015, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this aim, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signals were presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, all dynamic expressions were better recognized than their static counterparts (peaking at ∼20%). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding all facial expressions of emotion for all human observers.
Affiliation(s)
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Lisa Stacchi: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Pauline Schaller: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Michael Papinutto: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Valentina Ticcinelli: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
2. Schaller P, Caldara R, Richoz AR. Prosopagnosia does not abolish other-race effects. Neuropsychologia 2023; 180:108479. PMID: 36623806. DOI: 10.1016/j.neuropsychologia.2023.108479.
Abstract
Healthy observers recognize same-race faces more accurately than other-race faces (the Same-Race Recognition Advantage, SRRA) but categorize them by race more slowly than other-race faces (the Other-Race Categorization Advantage, ORCA). Several fMRI studies have reported discrepant bilateral activations in the Fusiform Face Area (FFA) and Occipital Face Area (OFA) correlating with both effects. However, given the very nature and limits of fMRI results, whether these face-sensitive regions play an unequivocal causal role in these other-race effects remains to be clarified. To this aim, we tested PS, a well-studied pure case of acquired prosopagnosia with lesions encompassing the left FFA and the right OFA. PS, healthy age-matched controls, and young adults performed two recognition and three categorization-by-race tasks, using Western Caucasian and East Asian faces normalized for their low-level properties, presented with and without external features, as well as in naturalistic settings. As expected, PS was slower and less accurate than the controls. Crucially, however, the magnitudes of her SRRA and ORCA were comparable to those of the controls in all the tasks. Our data show that prosopagnosia does not abolish other-race effects: an intact face system, including the left FFA and/or right OFA, is not critical for eliciting the SRRA and ORCA. Race is a strong visual and social signal that is encoded in a large neural face-sensitive network, robustly tuned for processing same-race faces.
Affiliation(s)
- Pauline Schaller: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
3. Schaller P, Richoz AR, Caldara R. Prosopagnosia does not abolish other-race effects. J Vis 2022. DOI: 10.1167/jov.22.14.3478.
4. Saumure C, Richoz AR, Fiset D, Blais C, Caldara R. Pain decoding without mental representations of the eyes. J Vis 2022. DOI: 10.1167/jov.22.14.3952.
5. Rodger H, Lao J, Stoll C, Richoz AR, Pascalis O, Dye M, Caldara R. The recognition of facial expressions of emotion in deaf and hearing individuals. Heliyon 2021; 7:e07018. PMID: 34041389. PMCID: PMC8141778. DOI: 10.1016/j.heliyon.2021.e07018.
Abstract
During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether or not a dynamic advantage for deaf observers exists. We hypothesised, in line with the enhancement hypothesis, that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognise the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). In contrast to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions for deaf observers. This observation shows that the decoding of dynamic facial expression emotional signals is not superior even in the deaf expert visual system, suggesting the existence of optimal signals in static facial expressions of emotion at the apex.
Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
Affiliation(s)
- Helen Rodger: Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao: Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Chloé Stoll: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes, France
- Olivier Pascalis: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes, France
- Matthew Dye: National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, New York, USA
- Roberto Caldara: Department of Psychology, University of Fribourg, Fribourg, Switzerland
6. Stoll C, Rodger H, Lao J, Richoz AR, Pascalis O, Dye M, Caldara R. Quantifying Facial Expression Intensity and Signal Use in Deaf Signers. J Deaf Stud Deaf Educ 2019; 24:346-355. PMID: 31271428. DOI: 10.1093/deafed/enz023.
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
Affiliation(s)
- Chloé Stoll: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes; Laboratory for Investigative Neurophysiology, Centre Hospitalier Universitaire Vaudois and University of Lausanne
- Helen Rodger: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Olivier Pascalis: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Matthew Dye: National Technical Institute for the Deaf, Rochester Institute of Technology
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
7. Luisier AC, Petitpierre G, Bérod AC, Richoz AR, Lao J, Caldara R, Bensafi M. Visual and Hedonic Perception of Food Stimuli in Children with Autism Spectrum Disorders and their Relationship to Food Neophobia. Perception 2019; 48:197-213. PMID: 30758252. DOI: 10.1177/0301006619828300.
Abstract
The present study examined whether children with autism spectrum disorder (ASD) and typically developing (TD) children differ in the visual perception of food stimuli at both the sensorimotor and affective levels. A potential link between visual perception and food neophobia was also investigated. To these aims, 11 children with ASD and 11 TD children were tested. Visual pictures of food were used, and food neophobia was assessed by the parents. Results revealed that children with ASD visually explored food stimuli longer than TD children did. Complementary analyses revealed that whereas TD children explored multiple-item dishes more (vs. simple-item dishes), children with ASD explored all the dishes in a similar way. In addition, children with ASD gave more negative appreciations in general. Moreover, hedonic ratings were negatively correlated with food neophobia scores in children with ASD, but not in TD children. In sum, we show here that children with ASD have more difficulty than TD children in liking a food when it is presented visually. Our findings also suggest that a prominent factor that needs to be considered is time management during the food choice process. They also provide new ways of measuring and understanding food neophobia in children with ASD.
Affiliation(s)
- Anne-Claude Luisier: Research Center in Neurosciences of Lyon, Claude Bernard University Lyon 1, France; Institute of Special Education, University of Fribourg, Switzerland; Brocoli Factory, Sion, Switzerland
- Junpeng Lao: Department of Psychology, University of Fribourg, Switzerland
- Roberto Caldara: Department of Psychology, University of Fribourg, Switzerland
- Moustafa Bensafi: Research Center in Neurosciences of Lyon, Claude Bernard University Lyon 1, France
8. Richoz AR, Lao J, Pascalis O, Caldara R. Tracking the recognition of static and dynamic facial expressions of emotion across the life span. J Vis 2018; 18:5. PMID: 30208425. DOI: 10.1167/18.9.5.
Abstract
The effective transmission and decoding of dynamic facial expressions of emotion is omnipresent and critical for adapted social interactions in everyday life. Thus, common intuition would suggest an advantage for dynamic facial expression recognition (FER) over the static snapshots routinely used in most experiments. However, although many studies have reported an advantage in the recognition of dynamic over static expressions in clinical populations, results obtained from healthy participants are mixed. To clarify this issue, we conducted a large cross-sectional study investigating FER across the life span, in order to determine whether age is a critical factor accounting for such discrepancies. More than 400 observers (age range 5-96) performed recognition tasks of the six basic expressions in static, dynamic, and shuffled (temporally randomized frames) conditions, normalized for the amount of energy sampled over time. We applied a Bayesian hierarchical step-linear model to capture the nonlinear relationship between age and FER in the different viewing conditions. While replicating the typical accuracy profiles of FER, we determined the age at which peak efficiency was reached for each expression and found greater accuracy for most dynamic expressions across the life span. This advantage in the elderly population was driven by a significant decrease in performance for static images, which was twice as large as that of the young adults. Our data posit the use of dynamic stimuli as critical in the assessment of FER in the elderly population, inviting caution when drawing conclusions from the sole use of static face images to this aim.
Affiliation(s)
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland; LPNC, University of Grenoble Alpes, Grenoble, France
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
9. Fiset D, Blais C, Royer J, Richoz AR, Dugas G, Caldara R. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia. Soc Cogn Affect Neurosci 2018; 12:1334-1341. PMID: 28459990. PMCID: PMC5597863. DOI: 10.1093/scan/nsx068.
Abstract
Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images.
Affiliation(s)
- Daniel Fiset: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada; Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
- Caroline Blais: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada; Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
- Jessica Royer: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada; Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Gabrielle Dugas: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada; Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
10. Turano MT, Lao J, Richoz AR, de Lissa P, Degosciu SBA, Viggiano MP, Caldara R. Corrigendum to: Fear boosts the early neural coding of faces. Soc Cogn Affect Neurosci 2017; 12:1993. PMID: 29182719. PMCID: PMC5716158. DOI: 10.1093/scan/nsx136.
Affiliation(s)
- Maria Teresa Turano: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland; Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Peter de Lissa: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Sarah B A Degosciu: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Maria Pia Viggiano: Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
11. Turano MT, Lao J, Richoz AR, de Lissa P, Degosciu SBA, Viggiano MP, Caldara R. Fear boosts the early neural coding of faces. Soc Cogn Affect Neurosci 2017; 12:1959-1971. PMID: 29040780. PMCID: PMC5716185. DOI: 10.1093/scan/nsx110.
Abstract
The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials to which participants had to react, varying identity (identity-task), expression (expression-task), or both (dual-task) on the target face. We extracted single-trial Repetition Suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
Affiliation(s)
- Maria Teresa Turano: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland; Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Peter de Lissa: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Sarah B A Degosciu: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Maria Pia Viggiano: Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
12. Richoz AR, Lao J, Pascalis O, Caldara R. Tracking the recognition of static and dynamic facial expressions of emotion across life span. J Vis 2017. DOI: 10.1167/17.10.1108.
Affiliation(s)
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Olivier Pascalis: Laboratoire de Psychologie et Neurocognition (CNRS), Université Grenoble-Alpes, Grenoble, France
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
13. Richoz AR, Quinn PC, Hillairet de Boisferon A, Berger C, Loevenbruck H, Lewkowicz DJ, Lee K, Dole M, Caldara R, Pascalis O. Audio-Visual Perception of Gender by Infants Emerges Earlier for Adult-Directed Speech. PLoS One 2017; 12:e0169325. PMID: 28060872. PMCID: PMC5218491. DOI: 10.1371/journal.pone.0169325.
Abstract
Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.
Affiliation(s)
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland; LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Paul C. Quinn: Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, United States of America
- Anne Hillairet de Boisferon: LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Carole Berger: LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Hélène Loevenbruck: LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- David J. Lewkowicz: Department of Communication Sciences & Disorders, Northeastern University, Boston, Massachusetts, United States of America
- Kang Lee: Institute of Child Study, University of Toronto, Toronto, Ontario, Canada
- Marjorie Dole: LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Olivier Pascalis: LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
14. Lao J, Richoz AR, Stoll C, Pascalis O, Dye M, Caldara R. Mapping the recognition of facial expression of emotions in deafness. J Vis 2016. DOI: 10.1167/16.12.1391.