1. Gainotti G. Human Recognition: The Utilization of Face, Voice, Name and Interactions-An Extended Editorial. Brain Sci 2024; 14:345. PMID: 38671996; PMCID: PMC11048321; DOI: 10.3390/brainsci14040345.
Abstract
The many stimulating contributions to this Special Issue of Brain Sciences focused on some basic issues of particular interest in current research, with emphasis on human recognition using faces, voices, and names [...].
Affiliation(s)
- Guido Gainotti
- Institute of Neurology, Università Cattolica del Sacro Cuore, Fondazione Policlinico A. Gemelli, Istituto di Ricovero e Cura a Carattere Scientifico, 00168 Rome, Italy
2. Cheng Y, Yuan X, Jiang Y. Eye pupil signals life motion perception. Atten Percept Psychophys 2024; 86:579-586. PMID: 37258891; DOI: 10.3758/s13414-023-02729-x.
Abstract
The ability to readily detect and recognize biological motion (BM) is fundamental to survival and interpersonal communication. However, perception of BM is strongly disrupted when it is shown upside down. This well-known inversion effect is thought to arise from a life motion detection mechanism highly tuned to gravity-compatible motion cues. In the current study, we assessed the inversion effect in BM perception using no-report pupillometry. We found that pupil size was significantly enlarged when observers viewed upright (gravity-compatible) BM compared with inverted (gravity-incompatible) counterparts. Importantly, this effect critically depended on the dynamic biological characteristics and extended to local foot-motion signals. These findings demonstrate that the eye pupil can signal gravity-dependent life motion perception. More importantly, given the convenience, objectivity, and noninvasiveness of pupillometry, the current study paves the way for the potential application of pupillary responses in detecting deficits of life motion perception in individuals with socio-cognitive disorders.
Affiliation(s)
- Yuhui Cheng, Xiangyong Yuan, Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
3. Nizamoglu H, Urgen BA. Neural processing of bottom-up perception of biological motion under attentional load. Vision Res 2024; 214:108328. PMID: 37926626; DOI: 10.1016/j.visres.2023.108328.
Abstract
Given its importance for survival and social interaction, biological motion (BM) perception is assumed to occur automatically. Previous behavioral results showed that task-irrelevant BM in the periphery interfered with task performance at the fovea. Under selective attention, BM perception is supported by a network of regions including the occipito-temporal (OTC), parietal, and premotor cortices. Retinotopy studies using BM stimuli showed distinct maps for its processing under and away from selective attention. Based on these findings, we investigated how bottom-up perception of BM is processed in the human brain under attentional load when BM is shown away from the focus of attention as a task-irrelevant stimulus. Participants (N = 31) underwent an fMRI study in which they performed an attentionally demanding visual detection task at the fovea while intact or scrambled point-light displays of BM were shown in the periphery. Our results showed a main effect of attentional load in fronto-parietal regions, and both univariate activity maps and multivariate pattern analysis supported attentional load modulation of the task-irrelevant peripheral stimuli. However, this effect was not specific to intact BM stimuli and generalized to motion stimuli more broadly, as evidenced by the involvement of the motion-sensitive OTC during the presence of dynamic stimuli in the periphery. These results confirm and extend previous work by showing that task-irrelevant distractors can be processed by stimulus-specific regions when enough attentional resources are available. We discuss the implications of these results for future studies.
Affiliation(s)
- Hilal Nizamoglu
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Burcu A Urgen
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Psychology, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center and National Magnetic Resonance Imaging Center, Bilkent University, Ankara, Turkey
4. Liu Y, Wang Z, Wei T, Zhou S, Yin Y, Mi Y, Liu X, Tang Y. Alterations of Audiovisual Integration in Alzheimer's Disease. Neurosci Bull 2023; 39:1859-1872. PMID: 37812301; PMCID: PMC10661680; DOI: 10.1007/s12264-023-01125-7.
Abstract
Audiovisual integration is a vital information process involved in cognition and is closely correlated with aging and Alzheimer's disease (AD). In this review, we evaluated the altered audiovisual integrative behavioral symptoms in AD. We further analyzed the bidirectional relationships between AD pathologies and audiovisual integration alterations and suggested possible mechanisms underlying these alterations in AD, including the imbalance between energy demand and supply, activity-dependent degeneration, disrupted brain networks, and cognitive resource overloading. Then, based on clinical characteristics, including electrophysiological and imaging data related to audiovisual integration, we emphasized the value of audiovisual integration alterations as potential biomarkers for the early diagnosis and progression of AD. We also highlighted that treatments targeting audiovisual integration contributed to widespread pathological improvements in AD animal models and cognitive improvements in AD patients. Moreover, investigation of audiovisual integration alterations in AD also provides new insights into sensory information processing.
Affiliation(s)
- Yufei Liu, Zhibin Wang, Tao Wei, Shaojiong Zhou, Yunsi Yin, Yingxin Mi, Xiaoduo Liu, Yi Tang
- Department of Neurology and Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, 100053, China
5. Wang H, Lian Y, Wang A, Chen E, Liu C. Face motion form at learning influences the time course of face spatial frequency processing during test. Biol Psychol 2023; 183:108691. PMID: 37748703; DOI: 10.1016/j.biopsycho.2023.108691.
Abstract
Studies that use static faces suggest that facial processing follows a coarse-to-fine sequence, i.e., holistic processing precedes featural processing, because low and high spatial frequencies (LSF, HSF) transmit holistic/global and featural/local information, respectively. Although recent studies have focused on the role of facial movement in holistic facial processing, it is unclear whether moving faces share the same processing mechanism as static ones, especially in the time course of processing. The current study used the event-related potential (ERP) technique to investigate this issue by manipulating face format (moving vs. static) at study and face spatial frequency at test. ERP results showed that P1 amplitude was larger for LSF than HSF faces for both moving and static study faces, with the effect larger for moving than for static study faces. N170 amplitude was more sensitive to HSF than LSF faces only when static study faces were used, whereas P2 amplitude was more sensitive to LSF faces regardless of study format. These results were not modulated by the race of the faces. They favor the view that, regardless of face race, moving study faces promote holistic processing during the earliest stage of face recognition. Furthermore, holistic processing is observed to be the same for both static and moving study faces at a later stage associated with more in-depth processing. It is evident that facial motion should be factored into further studies of face recognition, given the distinctions between holistic and featural processing for moving and static study faces.
Affiliation(s)
- Hailing Wang, Yujing Lian, Anqing Wang, Enguang Chen, Chengdong Liu
- School of Psychology, Shandong Normal University, Jinan 250358, China
6. Manippa V, Palmisano A, Ventura M, Rivolta D. The Neural Correlates of Developmental Prosopagnosia: Twenty-Five Years on. Brain Sci 2023; 13:1399. PMID: 37891769; PMCID: PMC10605188; DOI: 10.3390/brainsci13101399.
Abstract
Faces play a crucial role in social interactions. Developmental prosopagnosia (DP) refers to a lifelong difficulty in recognizing faces despite the absence of obvious signs of brain lesions. In recent decades, the neural substrate of this condition has been extensively investigated. While early neuroimaging studies did not reveal significant functional or structural abnormalities in the brains of individuals with DP, recent evidence identifies abnormalities at multiple levels within their face-processing networks. The current work provides an overview of the convergent and contrasting findings by examining twenty-five years of neuroimaging literature on the anatomo-functional correlates of DP. We included 55 original papers comprising 63 studies that compared the brain structure (MRI) and activity (fMRI, EEG, MEG) of healthy control participants and individuals with DP. Despite variations in methods, procedures, outcomes, sample selection, and study design, this scoping review suggests that distinct morphological, functional, and electrophysiological features characterize the DP brain, primarily within the ventral visual stream. In particular, the functional and anatomical connectivity between the Fusiform Face Area and the other face-sensitive regions appears strongly impaired. The cognitive and clinical implications, as well as the limitations of these findings, are discussed in light of the available knowledge and the challenges of studying DP.
Affiliation(s)
- Valerio Manippa
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, 70122 Bari, Italy
- Annalisa Palmisano
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, 70122 Bari, Italy
- Chair of Lifespan Developmental Neuroscience, TUD Dresden University of Technology, 01069 Dresden, Germany
- Martina Ventura
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, 70122 Bari, Italy
- The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney 2145, Australia
- Davide Rivolta
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, 70122 Bari, Italy
7. Zucchini E, Borzelli D, Casile A. Representational momentum of biological motion in full-body, point-light and single-dot displays. Sci Rep 2023; 13:10488. PMID: 37380666; DOI: 10.1038/s41598-023-36870-2.
Abstract
Observing the actions of others triggers, in our brain, an internal and automatic simulation of their unfolding in time. Here, we investigated whether the instantaneous internal representation of an observed action is modulated by the point of view under which the action is observed and by the stimulus type. To this end, we motion-captured the elliptical arm movement of a human actor and used these trajectories to animate a photorealistic avatar, a point-light stimulus, or a single dot, rendered from either an egocentric or an allocentric point of view. Crucially, the underlying physical characteristics of the movement were the same in all conditions. In a representational momentum paradigm, we then asked subjects to report the perceived last position of an observed movement at the moment the stimulus was randomly stopped. In all conditions, subjects tended to misremember the last configuration of the observed stimulus as being further forward than the veridical last shown position. This misrepresentation was, however, significantly smaller for full-body stimuli than for point-light and single-dot displays, and it was not modulated by the point of view. It was also smaller when first-person full-body stimuli were compared with a stimulus consisting of a solid shape moving with the same physical motion. We interpret these findings as evidence that full-body stimuli elicit a simulation process closer to the instantaneous veridical configuration of the observed movements, while impoverished displays (both point-light and single-dot) elicit a prediction that is further forward in time. This simulation process seems to be independent of the point of view under which the actions are observed.
Affiliation(s)
- Elena Zucchini
- Center for Translational Neurophysiology of Speech and Communication (CTNSC), Istituto Italiano di Tecnologia (IIT), Ferrara, Italy
- Daniele Borzelli
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Antonino Casile
- Center for Translational Neurophysiology of Speech and Communication (CTNSC), Istituto Italiano di Tecnologia (IIT), Ferrara, Italy.
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy.
8. Wang R, Lu X, Jiang Y. Distributed and hierarchical neural encoding of multidimensional biological motion attributes in the human brain. Cereb Cortex 2023; 33:8510-8522. PMID: 37118887; PMCID: PMC10786095; DOI: 10.1093/cercor/bhad136.
Abstract
The human visual system can efficiently extract distinct physical, biological, and social attributes (e.g. facing direction, gender, and emotional state) from biological motion (BM), but how these attributes are encoded in the brain remains largely unknown. In the current study, we used functional magnetic resonance imaging to investigate this issue while participants viewed multidimensional BM stimuli. Using multiple regression representational similarity analysis, we identified distributed brain areas related to the processing of facing direction, gender, and emotional state conveyed by BM, respectively. These brain areas are governed by a hierarchical structure in which the neural encoding of facing direction, gender, and emotional state modulates that of the others in descending order. We further revealed that a portion of the brain areas identified in the representational similarity analysis was specific to the neural encoding of each attribute and correlated with the corresponding behavioral results. These findings unravel the brain networks for encoding BM attributes in consideration of their interactions, and highlight that the processing of multidimensional BM attributes is recurrently interactive.
Affiliation(s)
- Ruidi Wang, Xiqian Lu, Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
- Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
9. Bognár A, Raman R, Taubert N, Zafirova Y, Li B, Giese M, De Gelder B, Vogels R. The contribution of dynamics to macaque body and face patch responses. Neuroimage 2023; 269:119907. PMID: 36717042; PMCID: PMC9986793; DOI: 10.1016/j.neuroimage.2023.119907.
Abstract
Previous functional imaging studies demonstrated body-selective patches in the primate visual temporal cortex by comparing activations to static bodies and static images of other categories. However, the use of static instead of dynamic displays of moving bodies may have underestimated the extent of the body patch network. Indeed, body dynamics provide information about action and emotion and may be processed in patches not activated by static images. Thus, to map with fMRI the full extent of the macaque body patch system in the visual temporal cortex, we employed dynamic displays of naturally acting monkey bodies, dynamic monkey faces, objects, and scrambled versions of these videos, all presented during fixation. We found nine body patches in the visual temporal cortex, starting posteriorly in the superior temporal sulcus (STS) and ending anteriorly in the temporal pole. Unlike for static images, body patches were consistently present in both the lower and upper banks of the STS. Overall, body patches showed higher activation for dynamic displays than for matched static images, an effect that, for identical stimulus displays, was weaker in the neighboring face patches. These data provide the groundwork for future single-unit recording studies to reveal the spatiotemporal features that the neurons of these body patches encode. These fMRI findings suggest that dynamics contribute more strongly to population responses in body than in face patches.
Affiliation(s)
- A Bognár
- Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- R Raman
- Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- N Taubert
- Department of Cognitive Neurology, University of Tuebingen, Tuebingen, Germany
- Y Zafirova
- Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- B Li
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, the Netherlands
- M Giese
- Department of Cognitive Neurology, University of Tuebingen, Tuebingen, Germany
- B De Gelder
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Computer Science, University College London, London, UK
- R Vogels
- Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
10. Familiarity Facilitates Detection of Angry Expressions. Brain Sci 2023; 13:509. PMID: 36979319; PMCID: PMC10046299; DOI: 10.3390/brainsci13030509.
Abstract
Personal familiarity facilitates rapid and optimized detection of faces. In this study, we investigated whether familiarity associated with faces can also facilitate the detection of facial expressions. Models of face processing propose that face identity and face expression detection are mediated by distinct pathways. We used a visual search paradigm to assess whether facial expressions of emotion (anger and happiness) were detected more rapidly when produced by familiar as compared to unfamiliar faces. We found that participants detected an angry expression 11% more accurately and 135 ms faster when produced by familiar as compared to unfamiliar faces, whereas happy expressions were detected with equivalent accuracy and speed for familiar and unfamiliar faces. These results suggest that detectors in the visual system dedicated to processing features of angry expressions are optimized for familiar faces.
11. Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. PMID: 36462345; DOI: 10.1016/j.plrev.2022.11.003.
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy.
12. Cortical encoding of rhythmic kinematic structures in biological motion. Neuroimage 2023; 268:119893. PMID: 36693597; DOI: 10.1016/j.neuroimage.2023.119893.
Abstract
Biological motion (BM) perception is of great survival value to human beings. The critical characteristics of BM information lie in kinematic cues containing rhythmic structures. However, how rhythmic kinematic structures of BM are dynamically represented in the brain and contribute to visual BM processing remains largely unknown. Here, we probed this issue in three experiments using electroencephalogram (EEG). We found that neural oscillations of observers entrained to the hierarchical kinematic structures of the BM sequences (i.e., step-cycle and gait-cycle for point-light walkers). Notably, only the cortical tracking of the higher-level rhythmic structure (i.e., gait-cycle) exhibited a BM processing specificity, manifested by enhanced neural responses to upright over inverted BM stimuli. This effect could be extended to different motion types and tasks, with its strength positively correlated with the perceptual sensitivity to BM stimuli at the right temporal brain region dedicated to visual BM processing. Modeling results further suggest that the neural encoding of spatiotemporally integrative kinematic cues, in particular the opponent motions of bilateral limbs, drives the selective cortical tracking of BM information. These findings underscore the existence of a cortical mechanism that encodes periodic kinematic features of body movements, which underlies the dynamic construction of visual BM perception.
13. Li B, Solanas MP, Marrazzo G, Raman R, Taubert N, Giese M, Vogels R, de Gelder B. A large-scale brain network of species-specific dynamic human body perception. Prog Neurobiol 2023; 221:102398. PMID: 36565985; DOI: 10.1016/j.pneurobio.2022.102398.
Abstract
This ultrahigh field 7 T fMRI study addressed the question of whether there exists a core network of brain areas at the service of different aspects of body perception. Participants viewed naturalistic videos of monkey and human faces, bodies, and objects along with mosaic-scrambled videos for control of low-level features. Independent component analysis (ICA) based network analysis was conducted to find body and species modulations at both the voxel and the network levels. Among the body areas, the highest species selectivity was found in the middle frontal gyrus and amygdala. Two large-scale networks were highly selective to bodies, dominated by the lateral occipital cortex and right superior temporal sulcus (STS) respectively. The right STS network showed high species selectivity, and its significant human body-induced node connectivity was focused around the extrastriate body area (EBA), STS, temporoparietal junction (TPJ), premotor cortex, and inferior frontal gyrus (IFG). The human body-specific network discovered here may serve as a brain-wide internal model of the human body serving as an entry point for a variety of processes relying on body descriptions as part of their more specific categorization, action, or expression recognition functions.
Affiliation(s)
- Baichen Li
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Marta Poyo Solanas
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Giuseppe Marrazzo
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- Rajani Raman
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Nick Taubert
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen 72076, Germany
- Martin Giese
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen 72076, Germany
- Rufin Vogels
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands; Department of Computer Science, University College London, London WC1E 6BT, UK

14
Hu Y, O'Toole AJ. First impressions: Integrating faces and bodies in personality trait perception. Cognition 2023; 231:105309. [PMID: 36347653] [DOI: 10.1016/j.cognition.2022.105309]
Abstract
Faces and bodies spontaneously elicit personality trait judgments (e.g., trustworthy, dominant, lazy). We examined how trait information from the face and body combine to form first impressions of the whole person, and whether trait judgments from the face and body are affected by seeing the whole person. Consistent with the trait-dependence hypothesis, Experiment 1 showed that the relative contribution of the face and body to whole-person perception varied with the trait judged. Agreeableness traits (e.g., warm, aggressive, sympathetic, trustworthy) were inferred primarily from the face, conscientiousness traits (e.g., dependable, careless) from the body, and extraversion traits (e.g., dominant, quiet, confident) from the whole person. A control experiment showed that both clothing and body shape contributed to whole-person judgments. In Experiment 2, we found that a face (or body) rated in the context of the whole person elicited a different rating than when it was rated in isolation. Specifically, when trait ratings differed for an isolated face and body of the same identity, the whole-person context biased in-context ratings of the faces and bodies towards the ratings of the context. These results showed that face and body trait perception interact more than previously assumed. We combine current and established findings to propose a novel framework to account for face-body integration in trait perception. This framework incorporates basic elements such as perceptual determinants, nonperceptual determinants, trait formation, and integration, as well as predictive factors such as the rater, the person rated, and the situation.
Affiliation(s)
- Ying Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China

15
Liu W, Cheng Y, Yuan X, Jiang Y. Looking more masculine among females: Spatial context modulates gender perception of face and biological motion. Br J Psychol 2023; 114:194-208. [PMID: 36302701] [DOI: 10.1111/bjop.12605]
Abstract
Perception of visual information highly depends on spatial context. For instance, perception of a low-level visual feature, such as orientation, can be shifted away from its surrounding context, exhibiting a simultaneous contrast effect. Although previous studies have demonstrated the adaptation aftereffect of gender, a high-level visual feature, it remains largely unknown whether gender perception can also be shaped by a simultaneously presented context. In the present study, we found that the gender perception of a central face or a point-light walker was repelled away from the gender of its surrounding faces or walkers. A norm-based opponent model of lateral inhibition, which accounts for the adaptation aftereffect of high-level features, also provided an excellent fit to the simultaneous contrast effect. However, unlike the reported contextual effect of low-level features, the simultaneous contrast effect of gender was not observed when the centre and the surrounding stimuli were from different categories, or when the surrounding stimuli were suppressed from awareness. These findings on the one hand reveal a resemblance between the simultaneous contrast effect and the adaptation aftereffect of high-level features, and on the other highlight the different biological mechanisms underlying the contextual effects of low- and high-level visual features.
Affiliation(s)
- Wenjie Liu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Yuhui Cheng
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Xiangyong Yuan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China

16
Dunn JD, Towler A, Kemp RI, White D. Selecting police super-recognisers. PLoS One 2023; 18:e0283682. [PMID: 37195905] [DOI: 10.1371/journal.pone.0283682]
Abstract
People vary in their ability to recognise faces. These individual differences are consistent over time, heritable, and associated with brain anatomy. This implies that face identity processing can be improved in applied settings by selecting high performers, 'super-recognisers' (SRs), but these selection processes are rarely available for scientific scrutiny. Here we report an 'end-to-end' selection process used to establish an SR 'unit' in a large police force. Australian police officers (n = 1600) completed 3 standardised face identification tests, and we recruited 38 SRs from this cohort to complete 10 follow-up tests. As a group, SRs were 20% better than controls in lab-based tests of face memory and matching, and equalled or surpassed the accuracy of forensic specialists who currently perform face identification tasks for police. Individually, SR accuracy was variable, but this problem was mitigated by adopting strict selection criteria. SRs' superior abilities transferred only partially to body identity decisions where the face was not visible, and they were no better than controls at deciding in which visual scene faces had initially been encountered. Notwithstanding these important qualifications, we conclude that super-recognisers are an effective solution for improving face identity processing in applied settings.
Affiliation(s)
- James D Dunn
- School of Psychology, UNSW Sydney, Sydney, Australia
- Alice Towler
- School of Psychology, UNSW Sydney, Sydney, Australia
- School of Psychology, University of Queensland, Brisbane, Australia
- David White
- School of Psychology, UNSW Sydney, Sydney, Australia

17
Greene L, Barker LA, Reidy J, Morton N, Atherton A. Emotion recognition and eye tracking of static and dynamic facial affect: A comparison of individuals with and without traumatic brain injury. J Clin Exp Neuropsychol 2022; 44:461-477. [PMID: 36205649] [DOI: 10.1080/13803395.2022.2128066]
Abstract
Diminished social functioning is often seen after traumatic brain injury (TBI). The mechanisms contributing to these deficits are poorly understood but are thought to relate to an impaired ability to recognize facial expressions. Static stimuli are often used to investigate this ability post-TBI, and there is less evidence using more real-life dynamic stimuli. The present study investigated the performance of a TBI group and a matched non-TBI group on static and dynamic tasks using eye-tracking technology alongside behavioral measures, and is the first to do so in emotion recognition tasks in people with brain injury. Eighteen individuals with heterogeneous TBI and 18 matched non-TBI participants were recruited. Stimuli representing six core emotions (Anger, Disgust, Fear, Happy, Sad, and Surprise faces) were selected from the Amsterdam Dynamic Facial Expression Set (ADFES). Participants were instructed to identify the emotion displayed while eye-movement metrics were recorded. Results: In the static task, TBI patients made their first fixation to the nose for all emotion stimuli, showed shorter fixation durations and lower fixation counts to the eyes, and were generally slower to classify stimuli and less accurate than the non-TBI group. Those with TBI were also less accurate than the non-TBI group at identifying Angry, Disgust, and Fear faces during the dynamic unfolding of an emotion. Conclusion: Those with TBI showed atypical eye-scan patterns during emotion identification in the static emotion recognition task compared to the non-TBI group, and these patterns were associated with lower identification accuracy on behavioral measures in both static and dynamic tasks. The findings suggest potential disruption to oculomotor systems vital for first-stage perceptual processing. Arguably, these impairments may contribute to diminished social functioning.
Affiliation(s)
- L Greene
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology & Politics, Sheffield Hallam University, Sheffield, UK
- L A Barker
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology & Politics, Sheffield Hallam University, Sheffield, UK
- J Reidy
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology & Politics, Sheffield Hallam University, Sheffield, UK
- N Morton
- Neuro Rehabilitation Outreach Team, Rotherham, Doncaster and South Humber NHS Trust, Doncaster, UK
- A Atherton
- Atherton Neuropsychological Consultancy Ltd, Yorkshire, UK

18
Steel KA, Robbins RA, Nijhuis P. Trainability of novel person recognition based on brief exposure to form and motion cues. Front Psychol 2022; 13:933723. [PMID: 36248463] [PMCID: PMC9554208] [DOI: 10.3389/fpsyg.2022.933723]
Abstract
Fast and accurate recognition of teammates is crucial in contexts as varied as fast-moving sports, the military, and law enforcement engagements; misrecognition can result in lost scoring opportunities in sport or friendly fire in combat contexts. Initial studies on teammate recognition in sport suggest that athletes are adept at this perceptual ability but still susceptible to errors. The purpose of the current proof-of-concept study was to explore the trainability of teammate recognition from very brief exposure to the whole-body form and motion of a previously unknown individual. Participants were divided into three groups: a 4-week training group who also served as the actors for the test and training footage, a 2-week training group, and a no-training group. Findings revealed significant differences between the training groups and their improvement from the pre- to post-test on Response Accuracy and Movement Time. The best performance was found in the 4-week training group, and the biggest improvement in the 2-week training group, whilst no significant improvement was made in the control group. These results suggest that training was effective, but also indicate that having initially performed the movements as actors may have led to improvements in baseline testing and ultimately the best results; thus, physical performance of skills combined with video-based training may reduce the amount of time needed to improve teammate identification.
Affiliation(s)
- Kylie Ann Steel
- School of Health Science, Western Sydney University, Penrith, NSW, Australia
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Correspondence: Kylie Ann Steel
- Rachel A. Robbins
- Research School of Psychology, College of Health and Medicine, Australian National University, Canberra, ACT, Australia
- Patti Nijhuis
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia

19
Dunn JD, Varela VPL, Nicholls VI, Papinutto M, White D, Miellet S. Face-Information Sampling in Super-Recognizers. Psychol Sci 2022; 33:1615-1630. [PMID: 36044042] [DOI: 10.1177/09567976221096320]
Abstract
Perceptual processes underlying individual differences in face-recognition ability remain poorly understood. We compared visual sampling of 37 adult super-recognizers (individuals with superior face-recognition ability) with that of 68 typical adult viewers by measuring gaze position as they learned and recognized unfamiliar faces. In both phases, participants viewed faces through "spotlight" apertures that varied in size, with face information restricted in real time around their point of fixation. We found higher accuracy in super-recognizers at all aperture sizes, showing that their superiority does not rely on global sampling of face information but is also evident when they are forced to adopt piecemeal sampling. Additionally, super-recognizers made more fixations, focused less on the eye region, and distributed their gaze more than typical viewers. These differences were most apparent when learning faces and were consistent with trends we observed across the broader ability spectrum, suggesting that they reflect factors that vary dimensionally in the broader population.
Affiliation(s)
- James D Dunn
- School of Psychology, University of New South Wales
- Victoria I Nicholls
- Faculty of Science & Technology, Bournemouth University; Department of Psychology, University of Cambridge
- David White
- School of Psychology, University of New South Wales

20
Lu X, Dai A, Guo Y, Shen M, Gao Z. Is the social chunking of agent actions in working memory resource-demanding? Cognition 2022; 229:105249. [PMID: 35961161] [DOI: 10.1016/j.cognition.2022.105249]
Abstract
Retaining social interactions in working memory (WM) for further social activities is vital for a successful social life. Researchers have noted a social chunking phenomenon in WM: WM involuntarily uses the social interaction cues embedded in individual actions and chunks them as one unit. Our study is the first to examine whether social chunking in WM is an automatic process, by asking whether social chunking of agent actions in WM is resource-demanding, a key hallmark of automaticity. We achieved this by probing whether retaining agent interactions in WM as a chunk required more attention than retaining actions without interaction. We employed a WM change-detection task with actions containing social interaction cues as memory stimuli, and required participants only to memorize the individual actions. As domain-general attention and object-based attention are suggested to play a key role in retaining chunks in WM, a secondary task was inserted in the WM maintenance phase to consume these two types of attention. We replicated the finding that social chunking in WM requires no voluntary control (Experiments 1 and 2). Critically, we found substantial evidence that social chunking in WM did not require extra domain-general attention (Experiment 1) or object-based attention (Experiment 2). These findings imply that the social chunking of agent actions in WM is not resource-demanding, supporting an automatic view of social chunking in WM.
Affiliation(s)
- Xiqian Lu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Alessandro Dai
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Yang Guo
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Zaifeng Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

21
Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022; 25:104462. [PMID: 35707718] [PMCID: PMC9189121] [DOI: 10.1016/j.isci.2022.104462]
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people’s ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents compared to robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, results suggest that human-like robotic actions may be processed differently from humans’ and objects’ behavior. These findings inform our understanding of the relevance of an object’s physical features in triggering mentalizing abilities and its relevance for human–robot interaction. Highlights: people differently ascribe mental content to human-like and non-human-like agents; a human-like shape may automatically engage mentalizing processes; human actions are interpreted faster than non-human actions.
22
Abstract
Visual representations of bodies, in addition to those of faces, contribute to the recognition of con- and heterospecifics, to action recognition, and to nonverbal communication. Despite its importance, the neural basis of the visual analysis of bodies has been less studied than that of faces. In this article, I review what is known about the neural processing of bodies, focusing on the macaque temporal visual cortex. Early single-unit recording work suggested that the temporal visual cortex contains representations of body parts and bodies, with the dorsal bank of the superior temporal sulcus representing bodily actions. Subsequent functional magnetic resonance imaging studies in both humans and monkeys showed several temporal cortical regions that are strongly activated by bodies. Single-unit recordings in the macaque body patches suggest that these represent mainly body shape features. More anterior patches show a greater viewpoint-tolerant selectivity for body features, which may reflect a processing principle shared with other object categories, including faces.
Affiliation(s)
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium

23
Simonet M, Ruggeri P, Sallard E, Barral J. The field of expertise modulates the time course of neural processes associated with inhibitory control in a sport decision-making task. Sci Rep 2022; 12:7657. [PMID: 35538089] [PMCID: PMC9090811] [DOI: 10.1038/s41598-022-11580-3]
Abstract
Inhibitory control (IC), the ability to suppress inappropriate actions, can be improved by regularly facing complex and dynamic situations requiring flexible behaviors, such as in the context of intensive sport practice. However, researchers have not clearly determined whether and how this improvement in IC transfers to ecological and nonecological computer-based tasks. We explored the spatiotemporal dynamics of changes in the brain activity of three groups of athletes performing sport-nonspecific and sport-specific Go/NoGo tasks with video footage of table tennis situations to address this question. We compared table tennis players (n = 20), basketball players (n = 20) and endurance athletes (n = 17) to identify how years of practicing a sport in an unpredictable versus predictable environment shape the IC brain networks and increase the transfer effects to untrained tasks. Overall, the table tennis group responded faster than the two other groups in both Go/NoGo tasks. The electrical neuroimaging analyses performed in the sport-specific Go/NoGo task revealed that this faster response time was supported by an early engagement of brain structures related to decision-making processes in a time window where inhibition processes typically occur. Our findings have relevant applied implications, as they highlight the importance of designing more ecological domain-related tasks to effectively capture the complex decision-making processes acquired in real-life situations. Finally, the limited transfer from sport practice to laboratory-based tasks found in this study questions the utility of cognitive training interventions, whose effects would remain specific to the practice environment.
Affiliation(s)
- Marie Simonet
- Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland
- Paolo Ruggeri
- Brain Electrophysiology Attention Movement Laboratory, Institute of Psychology, University of Lausanne, Lausanne, Switzerland
- Etienne Sallard
- Brain Electrophysiology Attention Movement Laboratory, Institute of Psychology, University of Lausanne, Lausanne, Switzerland
- Jérôme Barral
- Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland

24
Two Means Together? Effects of Response Bias and Sensitivity on Communicative Action Detection. J Nonverbal Behav 2022; 46:281-298. [PMID: 35431380] [PMCID: PMC9005026] [DOI: 10.1007/s10919-022-00398-2]
Abstract
Numerous lines of research suggest that communicative dyadic actions elicit preferential processing and more accurate detection compared to similar but individual actions. However, it is unclear whether the presence of the second agent provides additional cues that allow for more accurate discriminability between communicative and individual intentions or whether it lowers the threshold for perceiving third-party encounters as interactive. We performed a series of studies comparing the recognition of communicative actions from single and dyadic displays in healthy individuals. A decreased response threshold for communicative actions was observed for dyadic vs. single-agent animations across all three studies, providing evidence for the dyadic communicative bias. Furthermore, consistent with the facilitated recognition hypothesis, congruent response to a communicative gesture increased the ability to accurately interpret the actions. In line with dual-process theory, we propose that both mechanisms may be perceived as complementary rather than competitive and affect different stages of stimuli processing.
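The bias-versus-sensitivity distinction above is the signal detection framework: d′ indexes how well communicative and individual actions are discriminated, while the criterion c indexes the response threshold that the dyadic context appears to lower. The study's analysis code is not shown in this listing; the following is a standard textbook computation, with made-up hit and false-alarm rates purely for illustration.

```python
from statistics import NormalDist

def sdt(hit_rate, false_alarm_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates,
    which must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# hypothetical rates: dyadic displays raise hits and false alarms together,
# i.e. a more liberal criterion (lower c) with little change in d'
d_single, c_single = sdt(0.70, 0.20)   # single-agent displays
d_dyadic, c_dyadic = sdt(0.80, 0.35)   # dyadic displays
```

A drop in c with roughly constant d′ is the signature of a lowered response threshold rather than improved discriminability, which is the pattern the abstract calls a dyadic communicative bias.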
25
Cerebellar Contribution to Emotional Body Language Perception. Adv Exp Med Biol 2022; 1378:141-153. [DOI: 10.1007/978-3-030-99550-8_10]
26
Kobayashi M, Kanazawa S, Yamaguchi MK, O'Toole AJ. Cortical processing of dynamic bodies in the superior occipito-temporal regions of the infants' brain: Difference from dynamic faces and inversion effect. Neuroimage 2021; 244:118598. [PMID: 34587515] [DOI: 10.1016/j.neuroimage.2021.118598]
Abstract
Previous functional neuroimaging studies imply a crucial role of the superior temporal regions (e.g., superior temporal sulcus: STS) for the processing of dynamic faces and bodies. However, little is known about the cortical processing of moving faces and bodies in infancy. The current study used functional near-infrared spectroscopy (fNIRS) to directly compare cortical hemodynamic responses to dynamic faces (videos of approaching people with blurred bodies) and dynamic bodies (videos of approaching people with blurred faces) in the infant brain. We also examined the body-inversion effect in 5- to 8-month-old infants using hemodynamic responses as a measure. We found significant brain activity for the dynamic faces and bodies in the superior area of bilateral temporal cortices in both 5- to 6-month-old and 7- to 8-month-old infants. The hemodynamic responses to dynamic faces occurred across a broader area of cortex in 7- to 8-month-olds than in 5- to 6-month-olds, but we did not find a developmental change for dynamic bodies. There was no significant activation when the stimuli were presented upside down, indicating that these activation patterns did not result from the low-level visual properties of dynamic faces and bodies. Additionally, we found that the superior temporal regions showed a body-inversion effect in infants aged over 5 months: the upright dynamic body stimuli induced stronger activation compared to the inverted stimuli. The most important contribution of the present study is that we identified cortical areas responsive to dynamic bodies and faces in two groups of infants (5-6 months and 7-8 months of age) and found different developmental trends for the processing of bodies and faces.
Affiliation(s)
- Megumi Kobayashi
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Japan
- Alice J O'Toole
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA

27
Abstract
Deep learning models currently achieve human levels of performance on real-world face recognition tasks. We review scientific progress in understanding human face processing using computational approaches based on deep learning. This review is organized around three fundamental advances. First, deep networks trained for face identification generate a representation that retains structured information about the face (e.g., identity, demographics, appearance, social traits, expression) and the input image (e.g., viewpoint, illumination). This forces us to rethink the universe of possible solutions to the problem of inverse optics in vision. Second, deep learning models indicate that high-level visual representations of faces cannot be understood in terms of interpretable features. This has implications for understanding neural tuning and population coding in the high-level visual cortex. Third, learning in deep networks is a multistep process that forces theoretical consideration of diverse categories of learning that can overlap, accumulate over time, and interact. Diverse learning types are needed to model the development of human face processing skills, cross-race effects, and familiarity with individual faces.
Affiliation(s)
- Alice J O'Toole
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Carlos D Castillo
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA

28
Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. [PMID: 34043249] [PMCID: PMC8288083] [DOI: 10.1002/hbm.25532]
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
29
Foster C, Zhao M, Bolkart T, Black MJ, Bartels A, Bülthoff I. Separated and overlapping neural coding of face and body identity. Hum Brain Mapp 2021; 42:4242-4260. [PMID: 34032361] [PMCID: PMC8356992] [DOI: 10.1002/hbm.25544]
Abstract
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low‐level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low‐level visual input in the anterior face‐responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high‐level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
Affiliation(s)
- Celia Foster
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Mintao Zhao
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; School of Psychology, University of East Anglia, UK
- Timo Bolkart
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael J Black
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Andreas Bartels
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany; Department of Psychology, University of Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany
30
Nagle F, Johnston A. Recognising the dynamic form of fire. Sci Rep 2021; 11:10566. [PMID: 34011973] [PMCID: PMC8134437] [DOI: 10.1038/s41598-021-89453-4]
Abstract
Encoding and recognising complex natural sequences provides a challenge for human vision. We found that observers could recognise a previously presented segment of a video of a hearth fire when embedded in a longer sequence. Recognition performance declined when the test video was spatially inverted, but not when it was hue reversed or temporally reversed. Sampled motion degraded forwards/reversed playback discrimination, indicating observers were sensitive to the asymmetric pattern of motion of flames. For brief targets, performance increased with target length. More generally, performance depended on the relative lengths of the target and embedding sequence. Increased errors with embedded sequence length were driven by positive responses to non-target sequences (false alarms) rather than omissions. Taken together these observations favour interpreting performance in terms of an incremental decision-making model based on a sequential statistical analysis in which evidence accrues for one of two alternatives. We also suggest that prediction could provide a means of providing and evaluating evidence in a sequential analysis model.
Affiliation(s)
- Fintan Nagle
- CoMPLEX, University College London, London, WC1E 6BT, UK; Imperial College, Exhibition Road, London, SW7 2AZ, UK
- Alan Johnston
- CoMPLEX, University College London, London, WC1E 6BT, UK; School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK
31
Koelsch S, Cheung VKM, Jentschke S, Haynes JD. Neocortical substrates of feelings evoked with music in the ACC, insula, and somatosensory cortex. Sci Rep 2021; 11:10119. [PMID: 33980876] [PMCID: PMC8115666] [DOI: 10.1038/s41598-021-89405-y]
Abstract
Neurobiological models of emotion focus traditionally on limbic/paralimbic regions as neural substrates of emotion generation, and insular cortex (in conjunction with isocortical anterior cingulate cortex, ACC) as the neural substrate of feelings. An emerging view, however, highlights the importance of isocortical regions beyond insula and ACC for the subjective feeling of emotions. We used music to evoke feelings of joy and fear, and multivariate pattern analysis (MVPA) to decode representations of feeling states in functional magnetic resonance imaging (fMRI) data of n = 24 participants. Most of the brain regions providing information about feeling representations were neocortical regions. These included, in addition to granular insula and cingulate cortex, primary and secondary somatosensory cortex, premotor cortex, frontal operculum, and auditory cortex. The multivoxel activity patterns corresponding to feeling representations emerged within a few seconds, gained in strength with increasing stimulus duration, and replicated results of a hypothesis-generating decoding analysis from an independent experiment. Our results indicate that several neocortical regions (including insula, cingulate, somatosensory and premotor cortices) are important for the generation and modulation of feeling states. We propose that secondary somatosensory cortex, which covers the parietal operculum and encroaches on the posterior insula, is of particular importance for the encoding of emotion percepts, i.e., preverbal representations of subjective feeling.
Affiliation(s)
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Vincent K M Cheung
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute of Information Science, Academia Sinica, Taipei, Taiwan
- John-Dylan Haynes
- Berlin Center for Advanced Neuroimaging, Charité - Universitätsmedizin Berlin, Berlin, Germany
32
Colón YI, Castillo CD, O'Toole AJ. Facial expression is retained in deep networks trained for face identification. J Vis 2021; 21:4. [PMID: 33821927] [PMCID: PMC8039571] [DOI: 10.1167/jov.21.4.4]
Abstract
Facial expressions distort visual cues for identification in two-dimensional images. Face processing systems in the brain must decouple image-based information from multiple sources to operate in the social world. Deep convolutional neural networks (DCNN) trained for face identification retain identity-irrelevant, image-based information (e.g., viewpoint). We asked whether a DCNN trained for identity also retains expression information that generalizes over viewpoint change. DCNN representations were generated for a controlled dataset containing images of 70 actors posing 7 facial expressions (happy, sad, angry, surprised, fearful, disgusted, neutral), from 5 viewpoints (frontal, 90° and 45° left and right profiles). Two-dimensional visualizations of the DCNN representations revealed hierarchical groupings by identity, followed by viewpoint, and then by facial expression. Linear discriminant analysis of full-dimensional representations predicted expressions accurately, mean 76.8% correct for happiness, followed by surprise, disgust, anger, neutral, sad, and fearful at 42.0%; chance ≈14.3%. Expression classification was stable across viewpoints. Representational similarity heatmaps indicated that image similarities within identities varied more by viewpoint than by expression. We conclude that an identity-trained, deep network retains shape-deformable information about expression and viewpoint, along with identity, in a unified form—consistent with a recent hypothesis for ventral visual stream processing.
Affiliation(s)
- Y Ivette Colón
- Behavioral and Brain Sciences, The University of Texas at Dallas, TX, USA
- Carlos D Castillo
- University of Maryland Institute for Advanced Computer Studies, MD, USA
- Alice J O'Toole
- Behavioral and Brain Sciences, The University of Texas at Dallas, TX, USA
33
Weatherford DR, Roberson D, Erickson WB. When experience does not promote expertise: security professionals fail to detect low prevalence fake IDs. Cogn Res Princ Implic 2021; 6:25. [PMID: 33792842] [PMCID: PMC8017042] [DOI: 10.1186/s41235-021-00288-z]
Abstract
Professional screeners frequently verify photograph IDs in industries such as professional security, bartending, and sales of age-restricted materials. Moreover, security screening is a vital tool for law enforcement in the search for missing or wanted persons. Nevertheless, previous research demonstrates that novice participants fail to spot fake IDs when they are rare (i.e., the low prevalence effect; LPE). To address whether this phenomenon also occurs with professional screeners, we conducted three experiments. Experiment 1 compared security professionals and non-professionals. Experiment 2 compared bar-security professionals, access-security professionals, and non-professionals. Finally, Experiment 3 added a newly created Professional Identity Training Questionnaire to determine whether and how aspects of professionals' employment predict ID-matching accuracy. Across all three experiments, all participants were susceptible to the LPE regardless of professional status. Neither length/type of professional experience nor length/type of training experience affected ID verification performance. We discuss task performance and survey responses with the aim of acknowledging and addressing this potential problem in real-world screening scenarios.
Affiliation(s)
- Dawn R Weatherford
- Psychology Program, Department of Life Sciences, Texas A&M University-San Antonio, 1 University Way, San Antonio, TX, 78224, USA
- Devin Roberson
- Psychology Program, Department of Life Sciences, Texas A&M University-San Antonio, 1 University Way, San Antonio, TX, 78224, USA
- William Blake Erickson
- Psychology Program, Department of Life Sciences, Texas A&M University-San Antonio, 1 University Way, San Antonio, TX, 78224, USA
34
Johnston A, Brown BB, Elson R. Synchronous facial action binds dynamic facial features. Sci Rep 2021; 11:7191. [PMID: 33785856] [PMCID: PMC8010062] [DOI: 10.1038/s41598-021-86725-x]
Abstract
We asked how dynamic facial features are perceptually grouped. To address this question, we varied the timing of mouth movements relative to eyebrow movements, while measuring the detectability of a small temporal misalignment between a pair of oscillating eyebrows-an eyebrow wave. We found eyebrow wave detection performance was worse for synchronous movements of the eyebrows and mouth. Subsequently, we found this effect was specific to stimuli presented to the right visual field, implicating the involvement of left lateralised visual speech areas. Adaptation has been used as a tool in low-level vision to establish the presence of separable visual channels. Adaptation to moving eyebrows and mouths with various relative timings reduced eyebrow wave detection but only when the adapting mouth and eyebrows moved asynchronously. Inverting the face led to a greater reduction in detection after adaptation particularly for asynchronous facial motion at test. We conclude that synchronous motion binds dynamic facial features whereas asynchronous motion releases them, allowing adaptation to impair eyebrow wave detection.
Affiliation(s)
- Alan Johnston
- School of Psychology, University Park, The University of Nottingham, Nottingham, NG7 2RD, UK
- Ben B Brown
- School of Psychology, University Park, The University of Nottingham, Nottingham, NG7 2RD, UK
- Ryan Elson
- School of Psychology, University Park, The University of Nottingham, Nottingham, NG7 2RD, UK
35
Rubínová E, Fitzgerald RJ, Juncu S, Ribbers E, Hope L, Sauer JD. Live presentation for eyewitness identification is not superior to photo or video presentation. J Appl Res Mem Cogn 2021. [DOI: 10.1016/j.jarmac.2020.08.009]
36
FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. [PMID: 33452225] [DOI: 10.1523/jneurosci.1449-20.2020]
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, Social Traits, Gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.

SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
37
Kovács G. Getting to Know Someone: Familiarity, Person Recognition, and Identification in the Human Brain. J Cogn Neurosci 2020; 32:2205-2225. [DOI: 10.1162/jocn_a_01627] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
In our everyday life, we continuously get to know people, dominantly through their faces. Several neuroscientific experiments showed that familiarization changes the behavioral processing and underlying neural representation of faces of others. Here, we propose a model of the process of how we actually get to know someone. First, the purely visual familiarization of unfamiliar faces occurs. Second, the accumulation of associated, nonsensory information refines person representation, and finally, one reaches a stage where the effortless identification of very well-known persons occurs. We offer here an overview of neuroimaging studies, first evaluating how and in what ways the processing of unfamiliar and familiar faces differs and, second, by analyzing the fMRI adaptation and multivariate pattern analysis results we estimate where identity-specific representation is found in the brain. The available neuroimaging data suggest that different aspects of the information emerge gradually as one gets more and more familiar with a person within the same network. We propose a novel model of familiarity and identity processing, where the differential activation of long-term memory and emotion processing areas is essential for correct identification.
38
Zäske R, Skuk VG, Schweinberger SR. Attractiveness and distinctiveness between speakers' voices in naturalistic speech and their faces are uncorrelated. R Soc Open Sci 2020; 7:201244. [PMID: 33489273] [PMCID: PMC7813223] [DOI: 10.1098/rsos.201244]
Abstract
Facial attractiveness has been linked to the averageness (or typicality) of a face and, more tentatively, to a speaker's vocal attractiveness, via the 'honest signal' hypothesis, holding that attractiveness signals good genes. In four experiments, we assessed ratings for attractiveness and two common measures of distinctiveness ('distinctiveness-in-the-crowd', DITC and 'deviation-based distinctiveness', DEV) for faces and voices (simple vowels, or more naturalistic sentences) from 64 young adult speakers (32 female). Consistent and substantial negative correlations between attractiveness and DEV generally supported the averageness account of attractiveness, for both voices and faces. By contrast, and indicating that both measures of distinctiveness reflect different constructs, correlations between attractiveness and DITC were numerically positive for faces (though small and non-significant), and significant for voices in sentence stimuli. Between faces and voices, distinctiveness ratings were uncorrelated. Remarkably, and at variance with the honest signal hypothesis, vocal and facial attractiveness were also uncorrelated in all analyses involving naturalistic, i.e. sentence-based, speech. This result pattern was confirmed using a new set of stimuli and raters (experiment 5). Overall, while our findings strongly support an averageness account of attractiveness for both domains, they provide no evidence for an honest signal account of facial and vocal attractiveness in complex naturalistic speech.
Affiliation(s)
- Romi Zäske
- Department for General Psychology and Cognitive Neuroscience & DFG Research Unit Person Perception, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Verena Gabriele Skuk
- Department for General Psychology and Cognitive Neuroscience & DFG Research Unit Person Perception, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience & DFG Research Unit Person Perception, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- International Max Planck Research School (IMPRS) for the Science of Human History, Max Planck Institute for the Science of Human History, Kahlaische Strasse 10, 07745 Jena, Germany
39
Williams EH, Bilbao-Broch L, Downing PE, Cross ES. Examining the value of body gestures in social reward contexts. Neuroimage 2020; 222:117276. [PMID: 32818616] [PMCID: PMC7779365] [DOI: 10.1016/j.neuroimage.2020.117276]
Abstract
Brain regions associated with the processing of tangible rewards (such as money, food, or sex) are also involved in anticipating social rewards and avoiding social punishment. To date, studies investigating the neural underpinnings of social reward have presented feedback via static or dynamic displays of faces to participants. However, research demonstrates that participants find another type of social stimulus, namely, biological motion, rewarding as well, and exert effort to engage with this type of stimulus. Here we examine whether feedback presented via body gestures in the absence of facial cues also acts as a rewarding stimulus and recruits reward-related brain regions. To achieve this, we investigated the neural underpinnings of anticipating social reward and avoiding social disapproval presented via gestures alone, using a social incentive delay task. As predicted, the anticipation of social reward and avoidance of social disapproval engaged reward-related brain regions, including the nucleus accumbens, in a manner similar to previous studies' reports of feedback presented via faces and money. This study provides the first evidence that human body motion alone engages brain regions associated with reward processing in a similar manner to other social (i.e. faces) and non-social (i.e. money) rewards. The findings advance our understanding of social motivation in human perception and behavior.
Affiliation(s)
- Elin H Williams
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, England
- Laura Bilbao-Broch
- Korea Institute for Science and Technology, University of Science and Technology, Seoul, South Korea
- Paul E Downing
- Wales Institute for Cognitive Neuroscience, Bangor University, Bangor, Wales
- Emily S Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland; Department of Cognitive Science, Macquarie University, Sydney, Australia
40
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782] [DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
41
Abstract
The accurate perception of human crowds is integral to social understanding and interaction. Previous studies have shown that observers are sensitive to several crowd characteristics such as average facial expression, gender, identity, joint attention, and heading direction. In two experiments, we examined ensemble perception of crowd speed using standard point-light walkers (PLW). Participants were asked to estimate the average speed of a crowd consisting of 12 figures moving at different speeds. In Experiment 1, trials of intact PLWs alternated with trials of scrambled PLWs with a viewing duration of 3 seconds. We found that ensemble processing of crowd speed could rely on local motion alone, although a globally intact configuration enhanced performance. In Experiment 2, observers estimated the average speed of intact-PLW crowds that were displayed at reduced viewing durations across five blocks of trials (between 2500 ms and 500 ms). Estimation of fast crowds was precise and accurate regardless of viewing duration, and we estimated that three to four walkers could still be integrated at 500 ms. For slow crowds, we found a systematic deterioration in performance as viewing time reduced, and performance at 500 ms could not be distinguished from a single-walker response strategy. Overall, our results suggest that rapid and accurate ensemble perception of crowd speed is possible, although sensitive to the precise speed range examined.
42
Schweinberger SR, Dobel C. Why twos in human visual perception? A possible role of prediction from dynamic synchronization in interaction. Cortex 2020; 135:355-357. [PMID: 33234236] [DOI: 10.1016/j.cortex.2020.09.015]
Affiliation(s)
- Stefan R Schweinberger
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Switzerland. http://www.allgpsy.uni-jena.de
- Christian Dobel
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Friedrich Schiller University of Jena, Germany
43
Processing communicative facial and vocal cues in the superior temporal sulcus. Neuroimage 2020; 221:117191. [PMID: 32711066] [DOI: 10.1016/j.neuroimage.2020.117191]
Abstract
Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.
Collapse
|
44
|
Independent contributions of the face, body, and gait to the representation of the whole person. Atten Percept Psychophys 2020; 83:199-214. [PMID: 33083987 DOI: 10.3758/s13414-020-02110-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Most studies of person perception have investigated static images of faces. However, real-life person perception also involves the body and often the gait of the whole person. Whereas some studies indicated that the face dominates the representation of the whole person, others have emphasized the additional contribution of the body and gait. Here, we compared models of whole-person perception by asking whether a model that includes the body for static whole-person stimuli and also the gait for dynamic whole-person stimuli accounts better for the representation of the whole person than a model that takes into account the face alone. Participants rated the distinctiveness of static or dynamic displays of different people based on either the whole person, face, body, or gait. By fitting a linear regression model to the representation of the whole person based on the face, body, and gait, we revealed that the face and body contribute uniquely and independently to the representation of the static whole person, and that gait further contributes to the representation of the dynamic person. A complementary analysis examined whether these components are also valid dimensions of a whole-person representational space. This analysis further confirmed that the body in addition to the face, as well as the gait, are valid dimensions of the static and dynamic whole-person representations, respectively. These data clearly show that whole-person perception goes beyond the face and is significantly influenced by the body and gait.
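The regression analysis described in this abstract can be sketched as follows. This is an illustrative reconstruction on synthetic ratings, not the authors' code or data: the predictor names and the weights 0.5/0.3/0.2 are invented, and ordinary least squares is solved here from first principles for self-containment.

```python
# Hypothetical sketch: fit whole-person distinctiveness ratings as a
# linear combination of face, body, and gait ratings, as in the study.
import random


def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X: list of rows; y: targets."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta


# Synthetic ratings: whole = 0.5*face + 0.3*body + 0.2*gait (invented weights).
random.seed(0)
rows, whole = [], []
for _ in range(100):
    face, body, gait = (random.random() for _ in range(3))
    rows.append([face, body, gait])
    whole.append(0.5 * face + 0.3 * body + 0.2 * gait)

beta = fit_ols(rows, whole)
print([round(b, 2) for b in beta])  # recovers [0.5, 0.3, 0.2] on noiseless data
```

The fitted coefficients quantify each component's independent contribution to the whole-person representation; on real, noisy ratings one would also inspect confidence intervals rather than point estimates alone.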
Collapse
|
45
|
Vandewouw MM, Choi EJ, Hammill C, Lerch JP, Anagnostou E, Taylor MJ. Changing Faces: Dynamic Emotional Face Processing in Autism Spectrum Disorder Across Childhood and Adulthood. BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2020; 6:825-836. [PMID: 33279458 DOI: 10.1016/j.bpsc.2020.09.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Revised: 08/17/2020] [Accepted: 09/04/2020] [Indexed: 11/19/2022]
Abstract
BACKGROUND: Autism spectrum disorder (ASD) is classically associated with poor emotional face processing. Few studies, however, have used more ecologically valid, dynamic stimuli. We contrasted functional magnetic resonance imaging (fMRI) measures of dynamic emotional face processing in ASD and typically developing (TD) cohorts across a wide age range to determine whether processing and age-related trajectories differed between participants with and without ASD.
METHODS: fMRI data collected from 200 participants (5-42 years old; 107 in the ASD cohort, 93 in the TD cohort) during the presentation of dynamic emotional faces (neutral-to-happy, neutral-to-angry) and dynamic flowers (closed-to-open) were analyzed. Group differences and group-by-age interactions in the faces-versus-flowers and between-emotion contrasts were investigated.
RESULTS: Differences in activation between dynamic faces and flowers in occipital regions, including the fusiform gyri, were reduced in the ASD group. Contrasting the two emotions, ASD compared with TD participants showed increased engagement of the precentral, postcentral, and superior temporal gyri to happy faces and increased occipital activation to angry faces. Emotion processing regions, such as the insula, temporal pole, and frontal regions, showed increased recruitment with age to happy faces compared with both angry faces and flowers in the TD group, but decreased recruitment with age in the ASD group.
CONCLUSIONS: Using dynamic stimuli, we demonstrated that participants with ASD processed faces similarly to nonface stimuli, and that age-related atypicalities were more pronounced to happy faces in participants with ASD. We demonstrated emotion-specific atypicalities in a large group of participants with ASD that underscore persistent difficulties from childhood into mid-adulthood.
Collapse
Affiliation(s)
- Marlee M Vandewouw
- Department of Diagnostic Imaging, Hospital for Sick Children, Toronto, Ontario, Canada; Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada; Autism Research Center, Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada.
| | - Eun Jung Choi
- Department of Diagnostic Imaging, Hospital for Sick Children, Toronto, Ontario, Canada; Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada; Autism Research Center, Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada
| | - Christopher Hammill
- Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
| | - Jason P Lerch
- Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada; Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Evdokia Anagnostou
- Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada; Autism Research Center, Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
| | - Margot J Taylor
- Department of Diagnostic Imaging, Hospital for Sick Children, Toronto, Ontario, Canada; Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
46
|
Simhi N, Yovel G. Dissociating gait from static appearance: A virtual reality study of the role of dynamic identity signatures in person recognition. Cognition 2020; 205:104445. [PMID: 32920344 DOI: 10.1016/j.cognition.2020.104445] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2019] [Revised: 08/16/2020] [Accepted: 08/20/2020] [Indexed: 11/18/2022]
Abstract
Studies on person recognition have primarily examined recognition of static faces, presented on a computer screen at a close distance. Nevertheless, in naturalistic situations we typically see the whole dynamic person, often approaching from a distance. In such cases, facial information may be less clear, and the motion pattern of an individual, their dynamic identity signature (DIS), may be used for person recognition. Studies that examined the role of motion in person recognition presented videos of people in motion. However, such stimuli do not allow gait to be dissociated from face and body form, as different identities differ both in their gait and in their static appearance. To examine the contribution of gait to person recognition, independently of static appearance, we used a virtual environment and presented, across participants, the same face and body form with different gaits. The virtual environment also enabled us to assess the distance at which a person is recognized as a continuous variable. Using this setting, we assessed the accuracy and the distance at which identities were recognized based on their gait, as a function of gait distinctiveness. We found that the accuracy and the distance at which people were recognized increased with gait distinctiveness. Importantly, these effects were found when recognizing identities in motion but not from static displays, indicating that the DIS, rather than attention, enabled more accurate person recognition. Overall, these findings highlight that gait contributes to person recognition beyond the face and body, and stress an important role for gait in real-life person recognition.
Collapse
Affiliation(s)
- Noa Simhi
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel.
| | - Galit Yovel
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel; The Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv 69978, Israel.
| |
Collapse
|
47
|
Sliwinska MW, Bearpark C, Corkhill J, McPhillips A, Pitcher D. Dissociable pathways for moving and static face perception begin in early visual cortex: Evidence from an acquired prosopagnosic. Cortex 2020; 130:327-339. [DOI: 10.1016/j.cortex.2020.03.033] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 02/14/2020] [Accepted: 03/13/2020] [Indexed: 11/25/2022]
|
48
|
Tummon HM, Allen J, Bindemann M. Body Language Influences on Facial Identification at Passport Control: An Exploration in Virtual Reality. Iperception 2020; 11:2041669520958033. [PMID: 33149876 PMCID: PMC7580167 DOI: 10.1177/2041669520958033] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 08/18/2020] [Indexed: 11/16/2022] Open
Abstract
Person identification at airports requires the matching of a passport photograph to its bearer. One aim of this process is to find identity impostors, who use valid identity documents of similar-looking people to avoid detection. In psychology, this process has been studied extensively with static pairs of face photographs that require identity match (same person shown) versus mismatch (two different people) decisions. However, this approach provides a limited proxy for studying how other factors, such as nonverbal behaviour, affect this task. The current study investigated the influence of body language on facial identity matching within a virtual reality airport environment, by manipulating activity levels of person avatars queueing at passport control. In a series of six experiments, detection of identity mismatches was unaffected when observers were not instructed to utilise body language. By contrast, under explicit instruction to look out for unusual body language, these cues enhanced detection of mismatches but also increased false classification of matches. This effect was driven by increased activity levels rather than body language that simply differed from the behaviour of the majority of passengers. The implications and limitations of these findings are discussed.
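The pattern this abstract reports under explicit instruction, more mismatch detections but also more matches falsely classified as mismatches, is the classic signature of a criterion shift rather than improved sensitivity. Signal detection theory separates the two; the sketch below is illustrative only, with invented hit and false-alarm rates that are not the study's data.

```python
# Hedged sketch (hypothetical numbers, not from the paper): compute
# sensitivity (d') and response criterion (c) from hit and false-alarm
# rates, treating a mismatch detection as a "hit".
from statistics import NormalDist


def dprime_criterion(hit_rate, fa_rate):
    """Return (d', c) for the given hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c


# Baseline vs. "watch for unusual body language" instruction (invented).
base = dprime_criterion(0.70, 0.20)   # fewer hits, fewer false alarms
instr = dprime_criterion(0.80, 0.35)  # more hits AND more false alarms
print(f"baseline d'={base[0]:.2f}, c={base[1]:.2f}")
print(f"instructed d'={instr[0]:.2f}, c={instr[1]:.2f}")
# With these numbers, d' barely changes while c drops: observers became
# more liberal in calling "mismatch", not better at telling faces apart.
```

On real data one would compare d' and c across instruction conditions to decide whether body-language cues genuinely added identity information or merely shifted the decision threshold.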
Collapse
Affiliation(s)
- Hannah M. Tummon
- School of Psychology, University of Kent, Canterbury, United Kingdom
| | - John Allen
- School of Psychology, University of Kent, Canterbury, United Kingdom
| | - Markus Bindemann
- School of Psychology, University of Kent, Canterbury, United Kingdom
| |
Collapse
|
49
|
Maylott SE, Paukner A, Ahn YA, Simpson EA. Human and monkey infant attention to dynamic social and nonsocial stimuli. Dev Psychobiol 2020; 62:841-857. [PMID: 32424813 PMCID: PMC7944642 DOI: 10.1002/dev.21979] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Revised: 03/23/2020] [Accepted: 03/31/2020] [Indexed: 12/14/2022]
Abstract
The present study explored behavioral norms for infant social attention in typically developing human and nonhuman primate infants. We examined the normative development of attention to dynamic social and nonsocial stimuli longitudinally in macaques (Macaca mulatta) at 1, 3, and 5 months of age (N = 75) and in humans at 2, 4, 6, 8, and 13 months of age (N = 69) using eye tracking. All infants viewed two concurrently played silent videos: one social and one nonsocial. Both macaque and human infants were faster to look to the social than to the nonsocial stimulus, and both species became faster at orienting to the social stimulus with age. Further, macaque infants' social attention increased linearly from 1 to 5 months. In contrast, human infants displayed a nonlinear pattern of social interest, with initially greater attention to the social stimulus, followed by a period of greater interest in the nonsocial stimulus, and then a rise in social interest from 6 to 13 months. Overall, human infants looked longer than macaque infants, suggesting that humans have more sustained attention in the first year of life. These findings highlight potential species similarities and differences, and represent a first step in establishing baseline patterns of early social attention development.
Collapse
Affiliation(s)
- Sarah E. Maylott
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
| | - Annika Paukner
- Department of Psychology, Nottingham Trent University, Nottingham, England
| | - Yeojin A. Ahn
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
| | | |
Collapse
|
50
|
Simhi N, Yovel G. Can we recognize people based on their body-alone? The roles of body motion and whole person context. Vision Res 2020; 176:91-99. [PMID: 32827880 DOI: 10.1016/j.visres.2020.07.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 07/22/2020] [Accepted: 07/29/2020] [Indexed: 11/25/2022]
Abstract
While most studies on person recognition examine the face alone, recent studies have shown evidence for the contribution of the body and gait to person recognition beyond the face. Nevertheless, little is known about whether person recognition can be performed based on the body alone. In this study, we examined two sources of information that may enhance body-based person recognition: body motion and whole-person context. Body motion has been shown to contribute to person recognition, especially when facial information is unclear. Additionally, generating whole-person context by attaching faceless heads to bodies has been shown to activate face processing mechanisms and may therefore enhance body-based person recognition. To assess body-based person recognition, participants performed a sequential matching task in which they studied a video of a person walking, followed by a headless image of the same or a different identity. The role of body motion was examined by comparing recognition from dynamic vs. static headless bodies. The role of whole-person context was examined by comparing bodies with and without faceless heads. Our findings show that person recognition from the body alone was better in dynamic than in static displays, indicating that body motion contributed to body-based person recognition. In addition, whole-person context contributed to body-based person recognition when recognition was performed in static displays. Overall, these findings show that recognizing people based on the body alone is challenging but can be performed under certain circumstances that enhance the processing of the body when seeing the whole dynamic person.
Collapse
Affiliation(s)
- Noa Simhi
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel.
| | - Galit Yovel
- The School of Psychological Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel; The Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv 69978, Israel.
| |
Collapse
|