1. Jakobsen KV, Hickman CM, Simpson EA. A happy face advantage for pareidolic faces in children and adults. J Exp Child Psychol 2025;251:106127. PMID: 39603157. DOI: 10.1016/j.jecp.2024.106127.
Abstract
Pareidolic faces (illusory faces in objects) offer a unique context for studying biases in the development of facial processing because they are visually diverse (e.g., color, shape) while lacking key elements of real faces (e.g., race, species). In an online study, 7- and 8-year-old children (n = 32) and adults (n = 32) categorized happy and angry expressions in human and pareidolic face images. We found that children have a robust, adult-like happy face advantage for human and pareidolic faces, reflected in speed and accuracy. These results suggest that the happy face advantage is not unique to human faces, supporting the hypothesis that humans employ comparable face templates for processing pareidolic and human faces. Our findings add to a growing list of other processing similarities between human and pareidolic faces and suggest that children may likewise show these similarities.
Affiliation(s)
- Krisztina V Jakobsen
- Department of Psychology, James Madison University, Harrisonburg, VA 22807, USA
- Cate M Hickman
- Department of Psychology, James Madison University, Harrisonburg, VA 22807, USA
2. Koyano KW, Taubert J, Robison W, Waidmann EN, Leopold DA. Face pareidolia minimally engages macaque face selective neurons. Prog Neurobiol 2025;245:102709. PMID: 39755201. PMCID: PMC11781954. DOI: 10.1016/j.pneurobio.2024.102709.
Abstract
The macaque cerebral cortex contains concentrations of neurons that prefer faces over inanimate objects. Although these so-called face patches are thought to be specialized for the analysis of facial signals, their exact tuning properties remain unclear. For example, what happens when an object by chance resembles a face? Everyday objects can sometimes, through the accidental positioning of their internal components, appear as faces. This phenomenon is known as face pareidolia. Behavioral experiments have suggested that macaques, like humans, perceive illusory faces in such objects. However, it is an open question whether such stimuli would naturally stimulate neurons residing in cortical face patches. To address this question, we recorded single unit activity from four fMRI-defined face-selective regions: the anterior medial (AM), anterior fundus (AF), prefrontal orbital (PO), and perirhinal cortex (PRh) face patches. We compared neural responses elicited by images of real macaque faces, pareidolia-evoking objects, and matched control objects. Contrary to expectations, we found no evidence of a general preference for pareidolia-evoking objects over control objects. Although a subset of neurons exhibited stronger responses to pareidolia-evoking objects, the population responses to both categories of objects were similar, and collectively much less than to real macaque faces. These results suggest that neural responses in the four regions we tested are principally concerned with the analysis of realistic facial characteristics, whereas the special attention afforded to face-like pareidolia stimuli is supported by activity elsewhere in the brain.
Affiliation(s)
- Kenji W Koyano
- Section on Cognitive Neurophysiology and Imaging, Systems Neurodevelopment Laboratory, National Institute of Mental Health, Bethesda, MD, USA
- Jessica Taubert
- Section on Neurocircuitry, National Institute of Mental Health, Bethesda, MD, USA; School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
- William Robison
- Section on Cognitive Neurophysiology and Imaging, Systems Neurodevelopment Laboratory, National Institute of Mental Health, Bethesda, MD, USA
- Elena N Waidmann
- Section on Cognitive Neurophysiology and Imaging, Systems Neurodevelopment Laboratory, National Institute of Mental Health, Bethesda, MD, USA
- David A Leopold
- Section on Cognitive Neurophysiology and Imaging, Systems Neurodevelopment Laboratory, National Institute of Mental Health, Bethesda, MD, USA; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, Bethesda, MD, USA
3. Tomonaga M. I've just seen a face: further search for face pareidolia in chimpanzees (Pan troglodytes). Front Psychol 2025;15:1508867. PMID: 39936109. PMCID: PMC11810910. DOI: 10.3389/fpsyg.2024.1508867.
Abstract
Introduction: Seeing faces in random patterns, such as in clouds, is known as pareidolia. Two possible mechanisms can cause pareidolia: a bottom-up mechanism that automatically detects inverted-triangle or top-heavy patterns, and a top-down mechanism that actively seeks out faces. Pareidolia has been reported in nonhuman animals as well. In chimpanzees, it has been suggested that the bottom-up mechanism is involved in their pareidolic perception, but the extent of the contribution of the top-down mechanism remains unclear. This study investigated the role of top-down control in face detection in chimpanzees. Methods: After being trained on an oddity task in which they had to select a noise pattern where a face (either human or chimpanzee) or a letter (Kanji characters) was superimposed among three patterns, the chimpanzees were tested with noise patterns that did not contain any target stimuli. Results: When the average images of the patterns selected by the chimpanzees in these test trials were analyzed and compared with those that were not selected (i.e., difference images), a clear non-random structure was found in the difference images. In contrast, such structures were not evident in the difference images obtained by assuming that one of the three patterns was randomly selected. Discussion: These results suggest that chimpanzees may have been attempting to find "faces" or "letters" in random patterns, possibly through some form of top-down processing.
Affiliation(s)
- Masaki Tomonaga
- School of Psychological Sciences, University of Human Environments, Matsuyama, Ehime, Japan
4. Liu J, Chen H, Wang H, Wang Z. Neural correlates of facial recognition deficits in autism spectrum disorder: a comprehensive review. Front Psychiatry 2025;15:1464142. PMID: 39834575. PMCID: PMC11743606. DOI: 10.3389/fpsyt.2024.1464142.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by significant impairments in social interaction, often manifested in facial recognition deficits. These deficits hinder individuals with ASD from recognizing facial identities and interpreting emotions, further complicating social communication. This review explores the neural mechanisms underlying these deficits, focusing on both functional anomalies and anatomical differences in key brain regions such as the fusiform gyrus (FG), amygdala, superior temporal sulcus (STS), and prefrontal cortex (PFC). Reduced activation in the FG and atypical activation of the amygdala and STS have been found to contribute to difficulties in processing facial cues, while increased reliance on the PFC for facial recognition tasks imposes an additional cognitive load. Additionally, disrupted functional and structural connectivity between these regions further exacerbates facial recognition challenges. Future research should emphasize longitudinal, multimodal neuroimaging approaches to better understand developmental trajectories and design personalized interventions, leveraging AI and machine learning to optimize therapeutic outcomes for individuals with ASD.
Affiliation(s)
- Jianmei Liu
- School of Public Policy and Management, China University of Mining and Technology, Xuzhou, China
- School of Education Science, Jiangsu Normal University, Xuzhou, China
- Huihui Chen
- School of Education Science, Jiangsu Normal University, Xuzhou, China
- Haijing Wang
- School of Education Science, Jiangsu Normal University, Xuzhou, China
- Zhidan Wang
- School of Education Science, Jiangsu Normal University, Xuzhou, China
5. Lenc T, Lenoir C, Keller PE, Polak R, Mulders D, Nozaradan S. Measuring self-similarity in empirical signals to understand musical beat perception. Eur J Neurosci 2025;61:e16637. PMID: 39853878. PMCID: PMC11760665. DOI: 10.1111/ejn.16637.
Abstract
Experiencing music often entails the perception of a periodic beat. Despite being a widespread phenomenon across cultures, the nature and neural underpinnings of beat perception remain largely unknown. In the last decade, there has been a growing interest in developing methods to probe these processes, particularly to measure the extent to which beat-related information is contained in behavioral and neural responses. Here, we propose a theoretical framework and practical implementation of an analytic approach to capture beat-related periodicity in empirical signals using frequency-tagging. We highlight its sensitivity in measuring the extent to which the periodicity of a perceived beat is represented in a range of continuous time-varying signals with minimal assumptions. We also discuss a limitation of this approach with respect to its specificity when restricted to measuring beat-related periodicity only from the magnitude spectrum of a signal and introduce a novel extension of the approach based on autocorrelation to overcome this issue. We test the new autocorrelation-based method using simulated signals and by re-analyzing previously published data and show how it can be used to process measurements of brain activity as captured with surface EEG in adults and infants in response to rhythmic inputs. Taken together, the theoretical framework and related methodological advances confirm and elaborate the frequency-tagging approach as a promising window into the processes underlying beat perception and, more generally, temporally coordinated behaviors.
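To make the frequency-tagging logic above concrete, here is a minimal sketch (not the authors' implementation; the frequency tolerance, number of harmonics, and normalization are illustrative assumptions) that quantifies beat-related periodicity in a signal from its magnitude spectrum, alongside a simple autocorrelation-based counterpart evaluated at beat-related lags.

```python
import numpy as np

def beat_periodicity_spectrum(signal, fs, beat_freq, n_harmonics=6, tol=0.05):
    """Share of spectral magnitude falling at beat-related frequencies (harmonics of beat_freq)."""
    mags = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    beat_related = np.zeros(freqs.size, dtype=bool)
    for h in range(1, n_harmonics + 1):
        beat_related |= np.abs(freqs - h * beat_freq) < tol
    return mags[beat_related].sum() / mags[1:].sum()  # exclude the DC component

def beat_periodicity_autocorr(signal, fs, beat_freq, n_lags=4):
    """Mean normalized autocorrelation at lags corresponding to multiples of the beat period."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]  # lag-0 autocorrelation normalized to 1
    period = int(round(fs / beat_freq))
    lags = [k * period for k in range(1, n_lags + 1) if k * period < ac.size]
    return float(np.mean(ac[lags]))

# Toy usage: a noisy 2 Hz periodic signal sampled at 100 Hz for 30 s.
fs, beat_freq = 100.0, 2.0
t = np.arange(0, 30, 1 / fs)
sig = np.sin(2 * np.pi * beat_freq * t) + 0.5 * np.random.randn(t.size)
print(beat_periodicity_spectrum(sig, fs, beat_freq))
print(beat_periodicity_autocorr(sig, fs, beat_freq))
```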
Affiliation(s)
- Tomas Lenc
- Institute of Neuroscience (IONS), UCLouvain, Brussels, Belgium
- Basque Center on Cognition, Brain and Language (BCBL), Donostia-San Sebastian, Spain
- Cédric Lenoir
- Institute of Neuroscience (IONS), UCLouvain, Brussels, Belgium
- Peter E. Keller
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Center for Music in the Brain & Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Rainer Polak
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Dounia Mulders
- Institute of Neuroscience (IONS), UCLouvain, Brussels, Belgium
- Computational and Biological Learning Unit, Department of Engineering, University of Cambridge, Cambridge, UK
- Institute for Information and Communication Technologies, Electronics and Applied Mathematics, UCLouvain, Louvain-la-Neuve, Belgium
- Department of Brain and Cognitive Sciences and McGovern Institute, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, USA
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), UCLouvain, Brussels, Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
6. Gupta P, Dobs K. Human-like face pareidolia emerges in deep neural networks optimized for face and object recognition. PLoS Comput Biol 2025;21:e1012751. PMID: 39869654. PMCID: PMC11790231. DOI: 10.1371/journal.pcbi.1012751.
Abstract
The human visual system possesses a remarkable ability to detect and process faces across diverse contexts, including the phenomenon of face pareidolia: seeing faces in inanimate objects. Despite extensive research, it remains unclear why the visual system employs such broadly tuned face detection capabilities. We hypothesized that face pareidolia results from the visual system's optimization for recognizing both faces and objects. To test this hypothesis, we used task-optimized deep convolutional neural networks (CNNs) and evaluated their alignment with human behavioral signatures and neural responses, measured via magnetoencephalography (MEG), related to pareidolia processing. Specifically, we trained CNNs on tasks involving combinations of face identification, face detection, object categorization, and object detection. Using representational similarity analysis, we found that CNNs that included object categorization in their training tasks represented pareidolia faces, real faces, and matched objects more similarly to neural responses than those that did not. Although these CNNs showed similar overall alignment with neural data, a closer examination of their internal representations revealed that specific training tasks had distinct effects on how pareidolia faces were represented across layers. Finally, interpretability methods revealed that only a CNN trained for both face identification and object categorization relied on face-like features, such as 'eyes', to classify pareidolia stimuli as faces, mirroring findings in human perception. Our results suggest that human-like face pareidolia may emerge from the visual system's optimization for face identification within the context of generalized object categorization.
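As a minimal illustration of the representational similarity analysis (RSA) step mentioned above, the sketch below builds a representational dissimilarity matrix (RDM) over a set of stimuli from model activations and from neural response patterns, then compares the two with a rank correlation. The correlation-distance metric, Spearman comparison, and random toy data are assumptions for illustration only, not the study's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """responses: (n_stimuli, n_features) array -> condensed correlation-distance RDM."""
    return pdist(responses, metric="correlation")

def rsa_score(rdm_a, rdm_b):
    """Second-order similarity: Spearman correlation between two condensed RDMs."""
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

# Toy usage: random data standing in for CNN-layer activations and MEG patterns
# to the same 60 stimuli (e.g., pareidolia faces, real faces, matched objects).
rng = np.random.default_rng(0)
cnn_layer = rng.standard_normal((60, 512))    # hypothetical layer activations
meg_patterns = rng.standard_normal((60, 64))  # hypothetical sensor patterns
print(rsa_score(rdm(cnn_layer), rdm(meg_patterns)))
```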
Affiliation(s)
- Pranjul Gupta
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katharina Dobs
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain, and Behavior, Universities of Marburg, Giessen and Darmstadt, Marburg, Germany
7. Duyck S, Costantino AI, Bracci S, Op de Beeck H. A computational deep learning investigation of animacy perception in the human brain. Commun Biol 2024;7:1718. PMID: 39741161. DOI: 10.1038/s42003-024-07415-8.
Abstract
The functional organization of the human object vision pathway distinguishes between animate and inanimate objects. To understand animacy perception, we explore the case of zoomorphic objects resembling animals. While the perception of these objects as animal-like seems obvious to humans, this "Animal bias" marks a striking discrepancy between the human brain and deep neural networks (DNNs). We computationally investigated the potential origins of this bias. We successfully induced this bias in DNNs trained explicitly with zoomorphic objects. Alternative training schedules failed to cause an Animal bias. We considered the superordinate distinction between animate and inanimate classes, the sensitivity for faces and bodies, the bias for shape over texture, the role of ecologically valid categories, recurrent connections, and language-informed visual processing. These findings provide computational support that the Animal bias for zoomorphic objects is a unique property of human perception yet can be explained by human learning history.
Affiliation(s)
- Stefanie Duyck
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Andrea I Costantino
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Stefania Bracci
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Hans Op de Beeck
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
8. Sarovic D, Schneiderman J, Lundström S, Riaz B, Orekhova E, Khan S, Gillberg C. Differential late-stage face processing in autism: a magnetoencephalographic study of fusiform gyrus activation. BMC Psychiatry 2024;24:900. PMID: 39695511. DOI: 10.1186/s12888-024-06400-z.
Abstract
BACKGROUND: Autism is associated with alterations of social communication, such as during face-to-face interactions. This study aimed to probe face processing in autistics with normal IQ utilizing magnetoencephalography to examine event-related fields within the fusiform gyrus during face perception. METHODS: A case-control cohort of 22 individuals diagnosed with autism and 20 age-matched controls (all male, age 29.3 ± 6.9 years) underwent magnetoencephalographic scanning during an active task while observing neutral faces, face-like pareidolic objects, and non-face objects. The fusiform face area was identified using a face localizer for each participant, and the cortical activation pattern was normalized onto an average brain for subsequent analysis. RESULTS: Early post-stimulus activation amplitudes (before 100-200 ms) indicated differentiation between stimuli containing fundamental facial features and non-face objects in both groups. In contrast, later activation (400-550 ms) differentiated real faces from both pareidolic and non-face objects across both groups, and faces from objects in controls but not in autistics. There was no effect of autistic-like traits. CONCLUSIONS: The absence of group differences in early activation suggests intact face detection in autistics possessing a normal IQ. Later activation captures a greater degree of the complexity and social information from actual faces. Although both groups distinguished faces from pareidolic and non-face objects, the control group exhibited a slightly heightened differentiation at this latency, indicating a potential disadvantage for autistics in real face processing. The subtle difference in late-stage face processing observed in autistic individuals may reflect specific cognitive mechanisms related to face perception in autism.
Affiliation(s)
- Darko Sarovic
- Gillberg Neuropsychiatry Centre, Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Bruna Straket 11B, Gothenburg, 413 45, Sweden
- Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Justin Schneiderman
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Sebastian Lundström
- Gillberg Neuropsychiatry Centre, Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Bushra Riaz
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Elena Orekhova
- Gillberg Neuropsychiatry Centre, Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Christopher Gillberg
- Gillberg Neuropsychiatry Centre, Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
9. Conti D, Bechi Gabrielli G, Panigutti M, Zazzaro G, Bruno G, Galati G, D'Antonio F. Neuroanatomical and clinical correlates of prodromal dementia with Lewy bodies: a systematic literature review of neuroimaging findings. J Neurol 2024;272:38. PMID: 39666108. DOI: 10.1007/s00415-024-12726-1.
Abstract
Prodromal Dementia with Lewy bodies (pro-DLB) has recently been defined; however, the neuroanatomical and functional correlates of this stage have not yet been unequivocally established. This study aimed to systematically review neuroimaging findings focused on pro-DLB. A literature search of works employing MRI, PET, and SPECT was performed. Forty records were included: 15 studies assessed gray matter (GM) and white matter (WM) integrity, and 31 investigated metabolism, perfusion, and resting-state connectivity. Results showed that, in pro-DLB, frontal lobe areas were characterized by decreased function, cortical atrophy, and WM damage. Volumetric reductions were found in the insula, which also showed heightened metabolism. A pattern of hypofunction and structural damage was observed in the lateral and ventral temporal lobe; in contrast, the parahippocampal cortex and hippocampus exhibited greater function. Hypofunction marked parietal and occipital regions, with additional atrophy in the medial occipital lobe and posterior parietal cortex. Subcortically, atrophy and microstructural damage in the nucleus basalis of Meynert were reported, and dopamine transporter uptake was reduced in the basal ganglia. Overall, structural and functional damage was already present in pro-DLB and was consistent with the possible clinical onset. Frontal and parieto-occipital alterations may be associated with deficits in attention and executive functions and in visuo-perceptual/visuo-spatial abilities, respectively. Degeneration of cholinergic and dopaminergic transmission appeared substantial at this disease stage. This review provided an updated and more precise depiction of the brain alterations that are specific to pro-DLB and valuable to its differentiation from physiological aging and other dementias.
Affiliation(s)
- Desirée Conti
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Brain Imaging Laboratory, Department of Psychology, Sapienza University of Rome, Rome, Italy
- Massimiliano Panigutti
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Giulia Zazzaro
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Giuseppe Bruno
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Gaspare Galati
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Brain Imaging Laboratory, Department of Psychology, Sapienza University of Rome, Rome, Italy
- Fabrizia D'Antonio
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
10. Xu W, Lyu B, Ru X, Li D, Gu W, Ma X, Zheng F, Li T, Liao P, Cheng H, Yang R, Song J, Jin Z, Li C, He K, Gao JH. Decoding the Temporal Structures and Interactions of Multiple Face Dimensions Using Optically Pumped Magnetometer Magnetoencephalography (OPM-MEG). J Neurosci 2024;44:e2237232024. PMID: 39358044. PMCID: PMC11580774. DOI: 10.1523/jneurosci.2237-23.2024.
Abstract
Humans possess a remarkable ability to rapidly access diverse information from others' faces with just a brief glance, which is crucial for intricate social interactions. While previous studies using event-related potentials/fields have explored various face dimensions during this process, the interplay between these dimensions remains unclear. Here, by applying multivariate decoding analysis to neural signals recorded with optically pumped magnetometer magnetoencephalography, we systematically investigated the temporal interactions between invariant and variable aspects of face stimuli, including race, gender, age, and expression. First, our analysis revealed unique temporal structures for each face dimension with high test-retest reliability. Notably, expression and race exhibited a dominant and stably maintained temporal structure according to temporal generalization analysis. Further exploration into the mutual interactions among face dimensions uncovered age effects on gender and race, as well as expression effects on race, during the early stage (∼200-300 ms postface presentation). Additionally, we observed a relatively late effect of race on gender representation, peaking ∼350 ms after the stimulus onset. Taken together, our findings provide novel insights into the neural dynamics underlying the multidimensional aspects of face perception and illuminate the promising future of utilizing OPM-MEG for exploring higher-level human cognition.
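The multivariate decoding and temporal generalization analysis described above can be illustrated with a short sketch: a classifier trained on sensor patterns at one time point is tested at every time point, producing a train-time by test-time accuracy matrix whose diagonal is the ordinary time-resolved decoding curve. The classifier, train/test split, and array shapes are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def temporal_generalization(X_train, y_train, X_test, y_test):
    """X_*: (n_trials, n_sensors, n_times) arrays; y_*: (n_trials,) binary labels."""
    n_times = X_train.shape[2]
    scores = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X_train[:, :, t_train], y_train)
        for t_test in range(n_times):
            scores[t_train, t_test] = clf.score(X_test[:, :, t_test], y_test)
    return scores  # diagonal = standard time-resolved decoding accuracy

# Toy usage: 100 trials, 32 sensors, 20 time points, decoding a binary face dimension.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 32, 20))
y = rng.integers(0, 2, 100)
tg = temporal_generalization(X[:80], y[:80], X[80:], y[80:])
print(tg.shape)  # (20, 20)
```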
Affiliation(s)
- Wei Xu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Xingyu Ru
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Dongxu Li
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Wenyu Gu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Xiao Ma
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Fufu Zheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Tingyue Li
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Pan Liao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Hao Cheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Rui Yang
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Jingqi Song
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Zeyu Jin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Kaiyan He
- Changping Laboratory, Beijing 102206, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Changping Laboratory, Beijing 102206, China
- Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, Peking University, Beijing 100871, China
11. Sharma S, Vinken K, Jagadeesh AV, Livingstone MS. Face cells encode object parts more than facial configuration of illusory faces. Nat Commun 2024;15:9879. PMID: 39543127. PMCID: PMC11564726. DOI: 10.1038/s41467-024-54323-w.
Abstract
Humans perceive illusory faces in everyday objects with a face-like configuration, an illusion known as face pareidolia. Face-selective regions in humans and monkeys, believed to underlie face perception, have been shown to respond to face pareidolia images. Here, we investigated whether pareidolia selectivity in macaque inferotemporal cortex is explained by the face-like configuration that drives the human perception of illusory faces. We found that face cells responded selectively to pareidolia images. This selectivity did not correlate with human faceness ratings and did not require the face-like configuration. Instead, it was driven primarily by the "eye" parts of the illusory face, which are simply object parts when viewed in isolation. In contrast, human perceptual pareidolia relied primarily on the global configuration and could not be explained by "eye" parts. Our results indicate that face-cells encode local, generic features of illusory faces, in misalignment with human visual perception, which requires holistic information.
Affiliation(s)
- Saloni Sharma
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Kasper Vinken
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
12. Rakhimzhanova T, Kuzdeuov A, Varol HA. AnyFace++: Deep Multi-Task, Multi-Domain Learning for Efficient Face AI. Sensors (Basel) 2024;24:5993. PMID: 39338738. PMCID: PMC11436022. DOI: 10.3390/s24185993.
Abstract
Accurate face detection and subsequent localization of facial landmarks are mandatory steps in many computer vision applications, such as emotion recognition, age estimation, and gender identification. Thanks to advancements in deep learning, numerous facial applications have been developed for human faces. However, most have to employ multiple models to accomplish several tasks simultaneously. As a result, they require more memory usage and increased inference time. Also, less attention is paid to other domains, such as animals and cartoon characters. To address these challenges, we propose an input-agnostic face model, AnyFace++, to perform multiple face-related tasks concurrently. The tasks are face detection and prediction of facial landmarks for human, animal, and cartoon faces, including age estimation, gender classification, and emotion recognition for human faces. We trained the model using deep multi-task, multi-domain learning with a heterogeneous cost function. The experimental results demonstrate that AnyFace++ generates outcomes comparable to cutting-edge models designed for specific domains.
Affiliation(s)
- Huseyin Atakan Varol
- Institute of Smart Systems and Artificial Intelligence, Nazarbayev University, Astana 010000, Kazakhstan; (T.R.); (A.K.)
13. Lin Y, Hsu YY, Cheng T, Hsiung PC, Wu CW, Hsieh PJ. Neural representations of perspectival shapes and attentional effects: Evidence from fMRI and MEG. Cortex 2024;176:129-143. PMID: 38781910. DOI: 10.1016/j.cortex.2024.04.003.
Abstract
Does the human brain represent perspectival shapes, i.e., viewpoint-dependent object shapes, especially in relatively higher-level visual areas such as the lateral occipital cortex? What is the temporal profile of the appearance and disappearance of neural representations of perspectival shapes? And how does attention influence these neural representations? To answer these questions, we employed functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and multivariate decoding techniques to investigate spatiotemporal neural representations of perspectival shapes. Participants viewed rotated objects along with the corresponding objective shapes and perspectival shapes (i.e., rotated round, round, and oval) while we measured their brain activities. Our results revealed that shape classifiers trained on the basic shapes (i.e., round and oval) consistently identified neural representations in the lateral occipital cortex corresponding to the perspectival shapes of the viewed objects regardless of attentional manipulations. Additionally, this classification tendency toward the perspectival shapes emerged approximately 200 ms after stimulus presentation. Moreover, attention influenced the spatial dimension as the regions showing the perspectival shape classification tendency propagated from the occipital lobe to the temporal lobe. As for the temporal dimension, attention led to a more robust and enduring classification tendency towards perspectival shapes. In summary, our study outlines a spatiotemporal neural profile for perspectival shapes that suggests a greater degree of perspectival representation than is often acknowledged.
Affiliation(s)
- Yi Lin
- Taiwan International Graduate Program in Interdisciplinary Neuroscience, National Cheng Kung University and Academia Sinica, Nankan, Taipei, Taiwan; Research Unit Brain and Cognition, KU Leuven, Leuven, Belgium
- Yung-Yi Hsu
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Tony Cheng
- Waseda Institute for Advanced Study, Waseda University, Tokyo, Japan
- Pin-Cheng Hsiung
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Chen-Wei Wu
- Department of Philosophy, Georgia State University, Atlanta, GA, USA
- Po-Jang Hsieh
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
14. Taubert J, Wardle SG, Patterson A, Baker CI. Beyond faces: the contribution of the amygdala to visual processing in the macaque brain. Cereb Cortex 2024;34:bhae245. PMID: 38864574. PMCID: PMC11485272. DOI: 10.1093/cercor/bhae245.
Abstract
The amygdala is present in a diverse range of vertebrate species, such as lizards, rodents, and primates; however, its structure and connectivity differs across species. The increased connections to visual sensory areas in primate species suggests that understanding the visual selectivity of the amygdala in detail is critical to revealing the principles underlying its function in primate cognition. Therefore, we designed a high-resolution, contrast-agent enhanced, event-related fMRI experiment, and scanned 3 adult rhesus macaques, while they viewed 96 naturalistic stimuli. Half of these stimuli were social (defined by the presence of a conspecific), the other half were nonsocial. We also nested manipulations of emotional valence (positive, neutral, and negative) and visual category (faces, nonfaces, animate, and inanimate) within the stimulus set. The results reveal widespread effects of emotional valence, with the amygdala responding more on average to inanimate objects and animals than faces, bodies, or social agents in this experimental context. These findings suggest that the amygdala makes a contribution to primate vision that goes beyond an auxiliary role in face or social perception. Furthermore, the results highlight the importance of stimulus selection and experimental design when probing the function of the amygdala and other visually responsive brain regions.
Affiliation(s)
- Jessica Taubert
- Laboratory of Brain and Cognition, National Institute of Mental Health, 10 Center Dr, Bethesda, MD 20892, USA
- School of Psychology, Level 3, McElwain Building (24A), The University of Queensland, Brisbane, QLD 4072, Australia
- Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, 10 Center Dr, Bethesda, MD 20892, USA
- Amanda Patterson
- Laboratory of Brain and Cognition, National Institute of Mental Health, 10 Center Dr, Bethesda, MD 20892, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, 10 Center Dr, Bethesda, MD 20892, USA
15. Quek GL, de Heering A. Visual periodicity reveals distinct attentional signatures for face and non-face categories. Cereb Cortex 2024;34:bhae228. PMID: 38879816. PMCID: PMC11180377. DOI: 10.1093/cercor/bhae228.
Abstract
Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories, for example, when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: Whereas attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
Affiliation(s)
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Westmead Innovation Quarter, 160 Hawkesbury Rd, Westmead NSW 2145, Australia
- Adélaïde de Heering
- Unité de Recherche en Neurosciences Cognitives (UNESCOG), ULB Neuroscience Institute (UNI), Center for Research in Cognition & Neurosciences (CRCN), Université libre de Bruxelles (ULB), Avenue Franklin Roosevelt, 50-CP191, 1050 Brussels, Belgium
16. Koenig-Robert R, Quek GL, Grootswagers T, Varlet M. Movement trajectories as a window into the dynamics of emerging neural representations. Sci Rep 2024;14:11499. PMID: 38769313. PMCID: PMC11106280. DOI: 10.1038/s41598-024-62135-7.
Abstract
The rapid transformation of sensory inputs into meaningful neural representations is critical to adaptive human behaviour. While non-invasive neuroimaging methods are the de facto standard for investigating neural representations, they remain expensive, not widely available, time-consuming, and restrictive. Here we show that movement trajectories can be used to measure emerging neural representations with fine temporal resolution. By combining online computer mouse-tracking and publicly available neuroimaging data via representational similarity analysis (RSA), we show that movement trajectories track the unfolding of stimulus- and category-wise neural representations along key dimensions of the human visual system. We demonstrate that time-resolved representational structures derived from movement trajectories overlap with those derived from M/EEG (albeit delayed) and those derived from fMRI in functionally-relevant brain areas. Our findings highlight the richness of movement trajectories and the power of the RSA framework to reveal and compare their information content, opening new avenues to better understand human perception.
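A minimal sketch of the trajectory-based RSA idea described above (the distance metric, correlation choice, and toy data are illustrative assumptions, not the authors' pipeline): at each time point, a representational dissimilarity matrix (RDM) is computed from the cursor positions for all stimuli and correlated with a reference RDM, for example one derived from M/EEG or fMRI, yielding a time course of representational overlap.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def trajectory_rsa_timecourse(trajectories, reference_rdm):
    """trajectories: (n_stimuli, n_timepoints, 2) mean cursor x,y per stimulus;
    reference_rdm: condensed RDM over the same stimuli (e.g., from neuroimaging)."""
    n_times = trajectories.shape[1]
    overlap = np.zeros(n_times)
    for t in range(n_times):
        traj_rdm = pdist(trajectories[:, t, :], metric="euclidean")
        overlap[t], _ = spearmanr(traj_rdm, reference_rdm)
    return overlap

# Toy usage: 40 stimuli, 150 time samples of cursor position, random reference RDM.
rng = np.random.default_rng(2)
traj = rng.standard_normal((40, 150, 2)).cumsum(axis=1)  # random-walk trajectories
ref_rdm = pdist(rng.standard_normal((40, 10)), metric="correlation")
print(trajectory_rsa_timecourse(traj, ref_rdm)[:5])
```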
Affiliation(s)
- Roger Koenig-Robert
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Psychology, University of New South Wales, Sydney, NSW, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW, 2751, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Psychology, Western Sydney University, Sydney, NSW, 2751, Australia
17. Saurels BW, Peluso N, Taubert J. A behavioral advantage for the face pareidolia illusion in peripheral vision. Sci Rep 2024;14:10040. PMID: 38693189. PMCID: PMC11063176. DOI: 10.1038/s41598-024-60892-z.
Abstract
Investigation of visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, could be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when the objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, the participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, it is likely to be a consequence of erroneous face detection.
Affiliation(s)
- Blake W Saurels
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
- Natalie Peluso
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
- Jessica Taubert
- School of Psychology, The University of Queensland, St Lucia, Queensland, Australia
18. Romagnano V, Kubon J, Sokolov AN, Fallgatter AJ, Braun C, Pavlova MA. Dynamic brain communication underwriting face pareidolia. Proc Natl Acad Sci U S A 2024;121:e2401196121. PMID: 38588422. PMCID: PMC11032489. DOI: 10.1073/pnas.2401196121.
Abstract
Face pareidolia is a tendency to see faces in nonface images that reflects high tuning to a face scheme. Yet, studies of the brain networks underwriting face pareidolia are scarce. Here, we examined the time course and dynamic topography of gamma oscillatory neuromagnetic activity while administering a task with nonface images resembling a face. Images were presented either with canonical orientation or with display inversion, which heavily impedes face pareidolia. At early processing stages, the peaks in gamma activity (40 to 45 Hz) to images either triggering or not triggering face pareidolia originate mainly from the right medioventral and lateral occipital cortices, rostral and caudal cuneus gyri, and medial superior occipital gyrus. Yet, the difference occurred at later processing stages in the high-frequency range of 80 to 85 Hz over a set of the areas constituting the social brain. The findings thus argue for a relatively late neural network playing a key role in face pareidolia. Strikingly, a cutting-edge analysis of brain connectivity unfolding over time reveals mutual feedforward and feedback intra- and interhemispheric communication not only within the social brain but also within the extended large-scale network of down- and upstream regions. In particular, the superior temporal sulcus and insula strongly engage in communication with other brain regions either as signal transmitters or recipients throughout the whole processing of face-pareidolia images.
Affiliation(s)
- Valentina Romagnano
- Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Julian Kubon
- Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Alexander N. Sokolov
- Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Andreas J. Fallgatter
- Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Christoph Braun
- Magnetoencephalography Center, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Hertie Institute for Clinical Brain Research, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
- Marina A. Pavlova
- Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
19. Collyer L, Ireland J, Susilo T. A limited visual search advantage for illusory faces. Atten Percept Psychophys 2024;86:717-730. PMID: 38228847. DOI: 10.3758/s13414-023-02833-y.
Abstract
The human visual system is very sensitive to the presence of faces in the environment, so much so that it can produce the perception of illusory faces in everyday objects. Growing research suggests that illusory faces and real faces are processed by similar perceptual and neural mechanisms, but whether this similarity extends to visual attention is less clear. A visual search study showed that illusory faces have a search advantage over objects when the types of objects vary to match the objects in the illusory faces (e.g., chair, pepper, clock) (Keys et al., 2021). Here, we examine whether the search advantage for illusory faces over objects remains when compared against objects that belong to a single category (flowers). In three experiments, we compared visual search of illusory faces, real faces, variable objects, and uniform objects (flowers). Search for real faces was best compared with all other types of targets. In contrast, search for illusory faces was only better than search for variable objects, not uniform objects. This result shows a limited visual search advantage for illusory faces and suggests that illusory faces may not be processed like real faces in visual attention.
Affiliation(s)
- Lizzie Collyer
- School of Psychology, Victoria University of Wellington, Kelburn, New Zealand
- Jake Ireland
- School of Psychology, Victoria University of Wellington, Kelburn, New Zealand
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Kelburn, New Zealand
20. Camenzind M, Göbel N, Eberhard-Moscicka A, Knobel S, Hegi H, Single M, Kaufmann B, Schumacher R, Nyffeler T, Nef T, Müri R. The phenomenology of pareidolia in healthy subjects and patients with left- or right-hemispheric stroke. Heliyon 2024;10:e27414. PMID: 38468958. PMCID: PMC10926141. DOI: 10.1016/j.heliyon.2024.e27414.
Abstract
Pareidolia are perceptions of recognizable images or meaningful patterns where none exist. In recent years, this phenomenon has been increasingly studied in healthy subjects and patients with neurological or psychiatric diseases. The current study examined pareidolia production in a group of 53 stroke patients and 82 neurologically healthy controls who performed a natural images task. We found a significant reduction of absolute pareidolia production in left- and right-hemispheric stroke patients, with right-hemispheric patients producing the fewest pareidolia overall. Responses were categorized into 28 distinct categories, with 'Animal', 'Human', 'Face', and 'Body parts' being the most common, accounting for 72% of all pareidolia. Regarding the percentages of the different categories of pareidolia, we found a significant reduction for the percentage of "Body parts" pareidolia in the left-hemispheric patient group as compared to the control group, while the percentage of this pareidolia type was not significantly reduced in right-hemispheric patients compared to healthy controls. These results support the hypothesis that pareidolia production may be influenced by local-global visual processing, with the left hemisphere being involved in local and detailed analytical visual processing to a greater extent. As such, a lesion to the right hemisphere, which is believed to be critical for global visual processing, might explain the overall lower pareidolia production in the right-hemispheric patients.
Affiliation(s)
- M. Camenzind
- Perception and Eye Movement Laboratory, Departments of Neurology and BioMedical Research, Inselspital, Bern University Hospital and University of Bern, Switzerland
- N. Göbel
- Perception and Eye Movement Laboratory, Departments of Neurology and BioMedical Research, Inselspital, Bern University Hospital and University of Bern, Switzerland
- Research and Analysis Services, University Hospital Basel and University of Basel, Basel, Switzerland
- A.K. Eberhard-Moscicka
- Perception and Eye Movement Laboratory, Departments of Neurology and BioMedical Research, Inselspital, Bern University Hospital and University of Bern, Switzerland
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Department of Psychology, University of Bern, Bern, Switzerland
- S.E.J. Knobel
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- H. Hegi
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- M. Single
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- B.C. Kaufmann
- Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland
- R. Schumacher
- Perception and Eye Movement Laboratory, Departments of Neurology and BioMedical Research, Inselspital, Bern University Hospital and University of Bern, Switzerland
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- T. Nyffeler
- Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland
| | - T. Nef
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| | - R.M. Müri
- Perception and Eye Movement Laboratory, Departments of Neurology and BioMedical Research, Inselspital, Bern University Hospital and University of Bern, Switzerland
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| |
Collapse
|
21
|
Rossion B, Jacques C, Jonas J. The anterior fusiform gyrus: The ghost in the cortical face machine. Neurosci Biobehav Rev 2024; 158:105535. [PMID: 38191080 DOI: 10.1016/j.neubiorev.2024.105535] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 12/19/2023] [Accepted: 01/03/2024] [Indexed: 01/10/2024]
Abstract
Face-selective regions in the human ventral occipito-temporal cortex (VOTC) have been defined for decades mainly with functional magnetic resonance imaging. This face-selective VOTC network is traditionally divided into a posterior 'core' system thought to subtend face perception, and regions of the anterior temporal lobe as a semantic memory component of an extended general system. In between these two putative systems lie the anterior fusiform gyrus and surrounding sulci, which are affected by magnetic susceptibility artifacts. Here we suggest that this methodological gap overlaps with and contributes to a conceptual gap between (visual) perception and semantic memory for faces. Filling this gap with intracerebral recordings and direct electrical stimulation reveals robust face-selectivity in the anterior fusiform gyrus and a crucial role of this region, especially in the right hemisphere, in identity recognition for both familiar and unfamiliar faces. Based on these observations, we propose an integrated theoretical framework for human face (identity) recognition according to which face-selective regions in the anterior fusiform gyrus join the dots between posterior and anterior cortical face memories.
Collapse
Affiliation(s)
- Bruno Rossion
- Université de Lorraine, CNRS, IMoPA, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France.
| | | | - Jacques Jonas
- Université de Lorraine, CNRS, IMoPA, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
| |
Collapse
|
22
|
Laamerad P, Awada A, Pack CC, Bakhtiari S. Asymmetric stimulus representations bias visual perceptual learning. J Vis 2024; 24:10. [PMID: 38285454 PMCID: PMC10829801 DOI: 10.1167/jov.24.1.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Accepted: 12/12/2023] [Indexed: 01/30/2024] Open
Abstract
The primate visual cortex contains various regions that exhibit specialization for different stimulus properties, such as motion, shape, and color. Within each region, there is often further specialization, such that particular stimulus features, such as horizontal and vertical orientations, are over-represented. These asymmetries are associated with well-known perceptual biases, but little is known about how they influence visual learning. Most theories would predict that learning is optimal, in the sense that it is unaffected by these asymmetries. However, other approaches to learning would result in specific patterns of perceptual biases. To distinguish between these possibilities, we trained human observers to discriminate between expanding and contracting motion patterns, which have a highly asymmetrical representation in the visual cortex. Observers exhibited biased percepts of these stimuli, and these biases were affected by training in ways that were often suboptimal. We simulated different neural network models and found that a learning rule that involved only adjustments to decision criteria, rather than connection weights, could account for our data. These results suggest that cortical asymmetries influence visual perception and that human observers often rely on suboptimal strategies for learning.
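A minimal sketch of the kind of criterion-only learning rule the modelling points to (a toy signal-detection learner with assumed means and learning rate, not the authors' network models):

```python
# Minimal sketch (assumptions, not the authors' model): a toy signal-detection
# learner that improves discrimination of two motion patterns only by shifting
# its decision criterion, leaving the "sensory" encoding fixed.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000
# Asymmetric internal responses: one pattern is assumed to be over-represented,
# giving it a larger mean internal signal (toy values).
mu = {"expanding": 1.2, "contracting": -0.8}
sigma = 1.0

criterion = 0.0   # the only parameter that learns
lr = 0.02

for _ in range(n_trials):
    stim = "expanding" if rng.random() < 0.5 else "contracting"
    evidence = rng.normal(mu[stim], sigma)                 # fixed sensory encoding
    choice = "expanding" if evidence > criterion else "contracting"
    if choice != stim:
        # Criterion-only learning rule: nudge the boundary to reduce this error type.
        criterion += lr * (1 if stim == "contracting" else -1)

print(f"learned criterion: {criterion:.2f} (optimal for these means = {np.mean(list(mu.values())):.2f})")
```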
Collapse
Affiliation(s)
- Pooya Laamerad
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, Canada
| | - Asmara Awada
- Department of Psychology, Université de Montréal, Montreal, Canada
| | - Christopher C Pack
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, Canada
| | - Shahab Bakhtiari
- Department of Psychology, Université de Montréal, Montreal, Canada
- Mila - Quebec AI Institute, Montreal, Canada
| |
Collapse
|
23
|
Wang A, Sliwinska MW, Watson DM, Smith S, Andrews TJ. Distinct patterns of neural response to faces from different races in humans and deep networks. Soc Cogn Affect Neurosci 2023; 18:nsad059. [PMID: 37837305 PMCID: PMC10634630 DOI: 10.1093/scan/nsad059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 07/27/2023] [Accepted: 10/06/2023] [Indexed: 10/15/2023] Open
Abstract
Social categories such as the race or ethnicity of an individual are typically conveyed by the visual appearance of the face. The aim of this study was to explore how these differences in facial appearance are represented in human and artificial neural networks. First, we compared the similarity of faces from different races using a neural network trained to discriminate identity. We found that the differences between races were most evident in the fully connected layers of the network. Although these layers were also able to predict behavioural judgements of face identity from human participants, performance was biased toward White faces. Next, we measured the neural response in face-selective regions of the human brain to faces from different races in Asian and White participants. We found distinct patterns of response to faces from different races in face-selective regions. We also found that the spatial pattern of response was more consistent across participants for own-race compared to other-race faces. Together, these findings show that faces from different races elicit different patterns of response in human and artificial neural networks. These differences may underlie the ability to make categorical judgements and explain the behavioural advantage for the recognition of own-race faces.
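A minimal sketch of the general approach of comparing face images in a late network layer (an off-the-shelf ImageNet ResNet stands in for the identity-trained network used in the study, and the file paths and group labels are placeholders):

```python
# Minimal sketch (not the authors' pipeline): extract activations from a late
# layer of a pretrained CNN and compare within- vs between-group similarity of
# face images. Image paths below are placeholders.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
feats = torch.nn.Sequential(*list(model.children())[:-1])  # strip the classifier head

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feats(img).flatten().numpy()

def mean_corr(embs_a, embs_b):
    """Average Pearson correlation between two sets of embeddings (self-pairs excluded)."""
    pairs = [(a, b) for a in embs_a for b in embs_b if a is not b]
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

group1 = [embed(p) for p in ["face_a1.jpg", "face_a2.jpg"]]  # placeholder files
group2 = [embed(p) for p in ["face_b1.jpg", "face_b2.jpg"]]
print("within group 1:", mean_corr(group1, group1))
print("between groups:", mean_corr(group1, group2))
```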
Collapse
Affiliation(s)
- Ao Wang
- Department of Psychology, University of York, York YO10 5DD, UK
- Department of Psychology, University of Southampton, Southampton SO17 1BJ, UK
| | - Magdalena W Sliwinska
- Department of Psychology, University of York, York YO10 5DD, UK
- School of Psychology, Liverpool John Moores University, Liverpool L2 2QP, UK
| | - David M Watson
- Department of Psychology, University of York, York YO10 5DD, UK
| | - Sam Smith
- Department of Psychology, University of York, York YO10 5DD, UK
| | | |
Collapse
|
24
|
Sharma S, Vinken K, Livingstone MS. When the whole is only the parts: non-holistic object parts predominate face-cell responses to illusory faces. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.22.558887. [PMID: 37790322 PMCID: PMC10542491 DOI: 10.1101/2023.09.22.558887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Humans are inclined to perceive faces in everyday objects with a face-like configuration. This illusion, known as face pareidolia, is often attributed to a specialized network of 'face cells' in primates. We found that face cells in macaque inferotemporal cortex responded selectively to pareidolia images, but this selectivity did not require a holistic, face-like configuration, nor did it encode human faceness ratings. Instead, it was driven mostly by isolated object parts that are perceived as eyes only within a face-like context. These object parts lack usual characteristics of primate eyes, pointing to the role of lower-level features. Our results suggest that face-cell responses are dominated by local, generic features, unlike primate visual perception, which requires holistic information. These findings caution against interpreting neural activity through the lens of human perception. Doing so could impose human perceptual biases, like seeing faces where none exist, onto our understanding of neural activity.
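A minimal sketch of a standard face-selectivity index of the kind used to characterize such neurons (firing rates are simulated, not the recorded data):

```python
# Minimal sketch (illustrative only): a common face-selectivity index for a
# single neuron, comparing mean responses to faces vs control objects, and the
# same index for pareidolia-evoking objects. Firing rates are simulated.
import numpy as np

rng = np.random.default_rng(2)
resp_faces = rng.poisson(30, size=40)        # spikes/s to real faces (toy values)
resp_objects = rng.poisson(8, size=40)       # matched control objects
resp_pareidolia = rng.poisson(10, size=40)   # pareidolia-evoking objects

def selectivity_index(pref, nonpref):
    """(P - N) / (P + N); 0.33 corresponds to a common 2:1 response criterion."""
    p, n = np.mean(pref), np.mean(nonpref)
    return (p - n) / (p + n)

print("face vs object index:      ", round(selectivity_index(resp_faces, resp_objects), 2))
print("pareidolia vs object index:", round(selectivity_index(resp_pareidolia, resp_objects), 2))
```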
Collapse
Affiliation(s)
- Saloni Sharma
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
| | - Kasper Vinken
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
| | | |
Collapse
|
25
|
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254 DOI: 10.1146/annurev-vision-100120-025301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/10/2023]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
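A minimal sketch of the kind of decoding analysis the review describes, with simulated data standing in for fMRI/MEG/EEG features:

```python
# Minimal sketch: cross-validated classification of stimulus category from
# multichannel neural patterns (data simulated for illustration).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels = 200, 64
y = rng.integers(0, 2, size=n_trials)          # two stimulus categories
X = rng.normal(size=(n_trials, n_channels))
X[y == 1, :5] += 0.5                           # weak category signal in a few channels

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```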
Collapse
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia;
| | - Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia;
| | | |
Collapse
|
26
|
Taubert J, Wally S, Dixson BJ. Preliminary evidence of an increased susceptibility to face pareidolia in postpartum women. Biol Lett 2023; 19:20230126. [PMID: 37700700 PMCID: PMC10498352 DOI: 10.1098/rsbl.2023.0126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 08/24/2023] [Indexed: 09/14/2023] Open
Abstract
As primates, we are hypersensitive to faces and face-like patterns in the visual environment; hence, we often perceive illusory faces in otherwise inanimate objects, such as burnt pieces of toast and the surface of the moon. Although this phenomenon, known as face pareidolia, is a common experience, it is unknown whether our susceptibility to face pareidolia is static across our lifespan or what factors would cause it to change. Given the evidence that behaviour towards face stimuli is modulated by the neuropeptide oxytocin (OT), we reasoned that participants in stages of life associated with high levels of endogenous OT might be more susceptible to face pareidolia than participants in other stages of life. We tested this hypothesis by assessing pareidolia susceptibility in two groups of women: pregnant women (low endogenous OT) and postpartum women (high endogenous OT). We found evidence that postpartum women report seeing face pareidolia more easily than women who are currently pregnant. These data, collected online, suggest that our sensitivity to face-like patterns is not fixed and may change throughout adulthood, providing a crucial proof of concept that requires further research.
Collapse
Affiliation(s)
- Jessica Taubert
- School of Psychology, The University of Queensland, McElwain Building, St Lucia, 4072 Brisbane, Queensland, Australia
| | - Samantha Wally
- School of Psychology, The University of Queensland, McElwain Building, St Lucia, 4072 Brisbane, Queensland, Australia
| | - Barnaby J. Dixson
- School of Psychology, The University of Queensland, McElwain Building, St Lucia, 4072 Brisbane, Queensland, Australia
- Psychology and Social Sciences, The University of Sunshine Coast, Sippy Downs, Australia
| |
Collapse
|
27
|
Wardle SG, Ewing L, Malcolm GL, Paranjape S, Baker CI. Children perceive illusory faces in objects as male more often than female. Cognition 2023; 235:105398. [PMID: 36791506 PMCID: PMC10085858 DOI: 10.1016/j.cognition.2023.105398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 02/02/2023] [Accepted: 02/04/2023] [Indexed: 02/15/2023]
Abstract
Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age we found that like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.
Collapse
Affiliation(s)
- Susan G Wardle
- Laboratory of Brain and Cognition, National Institutes of Health, Bethesda, MD, USA.
| | - Louise Ewing
- School of Psychology, University of East Anglia, UK
| | | | - Sanika Paranjape
- Laboratory of Brain and Cognition, National Institutes of Health, Bethesda, MD, USA; Department of Psychological and Brain Sciences, George Washington University, Washington, DC, USA
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institutes of Health, Bethesda, MD, USA
| |
Collapse
|
28
|
Hadjikhani N, Åsberg Johnels J. Overwhelmed by the man in the moon? Pareidolic objects provoke increased amygdala activation in autism. Cortex 2023; 164:144-151. [PMID: 37209610 DOI: 10.1016/j.cortex.2023.03.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 01/27/2023] [Accepted: 03/28/2023] [Indexed: 05/22/2023]
Abstract
An interesting feature of the primate face detection system results in the perception of illusory faces in objects, or pareidolia. These illusory faces do not per se contain social information, such as eye-gaze or specific identities, yet they activate the cortical brain face-processing network, possibly via the subcortical route, including the amygdala. In autism spectrum disorder (ASD), aversion to eye-contact is commonly reported, and so are alterations in face processing more generally, yet the underlying reasons are not clear. Here we show that in autistic participants (N=37), but not in non-autistic controls (N=34), pareidolic objects increase amygdala activation bilaterally (right amygdala peak: X = 26, Y = -6, Z = -16; left amygdala peak: X = -24, Y = -6, Z = -20). In addition, illusory faces engage the face-processing cortical network significantly more in ASD than in controls. An early imbalance between the excitatory and inhibitory systems in autism, affecting typical brain maturation, may underlie an overresponsive reaction to face configuration and to eye contact. Our data add to the evidence of an oversensitive subcortical face processing system in ASD.
Collapse
Affiliation(s)
- Nouchine Hadjikhani
- Neurolimbic Research, Harvard/MGH Martinos Center for Biomedical Imaging, Boston, MA, USA; Gillberg Neuropsychiatry Centre, University of Gothenburg, Gothenburg, Sweden.
| | - Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Centre, University of Gothenburg, Gothenburg, Sweden; Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Sweden
| |
Collapse
|
29
|
Bracci S, Mraz J, Zeman A, Leys G, Op de Beeck H. The representational hierarchy in human and artificial visual systems in the presence of object-scene regularities. PLoS Comput Biol 2023; 19:e1011086. [PMID: 37115763 PMCID: PMC10171658 DOI: 10.1371/journal.pcbi.1011086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Revised: 05/10/2023] [Accepted: 04/09/2023] [Indexed: 04/29/2023] Open
Abstract
Human vision is still largely unexplained. Computer vision made impressive progress on this front, but it is still unclear to what extent artificial neural networks approximate human object vision at the behavioral and neural levels. Here, we investigated whether machine object vision mimics the representational hierarchy of human object vision with an experimental design that allows testing within-domain representations for animals and scenes, as well as across-domain representations reflecting their real-world contextual regularities such as animal-scene pairs that often co-occur in the visual environment. We found that DCNNs trained on object recognition acquire representations, in their late processing stage, that closely capture human conceptual judgements about the co-occurrence of animals and their typical scenes. Likewise, the DCNNs' representational hierarchy shows surprising similarities with the representational transformations emerging in domain-specific ventrotemporal areas up to domain-general frontoparietal areas. Despite these remarkable similarities, the underlying information processing differs. The ability of neural networks to learn a human-like high-level conceptual representation of object-scene co-occurrence depends upon the amount of object-scene co-occurrence present in the image set, thus highlighting the fundamental role of training history. Further, although mid/high-level DCNN layers represent the category division for animals and scenes as observed in VTC, their information content shows reduced domain-specific representational richness. To conclude, by testing within- and between-domain selectivity while manipulating contextual regularities, we reveal unknown similarities and differences in the information processing strategies employed by human and artificial visual systems.
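A minimal sketch of the representational similarity logic underlying such model-brain-behavior comparisons (simulated activations and judgements, not the study's data or code):

```python
# Minimal sketch of representational similarity analysis (RSA): build a model
# representational dissimilarity matrix (RDM) from DCNN layer activations and
# correlate it with an RDM derived from human judgements. All data simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_stimuli, n_units = 40, 512

layer_acts = rng.normal(size=(n_stimuli, n_units))            # one row per stimulus
human_dissim = rng.random(n_stimuli * (n_stimuli - 1) // 2)   # stand-in for a judgement RDM

model_rdm = pdist(layer_acts, metric="correlation")           # 1 - Pearson r per stimulus pair
rho, p = spearmanr(model_rdm, human_dissim)                   # second-order correlation
print(f"model-human RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```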
Collapse
Affiliation(s)
- Stefania Bracci
- Center for Mind/Brain Sciences-CIMeC, University of Trento, Rovereto, Italy
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
| | - Jakob Mraz
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
| | - Astrid Zeman
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
| | - Gaëlle Leys
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
| | - Hans Op de Beeck
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
| |
Collapse
|
30
|
Long H, Peluso N, Baker CI, Japee S, Taubert J. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 2023; 13:5383. [PMID: 37012369 PMCID: PMC10070342 DOI: 10.1038/s41598-023-32659-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 03/30/2023] [Indexed: 04/05/2023] Open
Abstract
Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD), a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high-arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and real world in studies of expression recognition.
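A minimal sketch of the two summary measures described above, modal labels per image and the intensity-genuineness correlation (ratings simulated, not the WFD data):

```python
# Minimal sketch (not the WFD analysis code): take the modal emotion label per
# image across raters and correlate mean intensity with mean genuineness.
import numpy as np
from collections import Counter
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
emotions = ["happy", "angry", "sad", "fearful", "neutral"]
n_images, n_raters = 100, 20

labels = rng.choice(emotions, size=(n_images, n_raters))
intensity = rng.uniform(1, 7, size=(n_images, n_raters))
genuineness = intensity + rng.normal(0, 1, size=(n_images, n_raters))  # toy positive link

modal_label = [Counter(row).most_common(1)[0][0] for row in labels]
r, p = pearsonr(intensity.mean(axis=1), genuineness.mean(axis=1))
print("first five modal labels:", modal_label[:5])
print(f"intensity-genuineness correlation: r = {r:.2f}, p = {p:.3g}")
```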
Collapse
Affiliation(s)
- Houqiu Long
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
| | - Natalie Peluso
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
| | - Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
| | - Jessica Taubert
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia.
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA.
| |
Collapse
|
31
|
Pareidolic faces receive prioritized attention in the dot-probe task. Atten Percept Psychophys 2023; 85:1106-1126. [PMID: 36918509 DOI: 10.3758/s13414-023-02685-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/20/2023] [Indexed: 03/16/2023]
Abstract
Face pareidolia occurs when random or ambiguous inanimate objects are perceived as faces. While real faces automatically receive prioritized attention compared with nonface objects, it is unclear whether pareidolic faces similarly receive special attention. We hypothesized that, given the evolutionary importance of broadly detecting animacy, pareidolic faces may have enough faceness to activate a broad face template, triggering prioritized attention. To test this hypothesis, and to explore where along the faceness continuum pareidolic faces fall, we conducted a series of dot-probe experiments in which we paired pareidolic faces with other images directly competing for attention: objects, animal faces, and human faces. We found that pareidolic faces elicited more prioritized attention than objects, a process that was disrupted by inversion, suggesting this prioritized attention was unlikely to be driven by low-level features. However, unexpectedly, pareidolic faces received more privileged attention compared with animal faces and showed similar prioritized attention to human faces. This attentional efficiency may be due to pareidolic faces being perceived as not only face-like, but also as human-like, and having larger facial features (eyes and mouths) compared with real faces. Together, our findings suggest that pareidolic faces are automatically attentionally privileged, similar to human faces. Our findings are consistent with the proposal of a highly sensitive broad face detection system that is activated by pareidolic faces, triggering false alarms (i.e., illusory faces), which, evolutionarily, are less detrimental than missing potentially relevant signals (e.g., conspecific or heterospecific threats). In sum, pareidolic faces appear "special" in attracting attention.
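A minimal sketch of the standard dot-probe attention-bias score implied by this design (simulated reaction times, not the authors' data or code):

```python
# Minimal sketch of a dot-probe bias score: mean RT when the probe replaces the
# competing image minus mean RT when it replaces the pareidolic face; positive
# values indicate prioritized attention to the face. RTs below are simulated.
import numpy as np

rng = np.random.default_rng(6)
rt_probe_at_face = rng.normal(480, 40, size=100)     # ms, probe replaces pareidolic face
rt_probe_at_object = rng.normal(500, 40, size=100)   # ms, probe replaces paired object

bias = rt_probe_at_object.mean() - rt_probe_at_face.mean()
print(f"attention bias toward pareidolic faces: {bias:.1f} ms")
```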
Collapse
|
32
|
Behavioral and physiological sensitivity to natural sick faces. Brain Behav Immun 2023; 110:195-211. [PMID: 36893923 DOI: 10.1016/j.bbi.2023.03.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 03/03/2023] [Accepted: 03/03/2023] [Indexed: 03/11/2023] Open
Abstract
The capacity to rapidly detect and avoid sick people may be adaptive. Given that faces are reliably available, as well as rapidly detected and processed, they may provide health information that influences social interaction. Prior studies used faces that were manipulated to appear sick (e.g., editing photos, inducing inflammatory response); however, responses to naturally sick faces remain largely unexplored. We tested whether adults detected subtle cues of genuine, acute, potentially contagious illness in face photos compared to the same individuals when healthy. We tracked illness symptoms and severity with the Sickness Questionnaire and Common Cold Questionnaire. We also checked that sick and healthy photos were matched on low-level features. We found that participants (N = 109) rated sick faces, compared to healthy faces, as sicker, more dangerous, and eliciting more unpleasant feelings. Participants (N = 90) rated sick faces as more likely to be avoided, more tired, and more negative in expression than healthy faces. In a passive-viewing eye-tracking task, participants (N = 50) looked longer at healthy than sick faces, especially the eye region, suggesting people may be more drawn to healthy conspecifics. When making approach-avoidance decisions, participants (N = 112) had greater pupil dilation to sick than healthy faces, and more pupil dilation was associated with greater avoidance, suggesting elevated arousal to threat. Across all experiments, participants' behaviors correlated with the degree of sickness, as reported by the face donors, suggesting a nuanced, fine-tuned sensitivity. Together, these findings suggest that humans may detect subtle threats of contagion from sick faces, which may facilitate illness avoidance. By better understanding how humans naturally avoid illness in conspecifics, we may identify what information is used and ultimately improve public health.
Collapse
|
33
|
Alilović J, Lampers E, Slagter HA, van Gaal S. Illusory object recognition is either perceptual or cognitive in origin depending on decision confidence. PLoS Biol 2023; 21:e3002009. [PMID: 36862734 PMCID: PMC10013920 DOI: 10.1371/journal.pbio.3002009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 03/14/2023] [Accepted: 01/20/2023] [Indexed: 03/03/2023] Open
Abstract
We occasionally misinterpret ambiguous sensory input or report a stimulus when none is presented. It is unknown whether such errors have a sensory origin and reflect true perceptual illusions, or whether they have a more cognitive origin (e.g., are due to guessing), or both. When participants performed an error-prone and challenging face/house discrimination task, multivariate electroencephalography (EEG) analyses revealed that during decision errors (e.g., mistaking a face for a house), sensory stages of visual information processing initially represent the presented stimulus category. Crucially however, when participants were confident in their erroneous decision, so when the illusion was strongest, this neural representation flipped later in time and reflected the incorrectly reported percept. This flip in neural pattern was absent for decisions that were made with low confidence. This work demonstrates that decision confidence arbitrates between perceptual decision errors, which reflect true illusions of perception, and cognitive decision errors, which do not.
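A minimal sketch in the spirit of the time-resolved multivariate EEG analysis described above, splitting decoding by decision confidence (all data simulated, not the authors' pipeline):

```python
# Minimal sketch: train a classifier at each time point on simulated multichannel
# data and compare decoding of the presented category for two confidence levels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)            # presented category (e.g., face/house)
confidence = rng.integers(0, 2, size=n_trials)   # 0 = low, 1 = high (toy labels)
X[y == 1, :4, 10:] += 0.4                        # inject a late category signal

def timecourse(X, y):
    return np.array([cross_val_score(LogisticRegression(max_iter=1000),
                                     X[:, :, t], y, cv=5).mean()
                     for t in range(X.shape[2])])

for level in (0, 1):
    sel = confidence == level
    acc = timecourse(X[sel], y[sel])
    print(f"confidence={level}: peak decoding accuracy {acc.max():.2f}")
```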
Collapse
Affiliation(s)
- Josipa Alilović
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
| | - Eline Lampers
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
| | - Heleen A. Slagter
- Department of Applied and Experimental Psychology, Vrije Universiteit Amsterdam, the Netherlands
- Institute for Brain and Behavior, Vrije Universiteit Amsterdam, the Netherlands
| | - Simon van Gaal
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
34
|
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 2023; 12:e82580. [PMID: 36847339 PMCID: PMC10038662 DOI: 10.7554/elife.82580] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 02/25/2023] [Indexed: 03/01/2023] Open
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
Collapse
Affiliation(s)
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
| | - Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Adam H Rockter
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Charles Y Zheng
- Machine Learning Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| |
Collapse
|
35
|
Palmisano A, Chiarantoni G, Bossi F, Conti A, D'Elia V, Tagliente S, Nitsche MA, Rivolta D. Face pareidolia is enhanced by 40 Hz transcranial alternating current stimulation (tACS) of the face perception network. Sci Rep 2023; 13:2035. [PMID: 36739325 PMCID: PMC9899232 DOI: 10.1038/s41598-023-29124-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Accepted: 01/31/2023] [Indexed: 02/05/2023] Open
Abstract
Pareidolia refers to the perception of ambiguous sensory patterns as carrying a specific meaning. In its most common form, pareidolia involves human-like facial features, where random objects or patterns are illusorily recognized as faces. The current study investigated the neurophysiological correlates of face pareidolia via transcranial alternating current stimulation (tACS). tACS was delivered at gamma (40 Hz) frequency over critical nodes of the "face perception" network (i.e., right lateral occipito-temporal and left prefrontal cortex) of 75 healthy participants while they completed four face perception tasks ('Mooney test' for faces, 'Toast test', 'Noise pareidolia test', 'Pareidolia task') and an object perception task ('Mooney test' for objects). In this single-blind, sham-controlled, between-subjects study, participants received 35 min of either Sham, Online (40Hz-tACS_ON), or Offline (40Hz-tACS_PRE) stimulation. Results showed that face pareidolia was causally enhanced by 40Hz-tACS_PRE in the Mooney test for faces, in which, as compared to sham, participants more often misperceived scrambled stimuli as faces. In addition, as compared to sham, participants receiving 40Hz-tACS_PRE showed similar reaction times (RTs) when perceiving illusory faces and correctly recognizing noise stimuli in the Toast test, thus not exhibiting hesitancy in identifying faces where there were none. Also, 40Hz-tACS_ON induced slower rejections of face pareidolia responses in the Noise pareidolia test. The current study indicates that 40 Hz tACS can enhance pareidolic illusions in healthy individuals and, thus, that high-frequency (i.e., gamma band) oscillations are critical in forming coherent and meaningful visual perception.
Collapse
Affiliation(s)
- Annalisa Palmisano
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy.
| | - Giulio Chiarantoni
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
| | | | - Alessio Conti
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
| | - Vitiana D'Elia
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
| | - Serena Tagliente
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
| | - Michael A Nitsche
- Department of Psychology and Neurosciences, Leibniz Research Center for Working Environment and Human Factors (IfADo), Dortmund, Germany; Department of Neurology, University Medical Hospital Bergmannsheil, Bochum, Germany
| | - Davide Rivolta
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy; School of Psychology, University of East London (UEL), London, UK
| |
Collapse
|
36
|
Leung FYN, Stojanovik V, Micai M, Jiang C, Liu F. Emotion recognition in autism spectrum disorder across age groups: A cross-sectional investigation of various visual and auditory communicative domains. Autism Res 2023; 16:783-801. [PMID: 36727629 DOI: 10.1002/aur.2896] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 01/19/2023] [Indexed: 02/03/2023]
Abstract
Previous research on emotion processing in autism spectrum disorder (ASD) has predominantly focused on human faces and speech prosody, with little attention paid to other domains such as nonhuman faces and music. In addition, emotion processing in different domains was often examined in separate studies, making it challenging to evaluate whether emotion recognition difficulties in ASD generalize across domains and age cohorts. The present study investigated: (i) the recognition of basic emotions (angry, scared, happy, and sad) across four domains (human faces, face-like objects, speech prosody, and song) in 38 autistic and 38 neurotypical (NT) children, adolescents, and adults in a forced-choice labeling task, and (ii) the impact of pitch and visual processing profiles on this ability. Results showed similar recognition accuracy between the ASD and NT groups across age groups for all domains and emotion types, although processing speed was slower in the ASD compared to the NT group. Age-related differences were seen in both groups, which varied by emotion, domain, and performance index. Visual processing style was associated with facial emotion recognition speed and pitch perception ability with auditory emotion recognition in the NT group but not in the ASD group. These findings suggest that autistic individuals may employ different emotion processing strategies compared to NT individuals, and that emotion recognition difficulties as manifested by slower response times may result from a generalized, rather than a domain-specific underlying mechanism that governs emotion recognition processes across domains in ASD.
Collapse
Affiliation(s)
- Florence Y N Leung
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK; Department of Psychology, University of Bath, Bath, UK
| | - Vesna Stojanovik
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Martina Micai
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
| | - Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| |
Collapse
|
37
|
Bracci S, Op de Beeck HP. Understanding Human Object Vision: A Picture Is Worth a Thousand Representations. Annu Rev Psychol 2023; 74:113-135. [PMID: 36378917 DOI: 10.1146/annurev-psych-032720-041031] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Objects are the core meaningful elements in our visual environment. Classic theories of object vision focus upon object recognition and are elegant and simple. Some of their proposals still stand, yet the simplicity is gone. Recent evolutions in behavioral paradigms, neuroscientific methods, and computational modeling have allowed vision scientists to uncover the complexity of the multidimensional representational space that underlies object vision. We review these findings and propose that the key to understanding this complexity is to relate object vision to the full repertoire of behavioral goals that underlie human behavior, running far beyond object recognition. There might be no such thing as core object recognition, and if it exists, then its importance is more limited than traditionally thought.
Collapse
Affiliation(s)
- Stefania Bracci
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy;
| | - Hans P Op de Beeck
- Leuven Brain Institute, Research Unit Brain & Cognition, KU Leuven, Leuven, Belgium;
| |
Collapse
|
38
|
Corriveau A, Kidder A, Teichmann L, Wardle SG, Baker CI. Sustained neural representations of personally familiar people and places during cued recall. Cortex 2023; 158:71-82. [PMID: 36459788 PMCID: PMC9840701 DOI: 10.1016/j.cortex.2022.08.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/28/2022] [Accepted: 08/29/2022] [Indexed: 01/18/2023]
Abstract
The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.
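A minimal sketch of a temporal generalization analysis of the kind used to test whether recalled representations are stable over time (MEG data simulated, not the authors' code):

```python
# Minimal sketch: train a classifier at one time point and test it at every
# other time point; high off-diagonal accuracy indicates a stable representation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
n_trials, n_sensors, n_times = 120, 50, 40
y = rng.integers(0, 2, size=n_trials)          # e.g., person vs place recall
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[y == 1, :5, 15:] += 0.6                      # sustained signal after the cue (toy)

train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)  # simple split
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = SVC(kernel="linear").fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

print("mean off-diagonal generalization:", round(gen[~np.eye(n_times, dtype=bool)].mean(), 2))
```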
Collapse
Affiliation(s)
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychology, The University of Chicago, Chicago, IL 60637, USA.
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| |
Collapse
|
39
|
Walker DL, Palermo R, Callis Z, Gignac GE. The association between intelligence and face processing abilities: A conceptual and meta-analytic review. INTELLIGENCE 2023. [DOI: 10.1016/j.intell.2022.101718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
40
|
Do chimpanzees see a face on Mars? A search for face pareidolia in chimpanzees. Anim Cogn 2022; 26:885-905. [PMID: 36583802 DOI: 10.1007/s10071-022-01739-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 12/05/2022] [Accepted: 12/15/2022] [Indexed: 12/31/2022]
Abstract
We sometimes perceive meaningful patterns or images in random arrangements of colors and shapes. This phenomenon is called pareidolia and has recently been studied intensively, especially face pareidolia. In contrast, there are few comparative-cognitive studies on face pareidolia with nonhuman primates. This study explored behavioral evidence for face pareidolia in chimpanzees using visual search and matching tasks. Faces are processed in a configural manner, and their perception and recognition are hampered by inversion and misalignment of top and bottom parts. We investigated whether the same effect occurs in a visual search for face-like objects. The results showed an effect of misalignment. On the other hand, consistent results were not obtained with photographs of fruits. When only the top or bottom half of the face-like object was presented, chimpanzees showed better performance in the top-half condition, suggesting the importance of the eye area in face pareidolia. In positive-control experiments, chimpanzees completed the same tasks with human faces, and human participants completed them with face-like objects and fruits. As a result, chimpanzees searched inefficiently for inverted and misaligned human faces, and humans searched inefficiently for manipulated face-like objects. Finally, to examine the role of face awareness, we tested matching a human face to a face-like object in chimpanzees but obtained no substantial evidence that they saw the face-like object as a "face." Based on these results, we discuss the extent and limits of face pareidolia in chimpanzees.
Collapse
|
41
|
Lee J, Jo J, Lee B, Lee JH, Yoon S. Brain-inspired Predictive Coding Improves the Performance of Machine Challenging Tasks. Front Comput Neurosci 2022; 16:1062678. [PMID: 36465966 PMCID: PMC9709416 DOI: 10.3389/fncom.2022.1062678] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 10/28/2022] [Indexed: 09/19/2023] Open
Abstract
Backpropagation has been regarded as the most favorable algorithm for training artificial neural networks. However, it has been criticized for its biological implausibility because its learning mechanism contradicts how the human brain learns. Although backpropagation has achieved super-human performance in various machine learning applications, it often shows limited performance on specific tasks. We collectively refer to such tasks as machine-challenging tasks (MCTs) and aimed to investigate methods to enhance machine learning for MCTs. Specifically, we start with a natural question: Can a learning mechanism that mimics the human brain lead to the improvement of MCT performance? We hypothesized that a learning mechanism replicating the human brain is effective for tasks where machine intelligence struggles. Multiple experiments corresponding to specific types of MCTs where machine intelligence has room to improve performance were performed using predictive coding, a more biologically plausible learning algorithm than backpropagation. This study regarded incremental learning, long-tailed recognition, and few-shot recognition as representative MCTs. With extensive experiments, we examined the effectiveness of predictive coding, which robustly outperformed backpropagation-trained networks on the MCTs. We demonstrated that predictive coding-based incremental learning alleviates the effect of catastrophic forgetting. Next, predictive coding-based learning mitigates the classification bias in long-tailed recognition. Finally, we verified that the network trained with predictive coding could correctly predict corresponding targets with few samples. We analyzed the experimental results by drawing analogies between the properties of predictive coding networks and those of the human brain and discussing the potential of predictive coding networks in general machine learning.
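A minimal sketch of a generic Rao-Ballard-style predictive coding update, included only to illustrate the class of learning rule the paper builds on (the specific architecture, layer sizes, and learning rates here are assumptions, not the authors' implementation):

```python
# Minimal sketch: latent causes are inferred by minimizing prediction error, and
# weights are then updated locally from the same error signal, with no
# backpropagated gradients. All sizes and data are toy choices.
import numpy as np

rng = np.random.default_rng(9)
n_input, n_latent = 16, 4
W = rng.normal(scale=0.1, size=(n_input, n_latent))   # generative weights

def infer(x, W, steps=50, lr_r=0.1):
    r = np.zeros(n_latent)
    for _ in range(steps):
        error = x - W @ r                       # prediction error at the input layer
        r += lr_r * (W.T @ error - 0.01 * r)    # error-driven inference with a weak prior
    return r

lr_w = 0.01
data = rng.normal(size=(500, n_input))
for x in data:
    r = infer(x, W)
    error = x - W @ r
    W += lr_w * np.outer(error, r)              # local, Hebbian-like weight update

recon_err = np.mean((data[-1] - W @ infer(data[-1], W)) ** 2)
print("final mean reconstruction error:", round(recon_err, 3))
```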
Collapse
Affiliation(s)
- Jangho Lee
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
| | - Jeonghee Jo
- Institute of New Media and Communications, Seoul National University, Seoul, South Korea
| | - Byounghwa Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
| | - Jung-Hoon Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
| | - Sungroh Yoon
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, South Korea
| |
Collapse
|
42
|
Bellemare-Pepin A, Harel Y, O’Byrne J, Mageau G, Dietrich A, Jerbi K. Processing visual ambiguity in fractal patterns: Pareidolia as a sign of creativity. iScience 2022; 25:105103. [PMID: 36164655 PMCID: PMC9508550 DOI: 10.1016/j.isci.2022.105103] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 06/18/2022] [Accepted: 09/05/2022] [Indexed: 11/21/2022] Open
Abstract
Creativity is a highly valued and beneficial skill that empirical research typically probes using "divergent thinking" (DT) tasks such as problem solving and novel idea generation. Here, in contrast, we examine the perceptual aspect of creativity by asking whether creative individuals are more likely to perceive recognizable forms in ambiguous stimuli, a phenomenon known as pareidolia. To this end, we designed a visual task in which participants were asked to identify as many recognizable forms as possible in cloud-like fractal images. We found that pareidolic perceptions arise more often and more rapidly in highly creative individuals. Furthermore, highly creative individuals report pareidolia across a broader range of image contrasts and fractal dimensions than do less creative individuals. These results extend the established body of work on DT by introducing divergent perception as a complementary manifestation of the creative mind, thus clarifying the perception-creation link while opening new paths for studying creative behavior in humans.
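A minimal sketch of a box-counting estimate of fractal dimension, the kind of image statistic varied across such cloud-like stimuli (the thresholding step and box sizes here are assumptions, not the study's stimulus-generation code):

```python
# Minimal sketch: estimate the fractal dimension of a binary image by counting
# occupied boxes at several scales and fitting a log-log slope.
import numpy as np

def box_count(img, size):
    """Number of size x size boxes containing at least one 'on' pixel."""
    h, w = img.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if img[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, s) for s in sizes]
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(10)
noise_img = rng.random((256, 256)) > 0.5   # stand-in for a thresholded fractal image
print("estimated fractal dimension:", round(fractal_dimension(noise_img), 2))
```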
Collapse
Affiliation(s)
- Antoine Bellemare-Pepin
- Department of Psychology, Université de Montréal, Montréal, H2V 2S9 Québec, Canada
- Department of Music, Concordia University, Montréal, H4B1R6 Québec, Canada
| | - Yann Harel
- Department of Psychology, Université de Montréal, Montréal, H2V 2S9 Québec, Canada
| | - Jordan O’Byrne
- Department of Psychology, Université de Montréal, Montréal, H2V 2S9 Québec, Canada
| | - Geneviève Mageau
- Department of Psychology, Université de Montréal, Montréal, H2V 2S9 Québec, Canada
| | - Arne Dietrich
- Department of Psychology, American University of Beirut, Beirut 1107-2020, Lebanon
| | - Karim Jerbi
- Department of Psychology, Université de Montréal, Montréal, H2V 2S9 Québec, Canada
- MILA (Quebec Artificial Intelligence Institute), Montreal, Quebec, Canada
- UNIQUE Center (Quebec Neuro-AI Research Center), Montreal, Quebec, Canada
| |
Collapse
|
43
|
Kapsetaki ME, Zeki S. Human faces and face-like stimuli are more memorable. Psych J 2022; 11:715-719. [PMID: 35666065 PMCID: PMC9796299 DOI: 10.1002/pchj.564] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 04/15/2022] [Indexed: 01/01/2023]
Abstract
We have previously suggested a distinction in the brain processes governing biological and artifactual stimuli. One of the best examples of the biological category consists of human faces, the perception of which appears to be determined by inherited mechanisms or ones rapidly acquired after birth. In extending this work, we inquire here whether there is a higher memorability for images of human faces and whether memorability declines with increasing departure from human faces; if so, the implication would add to the growing evidence of differences in the processing of biological versus artifactual stimuli. To do so, we used images and memorability scores from a large data set of 58,741 images to compare the relative memorability of real human faces versus buildings, and then extended this to a comparison of real human faces with five image categories that differ in their degree of resemblance to a real human face. Our findings show that, in general, when we compare the biological category of faces to the artifactual category of buildings, the former is more memorable. Furthermore, there is a gradient in which the more an image resembles a real human face, the more memorable it is. Thus, the previously identified differences in biological and artifactual images extend to the field of memory.
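A minimal sketch of the two comparisons described above, a face-versus-building contrast and a resemblance-ordered gradient (the memorability scores and category names below are simulated and assumed, not the published data set):

```python
# Minimal sketch: compare mean memorability across image categories ordered by
# assumed resemblance to a real human face. Scores are simulated in [0, 1].
import numpy as np
from scipy.stats import ttest_ind, spearmanr

rng = np.random.default_rng(11)
categories = ["human face", "face-like pareidolia", "cartoon face",
              "face outline", "scrambled face", "building"]           # assumed ordering
scores = {c: np.clip(rng.normal(0.85 - 0.05 * i, 0.08, 200), 0, 1)
          for i, c in enumerate(categories)}                           # toy memorability scores

t, p = ttest_ind(scores["human face"], scores["building"])
print(f"faces vs buildings: t = {t:.1f}, p = {p:.2g}")

# Gradient: does memorability decline as resemblance to a face decreases?
mean_by_rank = [scores[c].mean() for c in categories]
rho, _ = spearmanr(range(len(categories)), mean_by_rank)
print("rank correlation with face-resemblance order:", round(rho, 2))
```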
Collapse
Affiliation(s)
- Marianna E. Kapsetaki
- Laboratory of Neurobiology, Department of Cell & Developmental Biology, University College London, London, UK
| | - Semir Zeki
- Laboratory of Neurobiology, Department of Cell & Developmental Biology, University College London, London, UK
| |
Collapse
|
44
|
Rahman M, van Boxtel JJ. Seeing faces where there are none: Pareidolia correlates with age but not autism traits. Vision Res 2022; 199:108071. [DOI: 10.1016/j.visres.2022.108071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 04/28/2022] [Accepted: 05/09/2022] [Indexed: 01/22/2023]
|
45
|
Taubert J, Wardle SG, Tardiff CT, Patterson A, Yu D, Baker CI. Clutter Substantially Reduces Selectivity for Peripheral Faces in the Macaque Brain. J Neurosci 2022; 42:6739-6750. [PMID: 35868861 PMCID: PMC9436017 DOI: 10.1523/jneurosci.0232-22.2022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 04/29/2022] [Accepted: 06/06/2022] [Indexed: 11/21/2022] Open
Abstract
According to a prominent view in neuroscience, visual stimuli are coded by discrete cortical networks that respond preferentially to specific categories, such as faces or objects. However, it remains unclear how these category-selective networks respond when viewing conditions are cluttered, i.e., when there is more than one stimulus in the visual field. Here, we asked three questions: (1) Does clutter reduce the response and selectivity for faces as a function of retinal location? (2) Is the preferential response to faces uniform across the visual field? And (3) Does the ventral visual pathway encode information about the location of cluttered faces? We used fMRI to measure the response of the face-selective network in awake, fixating macaques (two female, five male). Across a series of four experiments, we manipulated the presence and absence of clutter, as well as the location of the faces relative to the fovea. We found that clutter reduces the response to peripheral faces. When presented in isolation, without clutter, the selectivity for faces is fairly uniform across the visual field, but, when clutter is present, there is a marked decrease in the selectivity for peripheral faces. We also found no evidence of a contralateral visual field bias when faces were presented in clutter. Nonetheless, multivariate analyses revealed that the location of cluttered faces could be decoded from the multivoxel response of the face-selective network. Collectively, these findings demonstrate that clutter blunts the selectivity of the face-selective network to peripheral faces, although information about their retinal location is retained.SIGNIFICANCE STATEMENT Numerous studies that have measured brain activity in macaques have found visual regions that respond preferentially to faces. Although these regions are thought to be essential for social behavior, their responses have typically been measured while faces were presented in isolation, a situation atypical of the real world. How do these regions respond when faces are presented with other stimuli? We report that, when clutter is present, the preferential response to foveated faces is spared but preferential response to peripheral faces is reduced. Our results indicate that the presence of clutter changes the response of the face-selective network.
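A minimal sketch of the kind of selectivity comparison the study reports, computed separately by visual-field location and clutter condition (responses are simulated toy values, not the fMRI data):

```python
# Minimal sketch: face selectivity of an ROI as (face - object) / (face + object)
# on the mean response, for foveal vs peripheral faces with and without clutter.
import numpy as np

rng = np.random.default_rng(12)
conditions = {
    ("fovea", "isolated"):    (2.0, 0.8),   # (mean face, mean object) response, toy values
    ("periphery", "isolated"): (1.6, 0.7),
    ("fovea", "clutter"):     (1.8, 0.9),
    ("periphery", "clutter"): (1.0, 0.9),   # blunted peripheral selectivity under clutter
}

for (loc, clut), (f_mu, o_mu) in conditions.items():
    faces = rng.normal(f_mu, 0.3, 60)       # simulated % signal change across trials
    objects = rng.normal(o_mu, 0.3, 60)
    sel = (faces.mean() - objects.mean()) / (faces.mean() + objects.mean())
    print(f"{loc:9s} {clut:8s} selectivity = {sel:.2f}")
```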
Collapse
Affiliation(s)
- Jessica Taubert
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
- School of Psychology, The University of Queensland, Brisbane, Queensland 4072, Australia
| | - Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
| | - Clarissa T Tardiff
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
| | - Amanda Patterson
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
| | - David Yu
- Neurophysiology Imaging Facility, National Institutes of Health, Bethesda, Maryland 20814
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
| |
Collapse
|
46
|
Laurence S, Baker KA, Proietti VM, Mondloch CJ. What happens to our representation of identity as familiar faces age? Evidence from priming and identity aftereffects. Br J Psychol 2022; 113:677-695. [PMID: 35277854 PMCID: PMC9544931 DOI: 10.1111/bjop.12560] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 02/07/2022] [Indexed: 11/28/2022]
Abstract
Matching identity in images of unfamiliar faces is error prone, but we can easily recognize highly variable images of familiar faces - even images taken decades apart. Recent theoretical development based on computational modelling can account for how we recognize extremely variable instances of the same identity. We provide complementary behavioural data by examining older adults' representation of older celebrities who were also famous when young. In Experiment 1, participants completed a long-lag repetition priming task in which primes and test stimuli were the same age or different ages. In Experiment 2, participants completed an identity aftereffects task in which the adapting stimulus was an older or young photograph of one celebrity and the test stimulus was a morph between the adapting identity and a different celebrity; the adapting stimulus was the same age as the test stimulus on some trials (e.g., both old) or a different age (e.g., adapter young, test stimulus old). The magnitudes of priming and identity aftereffects were not influenced by whether the prime or adapting stimulus was the same age as or a different age from the test face. Collectively, our findings suggest that humans have one common mental representation for a familiar face (e.g., Paul McCartney) that incorporates visual changes across decades, rather than multiple age-specific representations. These findings make novel predictions for state-of-the-art algorithms (e.g., Deep Convolutional Neural Networks).
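As a rough illustration of the morphing manipulation described above, the sketch below builds an identity continuum by linearly weighting two images. Real morphing software also warps facial landmarks, so this only shows the weighting logic; the `morph` function and the image arrays are hypothetical stand-ins.

```python
# Minimal sketch, assuming two aligned grayscale face images of equal size:
# a morph continuum defined by a linear blend between identity A and identity B.
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, weight_b: float) -> np.ndarray:
    """Return a blend containing (1 - weight_b) of identity A and weight_b of identity B."""
    return (1.0 - weight_b) * face_a + weight_b * face_b

face_a = np.random.rand(256, 256)   # stand-in for a photo of celebrity A
face_b = np.random.rand(256, 256)   # stand-in for a photo of celebrity B

# Seven steps from pure identity A (weight 0.0) to pure identity B (weight 1.0).
continuum = [morph(face_a, face_b, w) for w in np.linspace(0.0, 1.0, 7)]
```

In the aftereffect logic, adapting to identity A shifts the perceived midpoint of such a continuum toward B; the key finding above is that the size of that shift did not depend on whether the adapter and test were the same age.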
Collapse
Affiliation(s)
- Sarah Laurence
- School of Psychology & Counselling, Open University, Milton Keynes, UK
| | - Kristen A. Baker
- Department of Psychology, Brock University, St. Catharines, Ontario, Canada
| | | | - Catherine J. Mondloch
- Department of Psychology, Brock University, St. Catharines, Ontario, Canada
| |
Collapse
|
47
|
Moshel ML, Robinson AK, Carlson TA, Grootswagers T. Are you for real? Decoding realistic AI-generated faces from neural activity. Vision Res 2022; 199:108079. [PMID: 35749833 DOI: 10.1016/j.visres.2022.108079] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 05/30/2022] [Accepted: 06/06/2022] [Indexed: 11/17/2022]
Abstract
Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. We found that we could reliably decode AI-generated faces from people's neural activity. However, while at a group level people performed near chance when classifying real and realistic fake faces, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key in determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at https://osf.io/n2z73/.
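A toy numerical illustration of the behavioural pattern reported above (near-chance overall accuracy with the two labels tending to be interchanged) could look like the following; the simulated responses and the 60% label-flip rate are arbitrary assumptions for demonstration only.

```python
# Toy sketch: accuracy near (or below) chance with systematically swapped labels
# shows up as dominant off-diagonal cells in the confusion matrix.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, size=200)        # 0 = real face, 1 = AI-generated face
flip = rng.random(200) < 0.6                # trials on which the response label is swapped
responses = np.where(flip, 1 - truth, truth)

print("accuracy:", accuracy_score(truth, responses))  # hovers around 0.4
print(confusion_matrix(truth, responses))             # off-diagonal cells dominate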
Collapse
Affiliation(s)
- Michoel L Moshel
- School of Psychology, University of Sydney, NSW, Australia; School of Psychology, Macquarie University, NSW, Australia.
| | - Amanda K Robinson
- School of Psychology, University of Sydney, NSW, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
| | | | - Tijl Grootswagers
- School of Psychology, University of Sydney, NSW, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
| |
Collapse
|
48
|
Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. [PMID: 35700944 DOI: 10.1016/j.neuroimage.2022.119374] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 06/03/2022] [Accepted: 06/10/2022] [Indexed: 10/18/2022] Open
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics with which different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses representing different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that each captured one dimension of face information. Three significant findings were observed. First, the extraction of facial emotion occurred before that of facial identity and lasted for a long time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, different dimensions of face information exhibited representational stability during different periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide powerful support for computational models of emotional face perception.
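The decoding-plus-RSA pipeline summarized above can be sketched compactly: build a neural RDM from pairwise decoding accuracies, then correlate its upper triangle with a model RDM coding one stimulus dimension. The snippet below is a minimal sketch of that idea; the simulated ERP array, the number of conditions, and the choice of a linear SVM with 5-fold cross-validation are illustrative assumptions, not the authors' exact parameters.

```python
# Minimal sketch: pairwise SVM decoding to build a neural RDM at one time point,
# then RSA (Spearman correlation) against a model RDM for one face dimension.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_conditions, n_trials, n_channels = 8, 40, 64
erp = rng.standard_normal((n_conditions, n_trials, n_channels))  # simulated ERP amplitudes

# Neural RDM: cross-validated decoding accuracy for every pair of face conditions.
neural_rdm = np.zeros((n_conditions, n_conditions))
for i in range(n_conditions):
    for j in range(i + 1, n_conditions):
        X = np.vstack([erp[i], erp[j]])
        y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        neural_rdm[i, j] = neural_rdm[j, i] = acc

# Model RDM coding a hypothetical emotion split (first four vs last four conditions).
emotion = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_rdm = (emotion[:, None] != emotion[None, :]).astype(float)

# RSA: correlate the upper triangles of the neural and model RDMs.
iu = np.triu_indices(n_conditions, k=1)
rho, p = spearmanr(neural_rdm[iu], model_rdm[iu])
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```

Repeating this at every time point yields the time course of each dimension's representation, which is how the ordering of emotion, identity, arousal, and valence effects is established.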
Collapse
Affiliation(s)
- Yiwen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China.
| |
Collapse
|
49
|
Taubert J, Wardle SG, Tardiff CT, Koele EA, Kumar S, Messinger A, Ungerleider LG. The cortical and subcortical correlates of face pareidolia in the macaque brain. Soc Cogn Affect Neurosci 2022; 17:965-976. [PMID: 35445247 PMCID: PMC9629476 DOI: 10.1093/scan/nsac031] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 03/27/2022] [Accepted: 04/19/2022] [Indexed: 01/12/2023] Open
Abstract
Face detection is a foundational social skill for primates. This vital function is thought to be supported by specialized neural mechanisms; however, although several face-selective regions have been identified in both humans and nonhuman primates, there is no consensus about which region(s) are involved in face detection. Here, we used naturally occurring errors of face detection (i.e. objects with illusory facial features referred to as examples of 'face pareidolia') to identify regions of the macaque brain implicated in face detection. Using whole-brain functional magnetic resonance imaging to test awake rhesus macaques, we discovered that a subset of face-selective patches in the inferior temporal cortex, on the lower lateral edge of the superior temporal sulcus, and the amygdala respond more to objects with illusory facial features than matched non-face objects. Multivariate analyses of the data revealed differences in the representation of illusory faces across the functionally defined regions of interest. These differences suggest that the cortical and subcortical face-selective regions contribute uniquely to the detection of facial features. We conclude that face detection is supported by a multiplexed system in the primate brain.
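At the ROI level, the pareidolia effect described above amounts to comparing responses to illusory-face objects against matched non-face objects within each face-selective region. A minimal sketch of that comparison, using simulated percent-signal-change values and a paired t-test, is shown below; it is not the authors' analysis and the numbers are invented.

```python
# Illustrative ROI-level contrast (simulated values, not the authors' data):
# pareidolia-evoking objects vs matched non-face objects in one face patch.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_sessions = 12
illusory = 1.0 + 0.3 * rng.standard_normal(n_sessions)   # % signal change, illusory faces
matched = 0.8 + 0.3 * rng.standard_normal(n_sessions)    # % signal change, matched objects

t, p = ttest_rel(illusory, matched)
print(f"pareidolia > matched objects: t({n_sessions - 1}) = {t:.2f}, p = {p:.3f}")
```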
Collapse
Affiliation(s)
- Jessica Taubert
- Correspondence should be addressed to Jessica Taubert, School of Psychology, The University of Queensland, Building 24A, St Lucia, QLD 4067, Australia.
| | - Susan G Wardle
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
| | - Clarissa T Tardiff
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
| | - Elissa A Koele
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
| | - Susheel Kumar
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
| | - Adam Messinger
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
| | | |
Collapse
|
50
|
Bardon A, Xiao W, Ponce CR, Livingstone MS, Kreiman G. Face neurons encode nonsemantic features. Proc Natl Acad Sci U S A 2022; 119:e2118705119. [PMID: 35377737 PMCID: PMC9169805 DOI: 10.1073/pnas.2118705119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 02/17/2022] [Indexed: 11/18/2022] Open
Abstract
The primate inferior temporal cortex contains neurons that respond more strongly to faces than to other objects. Termed “face neurons,” these neurons are thought to be selective for faces as a semantic category. However, face neurons also partly respond to clocks, fruits, and single eyes, raising the question of whether face neurons are better described as selective for visual features related to faces but dissociable from them. We used a recently described algorithm, XDream, to evolve stimuli that strongly activated face neurons. XDream leverages a generative neural network that is not limited to realistic objects. Human participants assessed images evolved for face neurons, images evolved for nonface neurons, and natural images depicting faces, cars, fruits, and other objects. Evolved images were consistently judged to be distinct from real faces. Images evolved for face neurons were rated as slightly more similar to faces than images evolved for nonface neurons. Among natural images, face neuron activity correlated with subjective “faceness” ratings, but this relationship did not hold for face neuron–evolved images, which triggered high activity but were rated low in faceness. Our results suggest that so-called face neurons are better described as tuned to visual features rather than semantic categories.
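The evolve-to-activate approach described above can be understood as closed-loop optimization of a generator's latent code against a neuron's measured firing rate. The sketch below is a generic genetic-style loop under that framing, not the actual XDream interface; `generate_image` and `record_neuron_response` are placeholders for a pretrained generative network and the recording setup.

```python
# Generic sketch of closed-loop stimulus evolution (not the real XDream API):
# keep the latent codes whose generated images drive the neuron most strongly,
# mutate them, and repeat.
import numpy as np

def generate_image(latent: np.ndarray) -> np.ndarray:
    """Placeholder generator: maps a 256-d latent code to a 16x16 image."""
    return np.tanh(latent.reshape(16, 16))           # stand-in for a deep generator

def record_neuron_response(image: np.ndarray) -> float:
    """Placeholder for the recorded neuron's firing rate to an image."""
    return float(image.mean())                        # stand-in objective

rng = np.random.default_rng(4)
population = rng.standard_normal((20, 256))           # 20 candidate latent codes

for generation in range(50):
    fitness = np.array([record_neuron_response(generate_image(z)) for z in population])
    parents = population[np.argsort(fitness)[-5:]]    # keep the top 5 codes
    children = parents[rng.integers(0, 5, size=15)] + 0.1 * rng.standard_normal((15, 256))
    population = np.vstack([parents, children])       # next generation

best_latent = population[np.argmax(
    [record_neuron_response(generate_image(z)) for z in population])]
```

In practice the objective is a noisy spike count and the generator is a deep image-synthesis network, but the selection-and-mutation structure is the same; the finding above is that the images such a loop converges on need not look like faces to humans.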
Collapse
Affiliation(s)
- Alexandra Bardon
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125
| | - Will Xiao
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02134
| | - Carlos R. Ponce
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
| | | | - Gabriel Kreiman
- Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115
- Center for Brains, Minds and Machines, Cambridge, MA 02115
| |
Collapse
|