1. Ventura P, Pascual M, Cruz F, Araújo S. From Perugino to Picasso revisited: Electrophysiological responses to faces in paintings from different art styles. Neuropsychologia 2024;193:108742. PMID: 38056623. DOI: 10.1016/j.neuropsychologia.2023.108742. Received 10/01/2023; revised 11/27/2023; accepted 11/30/2023.
Abstract
Behavioral research (Ventura et al., 2023) suggested that pictorial representations of faces varying along a realism-distortion spectrum elicit holistic processing, much as natural faces do. Whether the neural responses underlying holistic face processing are engaged similarly, however, remains underexplored. In the present study, we evaluated the neural correlates of naturalistic and artistic face processing by exploring electrophysiological responses to faces in photographs versus faces in four major painting styles. The N170 response to faces in photographs was indistinguishable from that elicited by faces in the Renaissance style (the most realistic faces), and both categories elicited a larger N170 than faces in the other art styles (post-impressionism, expressionism, and cubism), with a gradation in brain activity. The present evidence suggests that visual processing may become finer grained the more realistic the face. Despite behavioral equivalence, the neural mechanisms for holistic processing of natural faces and of faces in diverse art styles are not equivalent.
Affiliation(s)
- Paulo Ventura
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Mariona Pascual
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Francisco Cruz
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Susana Araújo
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
2. Linnunsalo S, Küster D, Yrttiaho S, Peltola MJ, Hietanen JK. Psychophysiological responses to eye contact with a humanoid robot: Impact of perceived intentionality. Neuropsychologia 2023;189:108668. PMID: 37619935. DOI: 10.1016/j.neuropsychologia.2023.108668. Received 01/09/2023; revised 06/20/2023; accepted 08/21/2023.
Abstract
Eye contact with a social robot has been shown to elicit psychophysiological responses similar to those elicited by eye contact with another human. However, it is becoming increasingly clear that attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when the observer views an embodied face capable of social interaction, a capability that pictorial or pre-recorded stimuli lack. It has been suggested that genuine eye contact, as indexed by differential psychophysiological responses to direct and averted gaze, requires the feeling of being watched by another mind. We therefore measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses while participants viewed a humanoid robot's direct versus averted gaze, and we manipulated the impression of the robot's intentionality. The N170 and facial zygomatic responses were greater to the robot's direct than averted gaze, independent of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses, and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.
Affiliation(s)
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Dennis Küster
- Cognitive Systems Lab, Department of Computer Science, University of Bremen, Bremen, Germany
- Santeri Yrttiaho
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
3. Palmisano A, Chiarantoni G, Bossi F, Conti A, D'Elia V, Tagliente S, Nitsche MA, Rivolta D. Face pareidolia is enhanced by 40 Hz transcranial alternating current stimulation (tACS) of the face perception network. Sci Rep 2023;13:2035. PMID: 36739325. PMCID: PMC9899232. DOI: 10.1038/s41598-023-29124-8. Received 09/09/2022; accepted 01/31/2023. Open access.
Abstract
Pareidolia refers to the perception of ambiguous sensory patterns as carrying a specific meaning. In its most common form, pareidolia involves human-like facial features, with random objects or patterns illusorily recognized as faces. The current study investigated the neurophysiological correlates of face pareidolia via transcranial alternating current stimulation (tACS). tACS was delivered at gamma (40 Hz) frequency over critical nodes of the "face perception" network (i.e., right lateral occipito-temporal and left prefrontal cortex) of 75 healthy participants while they completed four face perception tasks ('Mooney test' for faces, 'Toast test', 'Noise pareidolia test', 'Pareidolia task') and an object perception task ('Mooney test' for objects). In this single-blind, sham-controlled, between-subjects study, participants received 35 min of either Sham, Online (40Hz-tACS_ON), or Offline (40Hz-tACS_PRE) stimulation. Face pareidolia was causally enhanced by 40Hz-tACS_PRE in the Mooney test for faces, in which, compared to sham, participants more often misperceived scrambled stimuli as faces. In addition, compared to sham, participants receiving 40Hz-tACS_PRE showed similar reaction times (RTs) when perceiving illusory faces and when correctly recognizing noise stimuli in the Toast test, thus exhibiting no hesitancy in identifying faces where there were none. 40Hz-tACS_ON also induced slower rejections of face pareidolia responses in the Noise pareidolia test. The current study indicates that 40 Hz tACS can enhance pareidolic illusions in healthy individuals and, thus, that high-frequency (i.e., gamma-band) oscillations are critical in forming coherent and meaningful visual percepts.
Affiliation(s)
- Annalisa Palmisano
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Giulio Chiarantoni
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Alessio Conti
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Vitiana D'Elia
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Serena Tagliente
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Michael A Nitsche
- Department of Psychology and Neurosciences, Leibniz Research Center for Working Environment and Human Factors (IfADo), Dortmund, Germany; Department of Neurology, University Medical Hospital Bergmannsheil, Bochum, Germany
- Davide Rivolta
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy; School of Psychology, University of East London (UEL), London, UK
4. Yu Z, Kritikos A, Pegna AJ. Up close and emotional: Electrophysiological dynamics of approaching angry faces. Biol Psychol 2023;176:108479. PMID: 36566011. DOI: 10.1016/j.biopsycho.2022.108479. Received 05/25/2022; revised 12/13/2022; accepted 12/19/2022.
Abstract
Recent evidence suggests that looming emotional faces are processed rapidly by the neural system, and that this apparent approach interacts with emotion, causing an enhanced neural response to angry expressions. However, previous research has not demonstrated unequivocally whether these effects are due to low-level visual features or to the emotional content of the stimuli. To address this question, the current study presented upright and inverted angry and neutral faces that either expanded or contracted in size against a constant depth-cued background, such that they appeared to approach or retreat from the viewer. EEG/ERP measures were used to identify the time course of brain activity for these stimuli. When faces were upright, both the P1 and N170 were enhanced for angry expressions, with the P1 further increased for looming angry faces. Face inversion increased both P1 and N170 amplitudes, but no modulation by emotion was found. These findings show an early modulation of brain activity for upright looming angry faces and rule out low-level visual features as a contributing factor.
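Components like the P1 and N170 discussed in these abstracts are commonly quantified as mean voltage within a fixed post-stimulus window, averaged per trial or condition. A minimal sketch on synthetic single-channel epochs; the window bounds and the injected deflection are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def mean_amplitude(epochs, times, tmin, tmax):
    """Mean voltage per trial within the window [tmin, tmax] (seconds)."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, mask].mean(axis=1)

# Synthetic epochs: 20 trials x 500 samples at 1000 Hz, -100..399 ms
rng = np.random.default_rng(0)
times = np.arange(500) / 1000.0 - 0.1
epochs = rng.normal(0.0, 1.0, (20, times.size))
# Inject a toy negative deflection around 170 ms post-stimulus
epochs[:, (times >= 0.15) & (times <= 0.19)] -= 5.0

n170 = mean_amplitude(epochs, times, 0.15, 0.19)  # component window
base = mean_amplitude(epochs, times, -0.1, 0.0)   # pre-stimulus baseline
print(n170.mean() < base.mean())  # deflection is more negative than baseline
```

Condition effects (e.g., angry vs. neutral, upright vs. inverted) would then be tested on these per-trial window means.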
Affiliation(s)
- Zhou Yu
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Ada Kritikos
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Alan J Pegna
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
5. Schiano Lomoriello A, Sessa P, Doro M, Konvalinka I. Shared Attention Amplifies the Neural Processing of Emotional Faces. J Cogn Neurosci 2022;34:917-932. PMID: 35258571. DOI: 10.1162/jocn_a_01841.
Abstract
Sharing an experience, even without communicating, affects people's subjective perception of the experience, often by intensifying it. We investigated the neural mechanisms underlying shared attention in an EEG study where participants attended to and rated the intensity of emotional faces, simultaneously or independently. Participants performed the task in three conditions: (a) alone; (b) simultaneously, seated next to each other in pairs, without receiving feedback on each other's responses (shared without feedback); and (c) simultaneously while receiving that feedback (shared with feedback). We focused on two face-sensitive ERP components, the N170 and the early posterior negativity (EPN). The amplitude of the N170 was greater in the shared-with-feedback condition than in the alone condition, reflecting a top-down effect of shared attention on the structural encoding of faces, whereas the EPN was greater in both shared conditions than in the alone condition, reflecting enhanced attention allocation to the emotional content of faces, modulated by the social context. Taken together, these results suggest that shared attention amplifies the neural processing of faces, regardless of the valence of the facial expressions.
6. Yu Z, Kritikos A, Pegna AJ. Enhanced early ERP responses to looming angry faces. Biol Psychol 2022;170:108308. PMID: 35271956. DOI: 10.1016/j.biopsycho.2022.108308. Received 10/11/2021; revised 03/01/2022; accepted 03/04/2022.
Abstract
Although the brain is known to process threatening emotional stimuli and looming motion rapidly, little is known about how emotion and motion interact. To address this question, two experiments presented angry and neutral faces on a depth-cued background that induced the perception of distance, or on a non-cued background. The faces either expanded or contracted in size such that they appeared to approach or recede from the viewer. EEG/ERP measures were used to identify the time course of brain activity for these looming and receding, angry and neutral faces. Both experiments revealed that the P1 was enhanced for looming angry faces on the depth-cued background, compared to neutral approaching faces as well as all receding faces, indicating an early interaction of emotion and motion within 100 ms of presentation. Angry expressions also enhanced the N170 regardless of movement. These findings suggest that the processing of threat and looming motion interact at the very earliest stages of visual processing. Furthermore, because the modulating effect of looming motion on angry expressions arose only on the depth-cued background, the findings highlight the importance of apparent approach rather than mere increases in the retinal size of the stimuli.
Affiliation(s)
- Zhou Yu
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Ada Kritikos
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Alan J Pegna
- School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
7. Rekow D, Baudouin JY, Brochard R, Rossion B, Leleu A. Rapid neural categorization of facelike objects predicts the perceptual awareness of a face (face pareidolia). Cognition 2022;222:105016. PMID: 35030358. DOI: 10.1016/j.cognition.2022.105016. Received 02/19/2021; revised 12/31/2021; accepted 01/05/2022.
Abstract
The human brain rapidly and automatically categorizes faces vs. other visual objects. However, whether face-selective neural activity predicts the subjective experience of a face - perceptual awareness - is debated. To clarify this issue, here we use face pareidolia, i.e., the illusory perception of a face, as a proxy to relate the neural categorization of a variety of facelike objects to conscious face perception. In Experiment 1, scalp electroencephalogram (EEG) is recorded while pictures of human faces or facelike objects - in different stimulation sequences - are interleaved every second (i.e., at 1 Hz) in a rapid 6-Hz train of natural images of nonface objects. Participants do not perform any explicit face categorization task during stimulation, and report post-stimulation whether they perceived illusory faces. A robust categorization response to facelike objects is identified at 1 Hz and its harmonics in the EEG frequency spectrum, with a facelike occipito-temporal topography. Across all individuals, the facelike categorization response is about 20% of the response to human faces, but more strongly right-lateralized. Critically, its amplitude is much larger in participants who report having perceived illusory faces. In Experiment 2, facelike or matched nonface objects from the same categories appear at 1 Hz in sequences of nonface objects presented at variable stimulation rates (60 Hz to 12 Hz) and participants explicitly report after each sequence whether they perceived illusory faces. The facelike categorization response already emerges at the shortest stimulus duration (i.e., 17 ms at 60 Hz) and predicts the behavioral report of conscious perception. Strikingly, neural facelike selectivity emerges exclusively when participants report illusory faces.
Collectively, these experiments characterize a neural signature of face pareidolia in the context of rapid categorization, supporting the view that face-selective brain activity reliably predicts the subjective experience of a face from a single glance at a variety of stimuli.
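Frequency-tagging responses of the kind described above (a 1 Hz category response embedded in a 6 Hz stream) are typically quantified in the amplitude spectrum as signal-to-noise relative to neighboring frequency bins. A minimal sketch on synthetic data; the sequence length, the 20-neighbor noise window, and the skipped adjacent bin are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

fs, dur = 256, 60                    # sampling rate (Hz), sequence length (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic EEG: white noise plus a 1 Hz "categorization" response
eeg = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 1.0 * t)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr(spectrum, freqs, target, n_neighbors=20, skip=1):
    """Amplitude at `target` divided by the mean of surrounding noise bins."""
    i = np.argmin(np.abs(freqs - target))
    lo = spectrum[i - skip - n_neighbors : i - skip]
    hi = spectrum[i + skip + 1 : i + skip + 1 + n_neighbors]
    return spectrum[i] / np.concatenate([lo, hi]).mean()

print(snr(spectrum, freqs, 1.0) > snr(spectrum, freqs, 2.5))
```

With a 60 s sequence the frequency resolution is 1/60 Hz, so the tagged 1 Hz bin stands far above the noise floor while an untagged bin (here 2.5 Hz) stays near an SNR of 1.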
Affiliation(s)
- Diane Rekow
- Laboratoire Éthologie Développementale et Psychologie Cognitive, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, CNRS, Inrae, AgroSup Dijon, F-21000 Dijon, France
- Jean-Yves Baudouin
- Laboratoire Développement, Individu, Processus, Handicap, Éducation (DIPHE), Département Psychologie du Développement, de l'Éducation et des Vulnérabilités (PsyDÉV), Institut de psychologie, Université de Lyon (Lumière Lyon 2), 69676 Bron cedex, France
- Renaud Brochard
- Laboratoire Éthologie Développementale et Psychologie Cognitive, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, CNRS, Inrae, AgroSup Dijon, F-21000 Dijon, France
- Bruno Rossion
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
- Arnaud Leleu
- Laboratoire Éthologie Développementale et Psychologie Cognitive, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, CNRS, Inrae, AgroSup Dijon, F-21000 Dijon, France
8. Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021;132:304-323. PMID: 34861296. DOI: 10.1016/j.neubiorev.2021.11.042. Received 07/07/2021; revised 11/24/2021; accepted 11/24/2021.
Abstract
This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence suggests that the perception and processing of facial information starts in the prenatal period; (2) the perception and processing of face identity, expression, and gaze direction is highly context specific, the effects of race and culture being a case in point. Through experiential shaping and social categorization, culture affects the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression, and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction, and medial prefrontal cortex.
9. Burra N, Kerzel D. Meeting another's gaze shortens subjective time by capturing attention. Cognition 2021;212:104734. PMID: 33887652. DOI: 10.1016/j.cognition.2021.104734. Received 08/11/2020; revised 04/09/2021; accepted 04/10/2021.
Abstract
Gaze directed at the observer (direct gaze) is an important and highly salient social signal with multiple effects on cognitive processes and behavior. It is disputed whether the effect of direct gaze is caused by attentional capture or increased arousal. Time estimation may provide an answer because attentional capture predicts an underestimation of time whereas arousal predicts an overestimation. In a temporal bisection task, observers were required to classify the duration of a stimulus as short or long. Stimulus duration was selected randomly between 988 and 1479 ms. When gaze was directed at the observer, participants underestimated stimulus duration, suggesting that effects of direct gaze are caused by attentional capture, not increased arousal. Critically, this effect was limited to dynamic stimuli where gaze appeared to move toward the participant. The underestimation was present with stimuli showing a full face, but also with stimuli showing only the eye region, inverted faces and high-contrast eye-like stimuli. However, it was absent with static pictures of full faces and dynamic nonfigurative stimuli. Because the effect of direct gaze depended on motion, which is common in naturalistic scenes, more consideration needs to be given to the ecological validity of stimuli in the study of social attention.
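In a temporal bisection task, the key statistic is the bisection point: the duration at which "long" responses reach 50%, with a rightward shift indicating underestimation of time. A minimal sketch estimating it by linear interpolation over response proportions; the durations and proportions are synthetic illustrations spanning the study's 988-1479 ms range, not the study's data:

```python
import numpy as np

def bisection_point(durations, p_long):
    """Duration (ms) at which p('long') crosses 0.5, via linear interpolation.

    Assumes p_long is monotonically increasing over durations.
    """
    return float(np.interp(0.5, p_long, durations))

# Synthetic proportions of "long" responses at 8 test durations
durations = np.linspace(988, 1479, 8)
p_long = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95])

bp = bisection_point(durations, p_long)
print(bp)  # estimated bisection point in ms
```

Comparing bisection points between direct-gaze and averted-gaze conditions would then reveal whether direct gaze shifts the function toward underestimation.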
Affiliation(s)
- Nicolas Burra
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
10.
Abstract
OBJECTIVE: The purpose of this study was to investigate the effects of semantic congruence and incongruence on sign identification using event-related potentials (ERPs).
BACKGROUND: Sign systems play crucial roles in public spaces and traffic facilities. Poorly designed signs can confuse pedestrians and drivers and reduce the efficiency of public activities and urban administration.
METHOD: Thirty-one participants completed a sign identification experiment independently in a laboratory setting. Experimental materials were selected from GB/T 10001, a Chinese national recommended standard officially named Public Information Graphical Symbols for Use on Signs. All ERP data were processed using MATLAB 13b, and behavioral data were analyzed using Stata 14.
RESULTS: N170, P200, N300, and N400 components were induced during semantic processing. Statistical analysis revealed that semantic congruence had a main effect on N300 in the frontal region and on N400 at FZ in the frontal region, CPZ in the parietal-central region, and PZ in the parietal region. Amplitudes of N300 induced by picture-word matching differed considerably between the two experimental conditions at electrodes FZ and FCZ. Amplitudes of N400 were significantly larger in the incongruent condition than in the congruent condition.
CONCLUSION: The study demonstrated that N300 and N400 are promising indicators for measuring semantic congruence in future sign design.
APPLICATION: Our findings provide ERP indicators for measuring the semantic congruence of sign design, which can be readily applied to improve the efficiency of sign design and sign comprehension.
11. Roles of Category, Shape, and Spatial Frequency in Shaping Animal and Tool Selectivity in the Occipitotemporal Cortex. J Neurosci 2020;40:5644-5657. PMID: 32527983. PMCID: PMC7363473. DOI: 10.1523/jneurosci.3064-19.2020. Received 12/28/2019; revised 05/29/2020; accepted 06/02/2020. Open access.
Abstract
Does the nature of representation in the category-selective regions in the occipitotemporal cortex reflect visual or conceptual properties? Previous research showed that natural variability in visual features across categories, quantified by image gist statistics, is highly correlated with the different neural responses observed in the occipitotemporal cortex. Using fMRI, we examined whether category selectivity for animals and tools would remain, when image gist statistics were comparable across categories. Critically, we investigated how category, shape, and spatial frequency may contribute to the category selectivity in the animal- and tool-selective regions. Female and male human observers viewed low- or high-passed images of round or elongated animals and tools that shared comparable gist statistics in the main experiment, and animal and tool images of naturally varied gist statistics in a separate localizer. Univariate analysis revealed robust category-selective responses for images with comparable gist statistics across categories. Successful classification for category (animals/tools), shape (round/elongated), and spatial frequency (low/high) was also observed, with highest classification accuracy for category. Representational similarity analyses further revealed that the activation patterns in the animal-selective regions were most correlated with a model that represents only animal information, whereas the activation patterns in the tool-selective regions were most correlated with a model that represents only tool information, suggesting that these regions selectively represent information of only animals or tools. Together, in addition to visual features, the distinction between animal and tool representations in the occipitotemporal cortex is likely shaped by higher-level conceptual influences such as categorization or interpretation of visual inputs. 
SIGNIFICANCE STATEMENT Since different categories often vary systematically in both visual and conceptual features, it remains unclear what kinds of information determine category-selective responses in the occipitotemporal cortex. To minimize the influences of low- and mid-level visual features, here we used a diverse image set of animals and tools that shared comparable gist statistics. We manipulated category (animals/tools), shape (round/elongated), and spatial frequency (low/high), and found that the representational content of the animal- and tool-selective regions is primarily determined by their preferred categories only, regardless of shape or spatial frequency. Our results show that category-selective responses in the occipitotemporal cortex are influenced by higher-level processing such as categorization or interpretation of visual inputs, and highlight the specificity in these category-selective regions.
12. Rösler L, Rubo M, Gamer M. Artificial Faces Predict Gaze Allocation in Complex Dynamic Scenes. Front Psychol 2020;10:2877. PMID: 31920893. PMCID: PMC6930810. DOI: 10.3389/fpsyg.2019.02877. Received 07/10/2019; accepted 12/04/2019. Open access.
Abstract
Both low-level physical saliency and social information, such as human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously used a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a generalized linear mixed model (GLMM) approach to investigate to what extent schematic artificial faces predict gaze when presented alone or in competition with real human faces. The GLMMs suggested substantial effects of both real and artificial faces in all conditions, with relative differences in predictive power: artificial faces were less predictive than real human faces but still contributed significantly to gaze allocation. These results further our understanding of how social information guides gaze in complex naturalistic scenes.
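The study above fit GLMMs with random effects; as a self-contained stand-in for that analysis, the following sketch fits a plain fixed-effects logistic regression (gradient descent, no mixed-effects structure) to synthetic fixation data. All predictors, true weights, and the learning rate are illustrative assumptions; the point is only the qualitative comparison of face-type coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Per-location predictors: low-level saliency, real-face presence, artificial-face presence
X = np.column_stack([
    rng.normal(size=n),        # saliency (z-scored)
    rng.integers(0, 2, n),     # real human face at this location
    rng.integers(0, 2, n),     # schematic artificial face at this location
])
true_w = np.array([0.5, 2.0, 1.0])   # assumed: real faces weigh more than artificial
logits = X @ true_w - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))   # fixated (1) or not (0)

# Logistic regression fit by plain gradient descent on the mean log-loss
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

print(w[1] > w[2] > 0)   # recovered: real faces predict gaze more strongly
```

A real analysis would add random intercepts/slopes per participant and scene, which is what distinguishes a GLMM from this fixed-effects sketch.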
Affiliation(s)
- Lara Rösler
- Department of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
- Marius Rubo
- Department of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
- Matthias Gamer
- Department of Psychology, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
13. Pereira EJ, Birmingham E, Ristic J. Contextually-Based Social Attention Diverges across Covert and Overt Measures. Vision (Basel) 2019;3:E29. PMID: 31735830. PMCID: PMC6802786. DOI: 10.3390/vision3020029. Received 01/30/2019; revised 05/27/2019; accepted 05/30/2019. Open access.
Abstract
Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face-house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.
Affiliation(s)
- Effie J. Pereira
- Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, QC H3A 1B1, Canada
- Elina Birmingham
- Faculty of Education, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
- Jelena Ristic
- Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, QC H3A 1B1, Canada
14. Capozzi F, Ristic J. How attention gates social interactions. Ann N Y Acad Sci 2018;1426:179-198. PMID: 29799619. DOI: 10.1111/nyas.13854. Received 01/11/2018; revised 03/30/2018; accepted 04/24/2018.
Abstract
Social interactions are at the core of social life. However, humans selectively choose their exchange partners and do not engage in all available opportunities for social encounters. In this review, we argue that attentional systems play an important role in guiding the selection of social interactions. Supported by both classic and emerging literature, we identify and characterize the three core processes (perception, interpretation, and evaluation) that interact with attentional systems to modulate selective responses to social environments. Perceptual processes facilitate attentional prioritization of social cues. Interpretative processes link attention with understanding of cues' social meanings and agents' mental states. Evaluative processes determine the perceived value of the source of social information. The interplay between attention and these three routes of processing places attention in a powerful role to manage the selection of the vast amount of social information that individuals encounter on a daily basis and, in turn, gate the selection of social interactions.
Affiliation(s)
- Francesca Capozzi: Department of Psychology, McGill University, Montreal, Quebec, Canada
- Jelena Ristic: Department of Psychology, McGill University, Montreal, Quebec, Canada
15. Kanunikov IE, Pavlova VI. Event-Related Potentials to Faces Presented in an Emotional Context. 2017. DOI: 10.1007/s11055-017-0498-8.
16. Lu L, Zhang C, Li L. Mental imagery of face enhances face-sensitive event-related potentials to ambiguous visual stimuli. Biol Psychol 2017; 129:16-24. PMID: 28743457. DOI: 10.1016/j.biopsycho.2017.07.013.
Abstract
Visual mental imagery forms mental representations of visual objects when the corresponding stimuli are absent, and shares some characteristics with visual perception. Both the vertex-positive-potential (VPP) and N170 components of event-related potentials (ERPs) to visual stimuli show a marked preference for faces. This study investigated whether visual mental imagery modulates the face-sensitive VPP and/or N170 components. The results showed that the VPP and P2 responses, but not the N170 component, were elicited by phase-randomized ambiguous stimuli, with significantly larger amplitudes under the face-imagery condition than under the house-imagery condition. Thus, the brain substrates underlying the VPP are not completely identical to those underlying the N170, and the VPP/P2 manifestation of category selectivity in imagery probably reflects an integration of top-down mental imagery signals (from the prefrontal cortex) and bottom-up perception signals (from the early visual cortex) in the occipito-temporal cortex, where the VPP and P2 originate.
Affiliation(s)
- Lingxi Lu: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100080, China; Beijing Institute for Brain Disorders, Beijing 100069, China
- Changxin Zhang: Faculty of Education, East China Normal University, Shanghai 200062, China
- Liang Li: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100080, China; Beijing Institute for Brain Disorders, Beijing 100069, China
17. Detecting gender before you know it: How implementation intentions control early gender categorization. Brain Res 2016; 1649:9-22. PMID: 27553629. DOI: 10.1016/j.brainres.2016.08.026.
Abstract
Gender categorization is highly automatic. Studies measuring ERPs during the presentation of male and female faces in a categorization task showed that this categorization is extremely quick (around 130 ms, as indexed by the N170). We tested whether this automatic process can be controlled by goal intentions and implementation intentions. First, we replicated the N170 modulation for gender-incongruent faces reported in previous research. This effect was only observed in a task in which faces had to be categorized according to gender, but not in a task that required responding to a visual feature added to the face stimuli (the color of a dot) while gender was irrelevant. Second, the N170 modulation for gender-incongruent faces was altered when a goal intention was set that aimed at controlling a gender bias. We interpret this finding as an indicator of nonconscious goal pursuit. The N170 modulation was completely absent when this goal intention was furnished with an implementation intention. In contrast, intentions did not alter brain activity in a later time window (P300), which is associated with more complex and rather conscious processes. In line with previous research, the P300 was modulated by gender incongruency even when individuals were strongly involved in another task, demonstrating the automaticity of gender detection. We interpret our findings as evidence that automatic gender categorization, which occurs at a very early processing stage, can be effectively controlled by intentions.
18. Burra N, Barras C, Coll SY, Kerzel D. Electrophysiological evidence for attentional capture by irrelevant angry facial expressions. Biol Psychol 2016; 120:69-80. PMID: 27568328. DOI: 10.1016/j.biopsycho.2016.08.008.
Abstract
Attention is believed to be biased toward threatening objects or faces. Therefore, we tested whether angry face stimuli would capture attention even when they are irrelevant to the task. Observers searched for a neutral face with a tilted nose. On some trials, the target was shown together with an irrelevant angry or happy face and we measured the N2pc (an electrophysiological marker of attentional selectivity) to the distractor expression. We found that angry distractors triggered an N2pc, whereas happy distractors did not. Follow-up experiments explored the reliability of the N2pc to angry distractors using upright or inverted angry faces, the eye or mouth region of angry faces and face-like stimuli. We conclude that a threatening expression has a high attentional priority due to its emotional content and captures attention despite being irrelevant for the task.
Affiliation(s)
- Nicolas Burra: Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Caroline Barras: Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Sélim Yahia Coll: Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Dirk Kerzel: Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
19. Babiloni C, Marzano N, Soricelli A, Cordone S, Millán-Calenti JC, Del Percio C, Buján A. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials. Front Hum Neurosci 2016; 10:310. PMID: 27445750. PMCID: PMC4927634. DOI: 10.3389/fnhum.2016.00310.
Abstract
This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latencies and sources were compared between "seen" trials and "not seen" trials, respectively related and unrelated to primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both "seen" and "not seen" trials. There was no statistical difference in ERP peak latencies between "seen" and "not seen" trials, suggesting a similar timing of cortical neural synchronization regardless of primary visual consciousness. In contrast, ERP sources differed between "seen" and "not seen" trials. For the visuospatial stimuli, primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal, and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with enhanced cortical neural synchronization with entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene.
Affiliation(s)
- Claudio Babiloni: Department of Physiology and Pharmacology "Vittorio Erspamer", Sapienza University of Rome, Rome, Italy; Department of Neuroscience, IRCCS San Raffaele Pisana, Rome, Italy
- Nicola Marzano: Department of Integrated Imaging, IRCCS SDN, Naples, Italy
- Andrea Soricelli: Department of Integrated Imaging, IRCCS SDN, Naples, Italy; Department of Motor Sciences and Healthiness, University of Naples Parthenope, Naples, Italy
- Susanna Cordone: Department of Physiology and Pharmacology "Vittorio Erspamer", Sapienza University of Rome, Rome, Italy
- José Carlos Millán-Calenti: Gerontology Research Group, Department of Medicine, Faculty of Health Sciences, University of A Coruña, A Coruña, Spain
- Ana Buján: Gerontology Research Group, Department of Medicine, Faculty of Health Sciences, University of A Coruña, A Coruña, Spain
20. The development of category specificity in infancy – What can we learn from electrophysiology? Neuropsychologia 2016; 83:114-122. DOI: 10.1016/j.neuropsychologia.2015.08.021.
21. Liu-Shuang J, Torfs K, Rossion B. An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation. Neuropsychologia 2016; 83:100-113. DOI: 10.1016/j.neuropsychologia.2015.08.023.
22. Deouell LY, Grill-Spector K, Malach R, Murray MM, Rossion B. Introduction to the special issue on functional selectivity in perceptual and cognitive systems - a tribute to Shlomo Bentin (1946-2012). Neuropsychologia 2016; 83:1-4. PMID: 26826521. DOI: 10.1016/j.neuropsychologia.2016.01.032.
Affiliation(s)
- Leon Y Deouell: The Hebrew University of Jerusalem, Jerusalem, 91905, Israel
- Rafael Malach: Neurobiology Department, Weizmann Institute of Science, Herzl 100, Rehovot, 76100, Israel
- Micah M Murray: The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Lausanne, Switzerland
- Bruno Rossion: Institute of Research in Psychology and Institute of Neuroscience, University of Louvain, 10 Place Cardinal Mercier, Louvain-la-Neuve 1348, Belgium
23. Pesciarelli F, Leo I, Sarlo M. Implicit Processing of the Eyes and Mouth: Evidence from Human Electrophysiology. PLoS One 2016; 11:e0147415. PMID: 26790153. PMCID: PMC4720279. DOI: 10.1371/journal.pone.0147415.
Abstract
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target, or open eyes only in prime and open mouth only in target). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in both upright and inverted faces. Moreover, they revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.
Affiliation(s)
- Francesca Pesciarelli: Department of Biomedical, Metabolic and Neurological Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Irene Leo: Department of Developmental Psychology, University of Padova, Padova, Italy
- Michela Sarlo: Department of General Psychology, University of Padova, Padova, Italy; Center for Cognitive Neuroscience, University of Padova, Padova, Italy
24. Quian Quiroga R. Neuronal codes for visual perception and memory. Neuropsychologia 2015; 83:227-241. PMID: 26707718. DOI: 10.1016/j.neuropsychologia.2015.12.016.
Abstract
In this review, I describe and contrast the representation of stimuli in visual cortical areas and in the medial temporal lobe (MTL). While cortex is characterized by a distributed and implicit coding that is optimal for recognition and storage of semantic information, the MTL shows a much sparser and explicit coding of specific concepts that is ideal for episodic memory. I will describe the main characteristics of the coding in the MTL by the so-called concept cells and will then propose a model of the formation and recall of episodic memory based on partially overlapping assemblies.
Affiliation(s)
- Rodrigo Quian Quiroga: Centre for Systems Neuroscience, University of Leicester, 9 Salisbury Rd, LE1 7QR Leicester, UK
25.
Abstract
Human faces are fundamentally dynamic, but experimental investigations of face perception have traditionally relied on static images of faces. Although naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this article, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, location, and the size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether, 222 stimuli were created, spanning three different categories of movement: (1) an affective movement (fearful face), (2) a neutral movement (close-lipped, puffed cheeks with open eyes), and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between the expressions, we measured the occipital P100 event-related potential, which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between the faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, controlled for low-level image characteristics, that are applicable to a range of research questions in social perception.
26. Caharel S, Collet K, Rossion B. The early visual encoding of a face (N170) is viewpoint-dependent: A parametric ERP-adaptation study. Biol Psychol 2015; 106:18-27. DOI: 10.1016/j.biopsycho.2015.01.010.
27. Rossion B. Understanding face perception by means of human electrophysiology. Trends Cogn Sci 2014; 18:310-8. DOI: 10.1016/j.tics.2014.02.013.
28. Churches O, Nicholls M, Thiessen M, Kohler M, Keage H. Emoticons in mind: An event-related potential study. Soc Neurosci 2014; 9:196-202. DOI: 10.1080/17470919.2013.873737.
29. Caharel S, Leleu A, Bernard C, Viggiano MP, Lalonde R, Rebaï M. Early holistic face-like processing of Arcimboldo paintings in the right occipito-temporal cortex: evidence from the N170 ERP component. Int J Psychophysiol 2013; 90:157-64. PMID: 23816562. DOI: 10.1016/j.ijpsycho.2013.06.024.
Abstract
The properties of the face-sensitive N170 component of the event-related brain potential (ERP) were explored through an orientation discrimination task using natural faces, objects, and Arcimboldo paintings presented upright or inverted. Because Arcimboldo paintings are composed of non-face objects but have a global face configuration, they provide great control to disentangle high-level face-like or object-like visual processes at the level of the N170, and may help to examine the implication of each hemisphere in the global/holistic processing of face formats. For upright position, N170 amplitudes in the right occipito-temporal region did not differ between natural faces and Arcimboldo paintings but were larger for both of these categories than for objects, supporting the view that as early as the N170 time-window, the right hemisphere is involved in holistic perceptual processing of face-like configurations irrespective of their features. Conversely, in the left hemisphere, N170 amplitudes differed between Arcimboldo portraits and natural faces, suggesting that this hemisphere processes local facial features. For upside-down orientation in both hemispheres, N170 amplitudes did not differ between Arcimboldo paintings and objects, but were reduced for both categories compared to natural faces, indicating that the disruption of holistic processing with inversion leads to an object-like processing of Arcimboldo paintings due to the lack of local facial features. Overall, these results provide evidence that global/holistic perceptual processing of faces and face-like formats involves the right hemisphere as early as the N170 time-window, and that the local processing of face features is rather implemented in the left hemisphere.
Affiliation(s)
- Stéphanie Caharel: Laboratoire de Psychologie de l'interaction et des relations intersubjectives (InterPsy-EA4432), Université de Lorraine, France
30. Paras CL, Webster MA. Stimulus requirements for face perception: an analysis based on "totem poles". Front Psychol 2013; 4:18. PMID: 23407599. PMCID: PMC3569666. DOI: 10.3389/fpsyg.2013.00018.
Abstract
The stimulus requirements for perceiving a face are not well defined but are presumably simple, for vivid faces can often be seen in random or natural images such as cloud or rock formations. To characterize these requirements, we measured where observers reported the impression of faces in images defined by symmetric 1/f noise. This allowed us to examine the prominence and properties of different features and their necessary configurations. In these stimuli many faces can be perceived along the vertical midline, and they appear stacked at multiple scales, reminiscent of "totem poles." In addition to symmetry, the faces in noise are invariably upright and thus reveal the inversion effects that are thought to be a defining property of configural face processing. To a large extent, seeing a face required seeing eyes, and these were largely restricted to dark regions in the images. Other features were more subordinate and showed relatively little bias in polarity. Moreover, the prominence of eyes depended primarily on their luminance contrast and showed little influence of chromatic contrast. Notably, most faces were rated as clearly defined with highly distinctive attributes, suggesting that once an image area is coded as a face it is perceptually completed in a manner consistent with this interpretation. This suggests that the requisite trigger features are sufficient to holistically "capture" the surrounding noise structure to form the facial representation. Yet despite these well-articulated percepts, we show in further experiments that while a pair of dark spots added to noise images appears face-like, these impressions fail to elicit other signatures of face processing and, in particular, fail to elicit an N170 or fixation patterns typical for images of actual faces. These results suggest that very simple stimulus configurations are sufficient to invoke many aspects of holistic and configural face perception while nevertheless failing to fully engage the neural machinery of face coding, implying that different signatures of face processing may have different stimulus requirements.
Affiliation(s)
- Carrie L Paras: Department of Psychology, University of Nevada Reno, NV, USA
31. The N170, not the P1, indexes the earliest time for categorical perception of faces, regardless of interstimulus variance. Neuroimage 2012; 62:1563-74. DOI: 10.1016/j.neuroimage.2012.05.043.
32.
Abstract
In making sense of the visual world, the brain's processing is driven by two factors: the physical information provided by the eyes ("bottom-up" data) and the expectancies driven by past experience ("top-down" influences). We use degraded stimuli to tease apart the effects of bottom-up and top-down processes because they are easier to recognize with prior knowledge of undegraded images. Using machine learning algorithms, we quantify the amount of information that brain regions contain about stimuli as the subject learns the coherent images. Our results show that several distinct regions, including high-level visual areas and the retinotopic cortex, contain more information about degraded stimuli with prior knowledge. Critically, these regions are separate from those that exhibit classical priming, indicating that top-down influences are more than feature-based attention. Together, our results show how the neural processing of complex imagery is rapidly influenced by fleeting experiences.
33.
Abstract
In the present paper, relying on event-related brain potentials (ERPs), we investigated the automatic nature of gender categorization, focusing on different stages of the ongoing process. In particular, we explored the degree to which gender categorization occurs automatically by manipulating the semantic vs. nonsemantic processing goals requested by the task (Study 1) and the complexity of the task itself (Study 2). Results of Study 1 highlighted the automatic nature of categorization at an early (N170) and at a later processing stage (P300). Findings of Study 2 showed that at an early stage categorization was automatically driven by the ease of extraction of category-based knowledge from faces while, at a later stage, categorization was more influenced by situational constraints.
34. Effective processing of masked eye gaze requires volitional control. Exp Brain Res 2011; 216:433-43. PMID: 22101495. DOI: 10.1007/s00221-011-2944-0.
Abstract
The purpose of the present study was to establish whether the validity effect produced by masked eye gaze cues should be attributed to strictly reflexive mechanisms or to volitional top-down mechanisms. While we find that masked eye gaze cues are effective in producing a validity effect in a central cueing paradigm, we also find that the efficacy of masked gaze cues is sharply constrained by the experimental context. Specifically, masked gaze cues only produced a validity effect when they appeared in the context of unmasked and predictive gaze cues. Unmasked gaze cues, in contrast, produced reliable validity effects across a range of experimental contexts, including Experiment 4 where 80% of the cues were invalid (counter-predictive). Taken together, these results suggest that the effective processing of masked gaze cues requires volitional control, whereas the processing of unmasked (clearly visible) gaze cues appears to benefit from both reflexive and top-down mechanisms.
35. Voss JL, Federmeier KD, Paller KA. The potato chip really does look like Elvis! Neural hallmarks of conceptual processing associated with finding novel shapes subjectively meaningful. Cereb Cortex 2011; 22:2354-64. PMID: 22079921. DOI: 10.1093/cercor/bhr315.
Abstract
Clouds and inkblots often compellingly resemble something else--faces, animals, or other identifiable objects. Here, we investigated illusions of meaning produced by novel visual shapes. Individuals found some shapes meaningful and others meaningless, with considerable variability among individuals in these subjective categorizations. Repetition for shapes endorsed as meaningful produced conceptual priming in a priming test along with concurrent activity reductions in cortical regions associated with conceptual processing of real objects. Subjectively meaningless shapes elicited robust activity in the same brain areas, but activity was not influenced by repetition. Thus, all shapes were conceptually evaluated, but stable conceptual representations supported neural priming for meaningful shapes only. During a recognition memory test, performance was associated with increased frontoparietal activity, regardless of meaningfulness. In contrast, neural conceptual priming effects for meaningful shapes occurred during both priming and recognition testing. These different patterns of brain activation as a function of stimulus repetition, type of memory test, and subjective meaningfulness underscore the distinctive neural bases of conceptual fluency versus episodic memory retrieval. Finding meaning in ambiguous stimuli appears to depend on conceptual evaluation and cortical processing events similar to those typically observed for known objects. To the brain, the vaguely Elvis-like potato chip truly can provide a substitute for the King himself.
Affiliation(s)
- Joel L Voss: Beckman Institute for Advanced Science and Technology, Urbana, IL 61801, USA
|
36
|
Babiloni C, Del Percio C, Triggiani AI, Marzano N, Valenzano A, De Rosas M, Petito A, Bellomo A, Lecce B, Mundi C, Limatola C, Cibelli G. Frontal-parietal responses to “oddball” stimuli depicting “fattened” faces are increased in successful dieters: An electroencephalographic study. Int J Psychophysiol 2011; 82:153-66. [DOI: 10.1016/j.ijpsycho.2011.08.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2010] [Revised: 08/02/2011] [Accepted: 08/03/2011] [Indexed: 11/25/2022]
37. Brain potentials show rapid activation of implicit attitudes towards young and old people. Brain Res 2011; 1429:98-105. PMID: 22088825. DOI: 10.1016/j.brainres.2011.10.032.
Abstract
While previous behavioural research suggests that attitudes, for example towards elderly people, may be activated automatically, this type of research does not provide information about the detailed time course of such processing in the brain. We investigated the impact of age-related attitude information in a Go/NoGo association task that paired photographs of elderly or young faces with positive or negative words. Event-related brain potentials showed an N200 (NoGo) component, which appeared earlier in runs that required similar responses for congruent stimulus pairings (e.g., respond to pictures of elderly faces or negative words) than for incongruent pairings (e.g., respond to elderly faces or positive words). As the information processing leading to a certain attitude must precede differential brain activity according to the congruence of the paired words and faces, we show that this type of information is activated almost immediately following the structural encoding of the face, between 170 and 230 ms after face onset.
38. Moulson MC, Balas B, Nelson C, Sinha P. EEG correlates of categorical and graded face perception. Neuropsychologia 2011; 49:3847-53. [PMID: 22001852] [DOI: 10.1016/j.neuropsychologia.2011.09.046]
Abstract
Face perception is a critical social ability and identifying its neural correlates is important from both basic and applied perspectives. In EEG recordings, faces elicit a distinct electrophysiological signature, the N170, which has a larger amplitude and shorter latency in response to faces compared to other objects. However, determining the face specificity of any neural marker for face perception hinges on finding an appropriate control stimulus. We used a novel stimulus set consisting of 300 images that spanned a continuum between random patches of natural scenes and genuine faces, in order to explore the selectivity of face-sensitive ERP responses with a model-based parametric stimulus set. Critically, our database contained "false alarm" images that were misclassified as faces by a computational face-detection system and varied in their image-level similarity to real faces. High-density (128-channel) event-related potentials (ERPs) were recorded while 23 adult subjects viewed all 300 images in random order, and determined whether each image was a face or non-face. The goal of our analyses was to determine the extent to which a gradient of sensitivity to face-like structure was evident in the ERP signal. Traditional waveform analyses revealed that the N170 component over occipitotemporal electrodes was larger in amplitude for faces compared to all non-faces, even those that were high in image similarity to faces, suggesting strict selectivity for veridical face stimuli. By contrast, single-trial classification of the entire waveform measured at the same sensors revealed that misclassifications of non-face patterns as faces increased with image-level similarity to faces. These results suggest that individual components may exhibit steep selectivity, but integration of multiple waveform features may afford graded information regarding stimulus appearance.
39. Dering B, Martin CD, Moro S, Pegna AJ, Thierry G. Face-sensitive processes one hundred milliseconds after picture onset. Front Hum Neurosci 2011; 5:93. [PMID: 21954382] [PMCID: PMC3173839] [DOI: 10.3389/fnhum.2011.00093]
Abstract
The human face is the most studied object category in visual neuroscience. In a quest for markers of face processing, event-related potential (ERP) studies have debated whether two peaks of activity – P1 and N170 – are category-selective. Whilst most studies have used unaltered photographs of faces, others have used cropped faces in an attempt to reduce the influence of features surrounding the “face–object” sensu stricto. However, results from studies comparing cropped faces with unaltered objects from other categories are inconsistent with results from studies comparing whole faces and objects. Here, we recorded ERPs elicited by full front views of faces and cars, either unaltered or cropped. We found that cropping artificially enhanced the N170 whereas it did not significantly modulate P1. In a second experiment, we compared faces and butterflies, either unaltered or cropped, matched for size and luminance across conditions, and within a narrow contrast bracket. Results of Experiment 2 replicated the main findings of Experiment 1. We then used face–car morphs in a third experiment to manipulate the perceived face-likeness of stimuli (100% face, 70% face and 30% car, 30% face and 70% car, or 100% car), and the N170 failed to differentiate between faces and cars. Critically, in all three experiments, P1 amplitude was modulated in a face-sensitive fashion independent of cropping or morphing. Therefore, the P1 is a reliable marker of face-sensitive processing as early as 100 ms after picture onset.
40. Babiloni C, Vecchio F, Buffo P, Buttiglione M, Cibelli G, Rossini PM. Cortical responses to consciousness of schematic emotional facial expressions: a high-resolution EEG study. Hum Brain Mapp 2011; 31:1556-69. [PMID: 20143385] [DOI: 10.1002/hbm.20958]
Abstract
Is conscious perception of emotional face expression related to enhanced cortical responses? Electroencephalographic data (112 channels) were recorded in 15 normal adults during the presentation of cue stimuli with neutral, happy or sad schematic faces (duration: "threshold time" inducing about 50% of correct recognitions), masking stimuli (2 s), and go stimuli with happy or sad schematic faces (0.5 s). The subjects clicked the left (right) mouse button in response to go stimuli with happy (sad) faces. After the response, they said "seen" or "not seen" with reference to the previous cue stimulus. Electroencephalographic data formed visual event-related potentials (ERPs). Cortical sources of ERPs were estimated by LORETA software. Reaction time to go stimuli was generally shorter during "seen" than "not seen" trials, possibly due to covert attention and awareness. The cue stimuli evoked four ERP components (posterior N100, N170, P200, and P300), which had similar peak latency in the "not seen" and "seen" ERPs. Only the N170 showed amplitude differences between the "seen" and "not seen" ERPs. Compared to the "not seen" ERPs, the "seen" ones showed prefrontal, premotor, and posterior parietal sources of N170 higher in amplitude with the sad cue stimuli and lower in amplitude with the neutral and happy cue stimuli. These results suggest that nonconscious and conscious processing of schematic emotional facial expressions share a similar temporal evolution of cortical activity, and that conscious processing induces an early enhancement of bilateral cortical activity for the sad schematic facial expressions (N170).
Affiliation(s)
- Claudio Babiloni
- Department of Biomedical Sciences, University of Foggia, Foggia, Italy
41. Babiloni C, Del Percio C, Triggiani AI, Marzano N, Valenzano A, Petito A, Bellomo A, Soricelli A, Lecce B, Mundi C, Limatola C, Cibelli G. Attention cortical responses to enlarged faces are reduced in underweight subjects: An electroencephalographic study. Clin Neurophysiol 2011; 122:1348-59. [DOI: 10.1016/j.clinph.2010.12.001]
42. Gordon I, Tanaka JW. Putting a name to a face: the role of name labels in the formation of face memories. J Cogn Neurosci 2011; 23:3280-93. [PMID: 21557646] [DOI: 10.1162/jocn_a_00036]
Abstract
Although previous ERP research has focused on the conditions under which faces are recognized, less attention has been paid to the process by which face representations are acquired and maintained. In Experiment 1, participants were required to monitor for a target "Joe" face that was shown among a series of nontarget "Other" faces. At the halfway point, participants were instructed to switch targets from the Joe face to a previous nontarget face that is now labeled "Bob." The ERP analysis focused on the posterior N250 component known to index face familiarity and the P300 component associated with context updating and response decision. Results showed that, in the first half of the experiment, there was an increase in N250 negativity to the target Joe face compared with the nontarget Bob and designated Other faces. In the second half of the experiment, an enhanced N250 negativity was produced to the now-target Bob face compared with the Other face. Critically, the enhanced N250 negativity to the Joe face was maintained, although Joe was no longer the target. The P300 component followed a similar pattern of brain response, where the Joe face elicited a significantly larger P300 amplitude than the Other face and the Bob face. In the Bob half of the experiment, the Bob face elicited a reliably larger P300 than the Other faces, and the heightened P300 to the Joe face was sustained. In Experiment 2, we examined whether the increased N250 and P300 to Joe was due to simple naming effects. Participants were introduced to both Joe and Bob faces and names at the beginning of the experiment. In the first half of the experiment, participants monitored for the target Joe face and at the halfway point, they were instructed to switch targets to the Bob face.
Findings show that N250 negativity significantly increased to the Joe face relative to the Bob and Other faces in the first half of the experiment and an enhanced N250 negativity was found for the target Bob face and the nontarget Joe face in the second half. An increased P300 amplitude was demonstrated to the target Joe and Bob faces in the first and second halves of the experiment, respectively. Importantly, the P300 amplitude elicited by the Joe face equaled the P300 amplitude to the Bob face, although it was no longer the target face. The findings from Experiments 1 and 2 suggest that the N250 component is not solely determined by name labeling, exposure, or task relevancy, but it is the combination of these factors that contribute to the acquisition of enduring face representations.
Affiliation(s)
- Iris Gordon
- University of Victoria, Victoria, British Columbia, Canada.
43. Rossion B, Caharel S. ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Res 2011; 51:1297-311. [PMID: 21549144] [DOI: 10.1016/j.visres.2011.04.003]
Abstract
How fast are visual stimuli categorized as faces by the human brain? Because of their high temporal resolution and the possibility to record simultaneously from the whole brain, electromagnetic scalp measurements should be the ideal method to clarify this issue. However, this question remains debated, with studies reporting face-sensitive responses varying from 50 ms to 200 ms following stimulus onset. Here we disentangle the contribution of the information associated with the phenomenological experience of a face (phase) from low-level visual cues (amplitude spectrum, color) in accounting for early face-sensitivity in the human brain. Pictures of faces and of a category of familiar objects (cars), as well as their phase-scrambled versions, were presented to fifteen human participants tested with high-density (128 channels) EEG. We replicated an early face-sensitivity - larger response to pictures of faces than cars - at the level of the occipital event-related potential (ERP) P1 (80- ). However, a similar larger P1 to phase-scrambled faces than phase-scrambled cars was also found. In contrast, the occipito-temporal N170 was much larger in amplitude for pictures of intact faces than cars, especially in the right hemisphere, while the small N170 elicited by phase-scrambled stimuli did not differ for faces and cars. These findings show that sensitivity to faces on the visual evoked potentials P1 and N1 (N170) is functionally dissociated: the P1 face-sensitivity is driven by low-level visual cues while the N1 (or N170) face-sensitivity reflects the perception of a face. Altogether, these observations indicate that the earliest access to a high-level face representation, that is, a face percept, does not precede the N170 onset in the human brain. 
Furthermore, they help resolve apparent discrepancies between the timing of rapid human saccades towards faces and the early activation of high-level facial representations shown by electrophysiological studies in the primate brain. More generally, they put strong constraints on the interpretation of early (before 100 ms) face-sensitive effects in the human brain.
Affiliation(s)
- Bruno Rossion
- Institute of Psychology, Institute of Neuroscience, Université Catholique de Louvain, Belgium.
44. Pesciarelli F, Sarlo M, Leo I. The time course of implicit processing of facial features: An event-related potential study. Neuropsychologia 2011; 49:1154-1161. [PMID: 21315094] [DOI: 10.1016/j.neuropsychologia.2011.02.003]
Affiliation(s)
- F Pesciarelli
- Department of Biomedical Sciences, University of Modena and Reggio Emilia, Via Campi 287, 41100 Modena, Italy.
| | - M Sarlo
- Department of General Psychology, University of Padova, Padova, Italy
| | - I Leo
- Department of Developmental Psychology, University of Padova, Padova, Italy
45. Amihai I, Deouell LY, Bentin S. Neural adaptation is related to face repetition irrespective of identity: a reappraisal of the N170 effect. Exp Brain Res 2011; 209:193-204. [PMID: 21287156] [DOI: 10.1007/s00221-011-2546-x]
Abstract
Event-related potentials offer evidence for face-distinctive neural activity that peaks at about 170 ms following the onset of face stimuli (the N170 effect). We investigated the role of the perceptual mechanism reflected by the N170 effect by comparing the adaptation of the N170 amplitude when target faces were preceded either by identical face images or by different faces relative to when they were preceded by objects. In two experiments, we demonstrate that the N170 is equally adapted by repetition of the same or different faces. Thus, our findings show that the N170 is sensitive to the category rather than the identity of a face. This outcome supports the hypothesis that the N170 effect reflects the activity of a perceptual mechanism which discriminates faces from objects and streams face stimuli to dedicated circuits, specialized in encoding and decoding information about the face.
Affiliation(s)
- Ido Amihai
- Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91905, Israel
46. Eimer M, Gosling A, Nicholas S, Kiss M. The N170 component and its links to configural face processing: A rapid neural adaptation study. Brain Res 2011; 1376:76-87. [DOI: 10.1016/j.brainres.2010.12.046]
47. Early influence of prior experience on face perception. Neuroimage 2011; 54:1415-26. [DOI: 10.1016/j.neuroimage.2010.08.081]
48.

49. Jemel B, Schuller AM, Goffaux V. Characterizing the spatio-temporal dynamics of the neural events occurring prior to and up to overt recognition of famous faces. J Cogn Neurosci 2010; 22:2289-305. [PMID: 19642891] [DOI: 10.1162/jocn.2009.21320]
Abstract
Although it is generally acknowledged that familiar face recognition is fast, mandatory, and proceeds outside conscious control, it is still unclear whether processes leading to familiar face recognition occur in a linear (i.e., gradual) or a nonlinear (i.e., all-or-none) manner. To test these two alternative accounts, we recorded scalp ERPs while participants indicated whether they recognized as familiar the faces of famous and unfamiliar persons gradually revealed in a descending sequence of frames, from the noisiest to the least noisy. This presentation procedure allowed us to characterize the changes in scalp ERP responses occurring prior to and up to overt recognition. Our main finding is that both gradual and all-or-none processes may be involved in overt recognition of familiar faces. Although the N170 and the N250 face-sensitive responses displayed an abrupt activity change at the moment of overt recognition of famous faces, later ERPs encompassing the N400 and the late positive component exhibited an incremental increase in amplitude as the point of recognition approached. In addition, famous faces that were not overtly recognized one trial before recognition elicited larger ERP amplitudes than unfamiliar faces, probably reflecting a covert recognition process. Overall, these findings provide evidence that recognition of familiar faces implicates spatio-temporally complex neural processes exhibiting differential patterns of activity change as a function of recognition state.
50.
Abstract
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.
Affiliation(s)
- Orit Hershler
- Department of Neurobiology, Hebrew University, Jerusalem, Israel.