1
Skog E, Qian CS, Parmar A, Schofield AJ. What surprises the Mona Lisa? The relative importance of the eyes and eyebrows for detecting surprise in briefly presented face stimuli. Vision Res 2023; 211:108275. [PMID: 37429054] [DOI: 10.1016/j.visres.2023.108275]
Abstract
The classification image (CI) technique has been used to derive templates for judgements of facial emotion and reveal which facial features inform specific emotional judgements. For example, this method has been used to show that detecting an up- or down-turned mouth is a primary strategy for discriminating happy versus sad expressions. We explored the detection of surprise using CIs, expecting widened eyes, raised eyebrows, and open mouths to be dominant features. We briefly presented a photograph of a female face with a neutral expression embedded in random visual noise, which modulated the appearance of the face on a trial-by-trial basis. In separate sessions, we showed this face with or without eyebrows to test the importance of the raised eyebrow element of surprise. Noise samples were aggregated into CIs based on participant responses. Results show that the eye-region was most informative for detecting surprise. Unless attention was specifically directed to the mouth, we found no effects in the mouth region. The eye effect was stronger when the eyebrows were absent, but the eyebrow region was not itself informative and people did not infer eyebrows when they were missing. A follow-up study was conducted in which participants rated the emotional valence of the neutral images combined with their associated CIs. This verified that CIs for 'surprise' convey surprised expressions, while also showing that CIs for 'not surprise' convey disgust. We conclude that the eye-region is important for the detection of surprise.
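The response-contingent aggregation of noise samples described above is, at its core, a difference of averages. A minimal sketch in Python/NumPy (the array shapes and the toy one-pixel observer are illustrative assumptions, not the authors' code):

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Difference of response-conditioned averages: mean noise field on
    'surprise' trials minus mean noise field on 'not surprise' trials."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return noise_fields[responses].mean(axis=0) - noise_fields[~responses].mean(axis=0)

# Toy observer: noise at one pixel drives the response, so the CI peaks there.
rng = np.random.default_rng(0)
noise = rng.normal(size=(2000, 8, 8))
responded_surprise = noise[:, 3, 3] > 0
ci = classification_image(noise, responded_surprise)
```

In practice the same averaging is applied per participant and per condition (eyebrows present vs absent), and the resulting template is inspected region by region.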
Affiliation(s)
- Emil Skog
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, United Kingdom
- C Stella Qian
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, United Kingdom
- Anisha Parmar
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, United Kingdom
- Andrew J Schofield
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, United Kingdom
2
Orczyk J, Schroeder CE, Abeles IY, Gomez-Ramirez M, Butler PD, Kajikawa Y. Comparison of Scalp ERP to Faces in Macaques and Humans. Front Syst Neurosci 2021; 15:667611. [PMID: 33967709] [PMCID: PMC8101630] [DOI: 10.3389/fnsys.2021.667611]
Abstract
Face recognition is an essential activity of social living, common to many primate species. Underlying processes in the brain have been investigated using various techniques and compared between species. Functional imaging studies have shown face-selective cortical regions and their degree of correspondence across species. However, the temporal dynamics of face processing, particularly processing speed, are likely different between them. Across sensory modalities, activation of primary sensory cortices in macaque monkeys occurs at about 3/5 the latency of corresponding activation in humans, though this human-simian difference may diminish or disappear in higher cortical regions. We recorded scalp event-related potentials (ERPs) to presentation of faces in macaques and estimated the peak latency of ERP components. Comparison of latencies between macaques (112 ms) and humans (192 ms) suggested that the 3:5 ratio could be preserved in higher cognitive regions of face processing between those species.
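The quoted latencies make the claimed ratio easy to verify with a line of arithmetic:

```python
# Peak latencies of face-evoked ERP components reported above (milliseconds).
macaque_ms, human_ms = 112, 192
ratio = macaque_ms / human_ms  # 112/192 ≈ 0.583
# Close to the 3:5 (= 0.6) macaque:human latency ratio described for
# primary sensory cortices across modalities.
```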
Affiliation(s)
- John Orczyk
- Translational Neuroscience Division, Center for Biological Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Charles E Schroeder
- Translational Neuroscience Division, Center for Biological Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Department of Neurological Surgery, Vagelos College of Physicians and Surgeons, Columbia University Medical Center, New York, NY, United States
- Ilana Y Abeles
- Clinical Research Department, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Manuel Gomez-Ramirez
- Clinical Research Department, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Pamela D Butler
- Clinical Research Department, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Psychiatry Department, School of Medicine, New York University, New York, NY, United States
- Yoshinao Kajikawa
- Translational Neuroscience Division, Center for Biological Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Psychiatry Department, School of Medicine, New York University, New York, NY, United States
3
Song Y, Qu Y, Xu S, Liu J. Implementation-Independent Representation for Deep Convolutional Neural Networks and Humans in Processing Faces. Front Comput Neurosci 2021; 14:601314. [PMID: 33574746] [PMCID: PMC7870475] [DOI: 10.3389/fncom.2020.601314]
Abstract
Deep convolutional neural networks (DCNNs) can now match human performance in challenging complex tasks, but it remains unknown whether DCNNs achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit the representations of DCNNs and humans when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, prior task experience, in which VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification) as humans do, seemed necessary for such representational similarity, because AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded in gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar, implementation-independent representation to achieve the same computational goal.
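One way to quantify this kind of representational similarity is to band-limit each reverse-correlation template and correlate the results. A hedged sketch (the cutoff, image size, and choice of Pearson correlation are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def lowpass(template, cutoff):
    """Zero out spatial-frequency components above `cutoff` (cycles/image)."""
    n = template.shape[0]
    f = np.fft.fftshift(np.fft.fft2(template))
    yy, xx = np.mgrid[-(n // 2):n - n // 2, -(n // 2):n - n // 2]
    f[np.hypot(yy, xx) > cutoff] = 0
    return np.fft.ifft2(np.fft.ifftshift(f)).real

def template_similarity(a, b, cutoff=8):
    """Pearson correlation of two low-pass-filtered templates."""
    return np.corrcoef(lowpass(a, cutoff).ravel(), lowpass(b, cutoff).ravel())[0, 1]

# Toy check: a template always matches itself perfectly after filtering.
rng = np.random.default_rng(2)
template = rng.normal(size=(32, 32))
self_similarity = template_similarity(template, template)
```

The interesting comparison is, of course, between a human-derived and a DCNN-derived template at matched cutoffs, where high correlation at low cutoffs would mirror the finding above.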
Affiliation(s)
- Yiying Song
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, China
- Yukun Qu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Shan Xu
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, China
- Jia Liu
- Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
4
Rossion B, Retter TL, Liu-Shuang J. Understanding human individuation of unfamiliar faces with oddball fast periodic visual stimulation and electroencephalography. Eur J Neurosci 2020; 52:4283-4344. [DOI: 10.1111/ejn.14865]
Affiliation(s)
- Bruno Rossion
- CNRS, CRAN UMR 7039, Université de Lorraine, F-54000 Nancy, France
- Service de Neurologie, CHRU-Nancy, Université de Lorraine, F-54000 Nancy, France
- Talia L. Retter
- Department of Behavioural and Cognitive Sciences, Faculty of Language and Literature, Humanities, Arts and Education, University of Luxembourg, Luxembourg, Luxembourg
- Joan Liu-Shuang
- Institute of Research in Psychological Science, Institute of Neuroscience, Université de Louvain, Louvain-la-Neuve, Belgium
5
Rossion B, Taubert J. What can we learn about human individual face recognition from experimental studies in monkeys? Vision Res 2019; 157:142-158. [DOI: 10.1016/j.visres.2018.03.012]
6
Carole P. Pictorial Competence in Primates: A Cognitive Correlate of Mirror Self-Recognition? Primates 2018. [DOI: 10.5772/intechopen.75568]
7
The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction. eNeuro 2018; 5:eN-NWR-0358-17. [PMID: 29492452] [PMCID: PMC5829556] [DOI: 10.1523/eneuro.0358-17.2018]
Abstract
Uncovering the neural dynamics of facial identity processing, along with its representational basis, outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support facial identity classification, face space estimation, visual feature extraction, and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
8
Visual discrimination of primate species based on faces in chimpanzees. Primates 2018; 59:243-251. [DOI: 10.1007/s10329-018-0649-8]
9
Nakata R, Eifuku S, Tamura R. Crucial information for efficient face searching by humans and Japanese macaques. Anim Cogn 2017; 21:155-164. [PMID: 29256143] [DOI: 10.1007/s10071-017-1148-9]
Abstract
Humans can efficiently detect a face among non-face objects, but few studies of this ability have been conducted in animals. Here, in Japanese macaques and humans, we examined visual search for a face and explored which factors contribute to efficient facial information processing. Subjects were asked to search for an odd target among different numbers of distracters. The targets were faces of the subjects' own species, the backs of heads of the subjects' own species, faces of a species or race closely related to the subjects, and faces of species clearly different from the subjects' own. Both the macaques and humans detected a face of their own species more efficiently than a face from a clearly different species. Similarly efficient detection was confirmed for faces of closely related species or races. These results suggest that conspecific faces, and faces that share morphological similarity with conspecific faces, can be detected efficiently among non-face objects by both humans and Japanese macaques. In another experiment, search was more efficient when subjects looked for own-species faces composed of lower-spatial-frequency components than for faces composed of higher-spatial-frequency components. It seems reasonable that the ability to search efficiently for faces by using holistic face processing derives from fundamental social-cognition abilities that are broadly shared among species.
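Search efficiency in odd-target tasks of this kind is conventionally summarized as the slope of reaction time against display size (near-flat slopes indicate efficient, pop-out search). A minimal sketch; the RT values below are invented for illustration, not data from the study:

```python
import numpy as np

def search_slope(set_sizes, rts_ms):
    """Least-squares slope (ms per item) of reaction time vs display size.
    Near-zero slopes indicate efficient, pop-out search; steep slopes
    indicate inefficient, item-by-item search."""
    slope, _intercept = np.polyfit(set_sizes, rts_ms, 1)
    return slope

sizes = [4, 8, 12, 16]
own_face_rts = [420, 425, 431, 436]       # hypothetical RTs: near-flat, efficient
other_species_rts = [450, 530, 610, 690]  # hypothetical RTs: steep, inefficient
efficient = search_slope(sizes, own_face_rts)
inefficient = search_slope(sizes, other_species_rts)
```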
Affiliation(s)
- Ryuzaburo Nakata
- Graduate School of Informatics, Nagoya University, Furocho, Nagoya, 464-8601, Japan
- Satoshi Eifuku
- Department of Systems Neuroscience, School of Medicine, Fukushima Medical University, 1 Hikariga-oka, Fukushima, 960-1295, Japan
- Ryoi Tamura
- Department of Integrative Neuroscience, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, 2630 Sugitani, Toyama, 930-0194, Japan
10
Rossion B, Taubert J. Commentary: The Code for Facial Identity in the Primate Brain. Front Hum Neurosci 2017; 11:550. [PMID: 29184489] [PMCID: PMC5694553] [DOI: 10.3389/fnhum.2017.00550]
Affiliation(s)
- Bruno Rossion
- Face Categorization Lab, Psychological Sciences Research Institute and Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Service de Neurologie, Centre Hospitalier Universitaire de Nancy, Nancy, France
- Jessica Taubert
- School of Psychology, University of Sydney, Sydney, NSW, Australia
11
Chang CH, Nemrodov D, Lee ACH, Nestor A. Memory and Perception-based Facial Image Reconstruction. Sci Rep 2017; 7:6499. [PMID: 28747686] [PMCID: PMC5529548] [DOI: 10.1038/s41598-017-06585-2]
Abstract
Visual memory for faces has been extensively researched, especially regarding the main factors that influence face memorability. However, what we remember exactly about a face, namely, the pictorial content of visual memory, remains largely unclear. The current work aims to elucidate this issue by reconstructing face images from both perceptual and memory-based behavioural data. Specifically, our work builds upon and further validates the hypothesis that visual memory and perception share a common representational basis underlying facial identity recognition. To this end, we derived facial features directly from perceptual data and then used such features for image reconstruction separately from perception and memory data. Successful levels of reconstruction were achieved in both cases for newly-learned faces as well as for familiar faces retrieved from long-term memory. Theoretically, this work provides insights into the content of memory-based representations while, practically, it may open the path to novel applications, such as computer-based 'sketch artists'.
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Dan Nemrodov
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
12
Lamaury A, Cochet H, Bourjade M. Acquisition of joint attention by olive baboons gesturing toward humans. Anim Cogn 2017; 22:567-575. [PMID: 28695348] [DOI: 10.1007/s10071-017-1111-9]
Abstract
Joint attention is a core ability of human social cognition which broadly refers to the coordination of attention with both the presence and activity of social partners. In both human and non-human primates, joint attention can be assessed from behaviour; gestures and gaze alternation between the partner and a distal object are standard behavioural manifestations of joint attention. Here we examined the acquisition of joint attention in olive baboons as a function of their individual experience of a human partner's attentional states during training. Eleven olive baboons (Papio anubis) were observed while being trained to perform food-requesting gestures, either by (1) a human facing them (face condition) or (2) a human positioned in profile who never turned toward them (profile condition). We found that neither gestures nor gaze alternation was present at the start of training; both developed over the training period. Only baboons in the face condition showed an increase in the number of gaze alternations, and their gaze pattern progressively shifted to a coordinated sequence in which gazes and gestures were coordinated in time. In contrast, baboons trained by a human in profile showed significantly less coordination of gazes with gestures but still learned to request food with their gestures. These results suggest that the partner's social attention plays an important role in the acquisition of visual joint attention and, to a lesser extent, in gesture learning in baboons. Interspecific interactions appear to offer rich opportunities to manipulate, and thus identify, the social contexts in which socio-communicative skills develop.
Affiliation(s)
- Augustine Lamaury
- UMR 5263 Cognition Langues Langage Ergonomie - Laboratoire Travail et Cognition (CLLE-LTC), Maison de la Recherche C-616, Université Toulouse Jean Jaurès, 5 allées Antonio Machado, 31058 Toulouse Cedex, France
- Hélène Cochet
- UMR 5263 Cognition Langues Langage Ergonomie - Laboratoire Travail et Cognition (CLLE-LTC), Maison de la Recherche C-616, Université Toulouse Jean Jaurès, 5 allées Antonio Machado, 31058 Toulouse Cedex, France
- Marie Bourjade
- UMR 5263 Cognition Langues Langage Ergonomie - Laboratoire Travail et Cognition (CLLE-LTC), Maison de la Recherche C-616, Université Toulouse Jean Jaurès, 5 allées Antonio Machado, 31058 Toulouse Cedex, France
- Station de Primatologie UPS 846, Centre National de la Recherche Scientifique, Rousset, France
13
Diamond RFL, Stoinski TS, Mickelberg JL, Basile BM, Gazes RP, Templer VL, Hampton RR. Similar stimulus features control visual classification in orangutans and rhesus monkeys. J Exp Anal Behav 2016; 105:100-10. [PMID: 26615515] [PMCID: PMC6413319] [DOI: 10.1002/jeab.176]
Abstract
Many species classify images according to visual attributes. In pigeons, local features may disproportionately control classification, whereas in primates global features may exert greater control. In the absence of explicitly comparative studies, in which different species are tested with the same stimuli under similar conditions, it is not possible to determine how much of the variation in the control of classification is due to species differences and how much is due to differences in the stimuli, training, or testing conditions. We tested rhesus monkeys (Macaca mulatta) and orangutans (Pongo pygmaeus and Pongo abelii) in identical tests in which images were modified to determine which stimulus features controlled classification. Monkeys and orangutans were trained to classify full color images of birds, fish, flowers, and people; they were later given generalization tests in which images were novel, black and white, black and white line drawings, or scrambled. Classification in these primate species was controlled by multiple stimulus attributes, both global and local, and the species behaved similarly.
Affiliation(s)
- Tara S. Stoinski
- Zoo Atlanta, Atlanta, GA
- Dian Fossey Gorilla Fund International, Atlanta, GA
- Robert R. Hampton
- Emory University and Yerkes National Primate Research Center, Atlanta, GA
14
Using the reassignment procedure to test object representation in pigeons and people. Learn Behav 2015; 43:188-207. [PMID: 25762428] [DOI: 10.3758/s13420-015-0173-2]
Abstract
In four experiments, we evaluated Lea's (1984) reassignment procedure for studying object representation in pigeons (Experiments 1-3) and humans (Experiment 4). In the initial phase of Experiment 1, pigeons were taught to make discriminative button responses to five views of each of four objects. Using the same set of buttons in the second phase, one view of each object was trained to a different button. In the final phase, the four views that had been withheld in the second stage were shown. In Experiment 2, pigeons were initially trained just like the birds in Experiment 1. Then, one view of each object was reassigned to a different button, now using a new set of four response buttons. In Experiment 3, the reassignment paradigm was again tested using the number of pecks to bind together different views of the same object. Across all three experiments, pigeons showed statistically significant generalization of the new response to the non-reassigned views, but such responding was well below that to the reassigned view. In Experiment 4, human participants were studied using the same stimuli and task as the pigeons in Experiment 1. People did strongly generalize the new response to the non-reassigned views. These results indicate that humans, but not pigeons, can employ a unified object representation that they can flexibly map to different responses under the reassignment procedure.
15
Olive baboons, Papio anubis, adjust their visual and auditory intentional gestures to the visual attention of others. Anim Behav 2014. [DOI: 10.1016/j.anbehav.2013.10.019]
16
Nakata R, Eifuku S, Tamura R. Effects of tilted orientations and face-like configurations on visual search asymmetry in macaques. Anim Cogn 2013; 17:67-76. [PMID: 23661410] [DOI: 10.1007/s10071-013-0638-7]
Abstract
Visual search asymmetry has been used as an important tool for exploring cognitive mechanisms in humans. Here, we examined visual search asymmetry in two macaques toward two types of stimuli: the orientation of line stimuli and face-like stimuli. In the experiment, the monkeys were required to detect an odd target among numerous uniform distracters. The monkeys detected a tilted-lines target among horizontal- or vertical-lined distracters significantly faster than a horizontal- or vertical-lined target among tilted-lined distracters, regardless of display size. However, this effect was diminished when upright-face stimuli, rather than inverted-face stimuli, were used as distracters. Additionally, the monkeys detected an upright-face target among inverted-face distracters significantly faster than an inverted-face target among upright-face distracters, regardless of display size. These results demonstrate that macaques can efficiently detect both tilted lines among non-tilted lines and upright faces among inverted faces, and they clarify that several types of visual search asymmetry exist in macaques.
Affiliation(s)
- Ryuzaburo Nakata
- Department of Integrative Neuroscience, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, 2630 Sugitani, Toyama, 930-0194, Japan
17
Dahl CD, Rasch MJ, Tomonaga M, Adachi I. The face inversion effect in non-human primates revisited - an investigation in chimpanzees (Pan troglodytes). Sci Rep 2013; 3:2504. [PMID: 23978930] [PMCID: PMC3753590] [DOI: 10.1038/srep02504]
Abstract
Faces presented upside-down are harder to recognize than faces presented right-side up, an effect known as the face inversion effect. With inversion, the perceptual processing of the spatial relationships among facial parts is disrupted. Previous literature indicates a face inversion effect in chimpanzees toward familiar and conspecific faces. Although these results are not inconsistent with findings from humans, the methodology behind them has been controversial. Here, we employed a delayed matching-to-sample task to test captive chimpanzees on discriminating chimpanzee and human faces. Their performance deteriorated with inversion. More importantly, the deterioration in discrimination differed systematically between the two age groups of chimpanzee participants: young chimpanzees showed a stronger inversion effect for chimpanzee than for human faces, while old chimpanzees showed a stronger inversion effect for human than for chimpanzee faces. We conclude that the face inversion effect in chimpanzees is modulated by the level of expertise in face processing.
Affiliation(s)
- Christoph D. Dahl
- Primate Research Institute, Kyoto University, Section of Language and Intelligence, Inuyama, Aichi, Japan
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Malte J. Rasch
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Masaki Tomonaga
- Primate Research Institute, Kyoto University, Section of Language and Intelligence, Inuyama, Aichi, Japan
- Ikuma Adachi
- Primate Research Institute, Center for International Collaboration and Advanced Studies in Primatology, Inuyama, Aichi, Japan
18
The Thatcher illusion in squirrel monkeys (Saimiri sciureus). Anim Cogn 2012; 15:517-23. [DOI: 10.1007/s10071-012-0479-9]
19
Todorov A, Dotsch R, Wigboldus DHJ, Said CP. Data-driven Methods for Modeling Social Perception. Social and Personality Psychology Compass 2011. [DOI: 10.1111/j.1751-9004.2011.00389.x]
20
Dai H, Micheyl C. Psychophysical reverse correlation with multiple response alternatives. J Exp Psychol Hum Percept Perform 2010; 36:976-93. [PMID: 20695712] [PMCID: PMC3158580] [DOI: 10.1037/a0017171]
Abstract
Psychophysical reverse-correlation methods such as the "classification image" technique provide a unique tool to uncover the internal representations and decision strategies of individual participants in perceptual tasks. Over the past 30 years, these techniques have gained increasing popularity among both visual and auditory psychophysicists. However, thus far, principled applications of the psychophysical reverse-correlation approach have been almost exclusively limited to two-alternative decision (detection or discrimination) tasks. Whether and how reverse-correlation methods can be applied to uncover perceptual templates and decision strategies in situations involving more than just two response alternatives remain largely unclear. Here, the authors consider the problem of estimating perceptual templates and decision strategies in stimulus identification tasks with multiple response alternatives. They describe a modified correlational approach, which can be used to solve this problem. The approach is evaluated under a variety of simulated conditions, including different ratios of internal-to-external noise, different degrees of correlations between the sensory observations, and various statistical distributions of stimulus perturbations. The results indicate that the proposed approach is reasonably robust, suggesting that it could be used in future empirical studies.
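The two-alternative classification image generalizes naturally to one template per response alternative. A sketch of one simple correlational estimator of this kind (a simplified illustration, not necessarily the authors' exact estimator):

```python
import numpy as np

def templates_by_response(perturbations, responses, n_alternatives):
    """One template per response alternative: mean stimulus perturbation on
    trials ending in that response, minus the grand mean over all trials."""
    perturbations = np.asarray(perturbations, dtype=float)
    responses = np.asarray(responses)
    grand_mean = perturbations.mean(axis=0)
    return np.stack([perturbations[responses == k].mean(axis=0) - grand_mean
                     for k in range(n_alternatives)])

# Toy simulation: each of 3 alternatives is cued by one stimulus dimension,
# and the simulated observer reports the strongest dimension.
rng = np.random.default_rng(1)
perturbations = rng.normal(size=(3000, 3))
responses = perturbations.argmax(axis=1)
templates = templates_by_response(perturbations, responses, 3)
```

Each recovered template peaks on the dimension that cued its response, which is the qualitative behavior a multi-alternative estimator needs to show.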
Affiliation(s)
- Huanping Dai
- Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson, AZ 85721, USA.
21
Macé MJM, Delorme A, Richard G, Fabre-Thorpe M. Spotting animals in natural scenes: efficiency of humans and monkeys at very low contrasts. Anim Cogn 2009; 13:405-18. [PMID: 19921288] [DOI: 10.1007/s10071-009-0290-4]
Abstract
The ability of monkeys to categorize objects in visual stimuli such as natural scenes might rely on sets of low-level visual cues without any underlying conceptual abilities. Using a go/no-go rapid animal/non-animal categorization task with briefly flashed achromatic natural scenes, we show that both human and monkey performance is very robust to large variations of stimulus luminance and contrast. When mean luminance was increased or decreased by 25-50%, accuracy and speed impairments were small. The largest impairment was found at the highest luminance value, with monkeys mainly impaired in accuracy (drop of 6% correct vs. <1.5% in humans), whereas humans were mainly impaired in reaction time (20 ms increase in median reaction time vs. 4 ms in monkeys). Contrast reductions induced a large deterioration of image definition, but performance was again remarkably robust. Subjects scored well above chance level even when the contrast was only 12% of that of the original photographs (approximately 81% correct in monkeys; approximately 79% correct in humans). Accuracy decreased with contrast reduction but only reached chance level, in both species, in the most extreme condition, when only 3% of the original contrast remained. A progressive reaction time increase was observed, reaching 72 ms in monkeys and 66 ms in humans. These results demonstrate the remarkable robustness of the primate visual system in processing objects in natural scenes with large random variations in luminance and contrast. They illustrate the similarity with which performance is impaired in monkeys and humans under such stimulus manipulations. Finally, they show that in an animal categorization task, the performance of both monkeys and humans is largely independent of cues relying on global luminance or the fine definition of stimuli.
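A contrast manipulation of the kind described (e.g., keeping only 12% of the original contrast while leaving mean luminance unchanged) can be sketched as a rescaling about the image mean; a minimal illustration, not the authors' stimulus-generation code:

```python
import numpy as np

def scale_contrast(image, fraction):
    """Rescale image contrast about its mean luminance; the mean itself
    (overall brightness) is left unchanged."""
    image = np.asarray(image, dtype=float)
    mean = image.mean()
    return mean + fraction * (image - mean)

# Keep only 12% of the original contrast, as in one of the extreme conditions.
img = np.array([[0.2, 0.8], [0.4, 0.6]])
low = scale_contrast(img, 0.12)
```

Mean luminance is preserved exactly, while the standard deviation (a common contrast measure) shrinks by the given fraction.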
Affiliation(s)
- Marc J-M Macé
- Université de Toulouse, UPS, Centre de recherche Cerveau et Cognition, 133 route de Narbonne, Toulouse, France
22
Goto K. [Global and local processing in vision: perspectives from comparative cognition]. SHINRIGAKU KENKYU : THE JAPANESE JOURNAL OF PSYCHOLOGY 2009; 80:352-367. [PMID: 19938661 DOI: 10.4992/jjpsy.80.352] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Primates and birds are visually dominant species. Recent comparative studies in visual perception address questions about the differences between humans and nonhuman primates, as well as between primates and birds. This paper discusses the relative importance of global and local visual processing in primates and birds. Although most nonhuman animals, unlike humans, show a local advantage when processing hierarchical compound stimuli, studies using other types of stimuli revealed that primate vision may process global information prior to local information. In contrast, the importance of global processing for birds is restricted to ecologically important stimuli such as conspecific images. Both global and local precedence in vision are the result of animals' equally successful adaptations to their living environments, implying that globally oriented human vision is not the only optimal system.
Affiliation(s)
- Kazuhiro Goto
- Japan Society for the Promotion of Science, Keio University.
23
A comparative psychophysical approach to visual perception in primates. Primates 2009; 50:121-30. [DOI: 10.1007/s10329-008-0128-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2008] [Accepted: 12/23/2008] [Indexed: 11/26/2022]
24
Abstract
A continuing question in the object recognition literature is whether surface properties play a role in visual representation and recognition. Here, we examined the use of color as a cue in facial gender recognition by applying a version of reverse correlation to face categorization in CIE L*a*b* color space. We found that observers exploited color information to classify ambiguous signals embedded in chromatic noise. The method also allowed us to identify the specific spatial locations and the components of color used by observers. Although the color patterns found with human observers did not accurately mirror objective natural color differences, they suggest sensitivity to the contrast between the main features and the rest of the face. Overall, the results provide evidence that observers encode and can use the local color properties of faces, in particular, in tasks in which color provides diagnostic information and the availability of other cues is reduced.
25
Concept of uprightness in baboons: assessment with pictures of realistic scenes. Anim Cogn 2008; 12:369-79. [PMID: 18925421 DOI: 10.1007/s10071-008-0196-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2008] [Revised: 09/22/2008] [Accepted: 09/25/2008] [Indexed: 10/21/2022]
Abstract
How nonhuman primates process pictures of natural scenes or objects remains a matter of debate. This issue was addressed in the current research by questioning the processing of the canonical orientation of pictures in baboons. Two adult guinea baboons were trained to use an interactive key (IK) on a touch-screen to change the orientation of target pictures showing humans or quadruped mammals until upright. In Experiment 1, both baboons successfully learned to use the IK when that key induced a 90 degrees rightward rotation of the picture, but post-training transfer of performance did not occur to novel pictures of natural scenes due to potential motor biases. In Experiment 2, a touch on IK randomly displayed the pictures in any of the four cardinal orientations. Baboons successfully learned the task, but transfer to novel pictures could only be demonstrated after they had been exposed to 360-480 pictures in that condition. Experiment 3 confirmed positive transfers to novel pictures, and showed that both the figure and background information controlled the behavior. Our research on baboons therefore demonstrates the development and use of an "upright" concept, and indicates that picture processing modes strongly depend on the subject's past experience with naturalistic pictorial stimuli.
26
Marsh HL, MacDonald SE. The use of perceptual features in categorization by orangutans (Pongo abelli). Anim Cogn 2008; 11:569-85. [DOI: 10.1007/s10071-008-0148-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2007] [Revised: 01/13/2008] [Accepted: 02/18/2008] [Indexed: 11/25/2022]