101
Hills PJ. Children process the self face using configural and featural encoding: Evidence from eye tracking. Cognitive Development 2018. [DOI: 10.1016/j.cogdev.2018.07.002]
102
Facial perception of infants with cleft lip and palate with/without the NAM appliance. J Orofac Orthop 2018; 79:380-388. [DOI: 10.1007/s00056-018-0157-x]
103
One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context. Cortex 2018; 109:35-49. [PMID: 30286305] [DOI: 10.1016/j.cortex.2018.08.025]
Abstract
The N170 event-related potential component is an early marker of face perception that is particularly sensitive to isolated eye regions and to eye fixations within a face. Here, this eye sensitivity was tested further by measuring the N170 to isolated facial features and to the same features fixated within a face, using a gaze-contingent procedure. The neural response to single isolated eyes and eye regions (two eyes) was also compared. Pixel intensity and contrast were controlled at the global (image) and local (featural) levels. Consistent with previous findings, larger N170 amplitudes were elicited when the left or right eye was fixated within a face, compared to the mouth or nose, demonstrating that the N170 eye sensitivity reflects higher-order perceptual processes and not merely low-level perceptual effects. The N170 was also largest and most delayed for isolated features, compared to equivalent fixations within a face. Specifically, mouth fixation yielded the largest amplitude difference, and nose fixation yielded the largest latency difference between these two contexts, suggesting the N170 may reflect a complex interplay between holistic and featural processes. Critically, eye regions elicited consistently larger and shorter N170 responses compared to single eyes, with enhanced responses for contralateral eye content, irrespective of eye or nasion fixation. These results confirm the importance of the eyes in early face perception, and provide novel evidence of an increased sensitivity to the presence of two symmetric eyes compared to only one eye, consistent with a neural eye region detector rather than an eye detector per se.
104
A Vision Enhancement System to Improve Face Recognition with Central Vision Loss. Optom Vis Sci 2018; 95:738-746. [DOI: 10.1097/opx.0000000000001263]
105
Liu ZX, Shen K, Olsen RK, Ryan JD. Age-related changes in the relationship between visual exploration and hippocampal activity. Neuropsychologia 2018; 119:81-91. [PMID: 30075215] [DOI: 10.1016/j.neuropsychologia.2018.07.032]
Abstract
Deciphering the mechanisms underlying age-related memory declines remains an important goal in cognitive neuroscience. Recently, we observed that visual sampling behavior predicted activity within the hippocampus, a region critical for memory. In younger adults, increases in the number of gaze fixations were associated with increases in hippocampal activity (Liu et al., 2017). This finding suggests a close coupling between the oculomotor and memory system. However, the extent to which this coupling is altered with aging has not been investigated. In this study, we gave older adults the same face processing task used in Liu et al. (2017) and compared their visual exploration behavior and neural activation in the hippocampus and the fusiform face area (FFA) to those of younger adults. Compared to younger adults, older adults showed an increase in visual exploration as indexed by the number of gaze fixations. However, the relationship between visual exploration and neural responses in the hippocampus and FFA was weaker than that of younger adults. Older adults also showed weaker responses to novel faces and a smaller repetition suppression effect in the hippocampus and FFA compared to younger adults. Altogether, this study provides novel evidence that the capacity to bind visually sampled information, in real-time, into coherent representations along the ventral visual stream and the medial temporal lobe declines with aging.
Affiliation(s)
- Zhong-Xu Liu: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1.
- Kelly Shen: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1.
- Rosanna K Olsen: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Toronto, Ontario, Canada M5S 3G3.
- Jennifer D Ryan: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Toronto, Ontario, Canada M5S 3G3; Department of Psychiatry, University of Toronto, Canada.
106
Bodenschatz CM, Kersting A, Suslow T. Effects of Briefly Presented Masked Emotional Facial Expressions on Gaze Behavior: An Eye-Tracking Study. Psychol Rep 2018; 122:1432-1448. [PMID: 30032717] [DOI: 10.1177/0033294118789041]
Abstract
Orientation of gaze toward specific regions of the face such as the eyes or the mouth helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while completing an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression. After a happy facial expression, participants oriented their gaze more rapidly to the mouth region of the neutral mask. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and the dwell time on the mouth region was longest for happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.
Affiliation(s)
- Thomas Suslow: Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Leipzig, Germany.
107
Kawakami K, Friesen J, Vingilis-Jaremko L. Visual attention to members of own and other groups: Preferences, determinants, and consequences. Social and Personality Psychology Compass 2018. [DOI: 10.1111/spc3.12380]
108
Chakraborty A, Chakrabarti B. Looking at My Own Face: Visual Processing Strategies in Self-Other Face Recognition. Front Psychol 2018; 9:121. [PMID: 29487554] [PMCID: PMC5816906] [DOI: 10.3389/fpsyg.2018.00121]
Abstract
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
Affiliation(s)
- Anya Chakraborty: Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom.
- Bhismadev Chakrabarti: Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom.
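The distinctness index used in entry 108 is the slope of a psychometric function fitted to "self" responses across self-other morph levels. The sketch below shows one common way to estimate such a slope with a logistic fit; the morph levels, response proportions, and variable names are made-up illustrations, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Logistic psychometric function: x0 = 50% point, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: morph level (0 = other face, 100 = self face)
# and proportion of trials judged as "self" at each level.
morph_levels = np.array([0, 20, 40, 50, 60, 80, 100], dtype=float)
p_self = np.array([0.02, 0.10, 0.35, 0.55, 0.80, 0.95, 0.99])

# Fit; a steeper k corresponds to a more distinct self-face
# representation in the sense used by the authors.
(x0, k), _ = curve_fit(logistic, morph_levels, p_self, p0=[50.0, 0.1])
print(f"50% point = {x0:.1f}% self, slope k = {k:.3f}")
```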
109
Hannula DE. Attention and long-term memory: Bidirectional interactions and their effects on behavior. Psychology of Learning and Motivation 2018. [DOI: 10.1016/bs.plm.2018.09.004]
110
Waddell ML, Amazeen EL. Lift speed moderates the effects of muscle activity on perceived heaviness. Q J Exp Psychol (Hove) 2018; 71:2174-2185. [DOI: 10.1177/1747021817739784]
Abstract
Research has shown that perceived heaviness is a function of the ratio of muscle activity (measured by electromyogram [EMG]) to the resulting acceleration of the object. However, objects will commonly be lifted at different speeds, implying variation in both EMG and acceleration. This study examined the effects of lifting speed by having participants report perceived heaviness for objects lifted by elbow flexion at three different speeds: slow, preferred, and fast. EMG and angular acceleration were recorded during these lifts. Both EMG and angular acceleration changed across lift speed. Nevertheless, despite these variations, perceived heaviness consistently scaled to the ratio of EMG to angular acceleration. The exponents on these parameters suggested that the saliency of muscle activity and movement changed across the three lift speeds.
Affiliation(s)
- Morgan L Waddell: Department of Psychology, Arizona State University, Tempe, AZ, USA.
- Eric L Amazeen: Department of Psychology, Arizona State University, Tempe, AZ, USA.
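Entry 110 describes perceived heaviness as scaling with the ratio of EMG to angular acceleration, with exponents that vary across lift speeds. A minimal sketch of how such exponents can be estimated, assuming a generic power-law model fitted by log-linear regression on synthetic data (not the authors' measurements or analysis pipeline):

```python
import numpy as np

# Hypothetical per-lift measurements: EMG amplitude, peak angular
# acceleration, and reported heaviness (arbitrary units).
rng = np.random.default_rng(0)
emg = rng.uniform(0.5, 2.0, 60)
accel = rng.uniform(1.0, 4.0, 60)
heaviness = 3.0 * emg**0.8 * accel**-0.6 * rng.lognormal(0.0, 0.05, 60)

# Power-law model heaviness = c * EMG^a * accel^b, fitted as a
# linear regression in log space; b < 0 recovers the ratio form.
X = np.column_stack([np.ones_like(emg), np.log(emg), np.log(accel)])
coef, *_ = np.linalg.lstsq(X, np.log(heaviness), rcond=None)
log_c, a, b = coef
print(f"c = {np.exp(log_c):.2f}, EMG exponent a = {a:.2f}, acceleration exponent b = {b:.2f}")
```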
111
Bennetts RJ, Mole J, Bate S. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills. Cogn Neuropsychol 2017; 34:357-376. [PMID: 29165028] [DOI: 10.1080/02643294.2017.1402755]
Abstract
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
Affiliation(s)
- Rachel J Bennetts: School of Biological and Chemical Sciences, Queen Mary University of London, London, UK.
- Joseph Mole: Oxford Doctoral Course in Clinical Psychology, University of Oxford, Oxford, UK.
- Sarah Bate: Department of Psychology, Bournemouth University, Poole, UK.
112
Della Longa L, Gliga T, Farroni T. Tune to touch: Affective touch enhances learning of face identity in 4-month-old infants. Dev Cogn Neurosci 2017; 35:42-46. [PMID: 29153656] [PMCID: PMC6347579] [DOI: 10.1016/j.dcn.2017.11.002]
Abstract
Touch provides more than sensory input for discrimination of what is on the skin. From early in development it has a rewarding and motivational value, which may reflect an evolutionary mechanism that promotes learning and affiliative bonding. In the present study we investigated whether affective touch helps infants tune to social signals, such as faces. Four-month-old infants were habituated to an individual face with averted gaze, which typically does not engage infants to the same extent as direct gaze does. As in a previous study, in the absence of touch, infants did not learn the identity of this face. Critically, 4-month-old infants did learn to discriminate this face when parents provided gentle stroking, but they did not when they experienced a non-social tactile stimulation. A preliminary follow-up eye-tracking study (Supplementary material) revealed no significant difference in the visual scanning of faces between touch and no-touch conditions, suggesting that affective touch may not affect the distribution of visual attention, but that it may promote more efficient learning of facial information.
Affiliation(s)
- Letizia Della Longa: Developmental Psychology and Socialization Department, Padua University, Italy.
- Teodora Gliga: Centre for Brain and Cognitive Development, Birkbeck, University of London, UK.
- Teresa Farroni: Developmental Psychology and Socialization Department, Padua University, Italy.
113
Sammaknejad N, Pouretemad H, Eslahchi C, Salahirad A, Alinejad A. Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing. Adv Cogn Psychol 2017; 13:232-240. [PMID: 29071007] [PMCID: PMC5648518] [DOI: 10.5709/acp-0223-1]
Abstract
Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of the viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants' eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were towards the eyes. Females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transient information of saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results.
Affiliation(s)
- Negar Sammaknejad: Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran.
- Hamidreza Pouretemad: Department of Psychology & Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran.
- Changiz Eslahchi: Department of Computer Sciences, Shahid Beheshti University and Institute for Research in Fundamental Sciences, Tehran, Iran.
- Alireza Salahirad: Department of Computer Sciences, University of South Carolina, Columbia.
- Ashkan Alinejad: Department of Computer Sciences, University of Tehran, Tehran, Iran.
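The transition probability matrix described in entry 113 can be computed directly from the sequence of ROI labels visited by successive fixations. A minimal sketch under assumed inputs (the ROI labels and the example scanpath are hypothetical; the study's exact ROI definitions and classifier are not reproduced here):

```python
import numpy as np

ROIS = ["left_eye", "right_eye", "nose", "mouth", "other"]
IDX = {roi: i for i, roi in enumerate(ROIS)}

def transition_matrix(fixation_rois):
    """First-order Markov transition probabilities between ROIs,
    estimated from one viewer's sequence of fixated ROI labels."""
    counts = np.zeros((len(ROIS), len(ROIS)))
    for src, dst in zip(fixation_rois[:-1], fixation_rois[1:]):
        counts[IDX[src], IDX[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical scanpath for one viewer; the flattened matrix could then
# serve as a feature vector for a gender classifier, as the study suggests.
scanpath = ["nose", "left_eye", "right_eye", "nose", "mouth", "left_eye"]
print(transition_matrix(scanpath).round(2))
```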
114
Berberat J, Montali M, Gruber P, Pircher A, Hlavica M, Wang F, Killer HP, Remonda L. Modulation of the Emotional Response to Viewing Strabismic Children in Mothers-Measured by fMRI. Clin Neuroradiol 2017; 29:87-94. [PMID: 28913609] [DOI: 10.1007/s00062-017-0625-5]
Abstract
PURPOSE Strabismus influences not only the individual with nonparallel eyes but also the observer. It has previously been demonstrated by fMRI that adults viewing images of strabismic adults have a negative reaction to the images as demonstrated by limbic activation, especially activation of the left amygdala. The aim of this study was to see if mothers would have a similar reaction to viewing strabismic children and whether or not that reaction would be different in mothers of strabismic children. METHODS Healthy mothers of children with strabismus (n = 10, Group I) and without strabismus (n = 15, Group II) voluntarily underwent fMRI at 3T. Blood oxygen level dependent signal responses to viewing images of strabismic and non-strabismic children were analyzed. RESULTS Group II, while viewing images of strabismic children, showed significantly increased activation of the limbic network (p < 0.05) and bilateral amygdala activation. Group I showed considerably less limbic activation, compared to Group II, and had no amygdala activation. Both groups revealed statistically significant activation in the FEF (frontal eye field) when they were viewing images of strabismic children as compared to when they were viewing children with parallel eyes. The activated FEF area for Group II was much larger than for Group I. CONCLUSION Mothers of non-strabismic children showed similar negative emotional fMRI patterns as adults did while viewing strabismic adults. Strabismus is an interpersonal organic issue for the observer, which also impacts the youngest members of our society.
Affiliation(s)
- J Berberat: Neuroradiology, Cantonal Hospital, Tellstrasse 25, 5001, Aarau, Switzerland.
- M Montali: Neuroradiology, Cantonal Hospital, Tellstrasse 25, 5001, Aarau, Switzerland.
- P Gruber: Neuroradiology, Cantonal Hospital, Tellstrasse 25, 5001, Aarau, Switzerland.
- A Pircher: Ophthalmology, Cantonal Hospital, Aarau, Switzerland.
- M Hlavica: Neuroradiology, Cantonal Hospital, Tellstrasse 25, 5001, Aarau, Switzerland.
- F Wang: Department of Ophthalmology, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Ophthalmology, New York Eye and Ear Infirmary of Mt. Sinai, New York, NY, USA.
- H P Killer: Ophthalmology, Cantonal Hospital, Aarau, Switzerland.
- L Remonda: Neuroradiology, Cantonal Hospital, Tellstrasse 25, 5001, Aarau, Switzerland.
115
Hidden Markov model analysis reveals the advantage of analytic eye movement patterns in face recognition across cultures. Cognition 2017; 169:102-117. [PMID: 28869811] [DOI: 10.1016/j.cognition.2017.08.003]
Abstract
It remains controversial whether culture modulates eye movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye movement patterns when viewing faces from different ethnicities. These inconsistencies may be due to substantial individual differences in eye movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye movement data analysis using hidden Markov models (HMMs). Each individual's eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation from culture. Significantly more participants (75%) showed similar eye movement patterns when viewing own- and other-race faces than different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. These results suggest that active retrieval of facial feature information through an analytic eye movement pattern may be optimal for face recognition regardless of culture.
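The individual-level HMM approach in entry 115 models each viewer's fixation sequence with a hidden Markov model whose states play the role of person-specific face regions. A minimal sketch of fitting such a model to one participant's fixation coordinates, assuming the third-party hmmlearn package and made-up data (the study's own toolbox and its clustering step are not reproduced here):

```python
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency (pip install hmmlearn)

# Hypothetical fixation (x, y) coordinates for one participant, pooled over
# trials around three face regions (values are illustrative pixel positions).
rng = np.random.default_rng(1)
regions = np.array([[200, 180], [300, 180], [250, 260]])  # left eye, right eye, center
fixations = np.vstack([rng.normal(regions[rng.integers(3)], 15, 2) for _ in range(60)])
trial_lengths = [20, 20, 20]  # fixations per trial

# One Gaussian-emission HMM per participant: hidden states act as
# data-driven regions of interest; transitions capture scanpath dynamics.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
model.fit(fixations, lengths=trial_lengths)

print("State centers (person-specific ROIs):\n", model.means_.round(1))
print("Transition matrix:\n", model.transmat_.round(2))
# Individual HMMs can then be clustered by similarity (e.g., cross
# log-likelihood) into holistic vs. analytic groups, as in the study.
```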
116
Lyyra P, Wirth JH, Hietanen JK. Are you looking my way? Ostracism widens the cone of gaze. Q J Exp Psychol (Hove) 2017; 70:1713-1721. [DOI: 10.1080/17470218.2016.1204327]
Abstract
Ostracized individuals demonstrate an increased need for belonging. To satisfy this need, they search for signals of inclusion, one of which may be another person's gaze directed at oneself. We tested if ostracized, compared to included, individuals judge a greater degree of averted gaze as still being direct. This range of gaze angles still viewed as direct has been dubbed “the cone of (direct) gaze”. In the current research, ostracized and included participants viewed friendly-looking face stimuli with direct or slightly averted gaze (0°, 2°, 4°, 6°, and 8° to the left and to the right) and judged whether stimulus persons were looking at them or not. Ostracized individuals demonstrated a wider gaze cone than included individuals.
Affiliation(s)
- Pessi Lyyra: Human Information Processing Laboratory, School of Social Sciences and Humanities/Psychology, University of Tampere, Tampere, Finland.
- James H. Wirth: Department of Psychology, The Ohio State University at Newark, Newark, OH, USA.
- Jari K. Hietanen: Human Information Processing Laboratory, School of Social Sciences and Humanities/Psychology, University of Tampere, Tampere, Finland.
117
Contributions of individual face features to face discrimination. Vision Res 2017; 137:29-39. [PMID: 28688904] [DOI: 10.1016/j.visres.2017.05.011]
Abstract
Faces are highly complex stimuli that contain a host of information. Such complexity poses the following questions: (a) do observers exhibit preferences for specific information? (b) how does sensitivity to individual face parts compare? These questions were addressed by quantifying sensitivity to different face features. Discrimination thresholds were determined for synthetic faces under the following conditions: (i) 'full face': all face features visible; (ii) 'isolated feature': single feature presented in isolation; (iii) 'embedded feature': all features visible, but only one feature modified. Mean threshold elevations for isolated features, relative to full-faces, were 0.84x, 1.08, 2.12, 3.34, 4.07 and 4.47 for head-shape, hairline, nose, mouth, eyes and eyebrows respectively. Hence, when two full faces can be discriminated at threshold, the difference between the eyes is about four times less than what is required when discriminating between isolated eyes. In all cases, sensitivity was higher when features were presented in isolation than when they were embedded within a face context (threshold elevations of 0.94x, 1.74, 2.67, 2.90, 5.94 and 9.94). This reveals a specific pattern of sensitivity to face information. Observers are between two and four times more sensitive to external than internal features. The pattern for internal features (higher sensitivity for the nose, compared to mouth, eyes and eyebrows) is consistent with lower sensitivity for those parts affected by facial dynamics (e.g. facial expressions). That isolated features are easier to discriminate than embedded features supports a holistic face processing mechanism which impedes extraction of information about individual features from full faces.
118
Abstract
Current interpretations of hippocampal memory function are blind to the fact that viewing behaviors are pervasive and complicate the relationships among perception, behavior, memory, and brain activity. For example, hippocampal activity and associative memory demands increase with stimulus complexity. Stimulus complexity also strongly modulates viewing. Associative processing and viewing thus are often confounded, rendering interpretation of hippocampal activity ambiguous. Similar considerations challenge many accounts of hippocampal function. To explain relationships between memory and viewing, we propose that the hippocampus supports the online memory demands necessary to guide visual exploration. The hippocampus thus orchestrates memory-guided exploration that unfolds over time to build coherent memories. This new perspective on hippocampal function harmonizes with the fact that memory formation and exploratory viewing are tightly intertwined.
119
Chuk T, Chan AB, Hsiao JH. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling. Vision Res 2017; 141:204-216. [PMID: 28435123] [DOI: 10.1016/j.visres.2017.03.010]
Abstract
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance.
Affiliation(s)
- Tim Chuk: Department of Psychology, University of Hong Kong, Hong Kong.
- Antoni B Chan: Department of Computer Science, City University of Hong Kong, Hong Kong.
- Janet H Hsiao: Department of Psychology, University of Hong Kong, Hong Kong.
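Entry 119 labels each participant's learning and recognition scanpaths as holistic or analytic. One way to do this, sketched below, is to score a fixation sequence against representative group HMMs and take the better-fitting one; the models here are fitted to synthetic data with the third-party hmmlearn package and are only placeholders for the study's actual group models.

```python
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency

rng = np.random.default_rng(2)

# Toy "representative" HMMs: one fitted on fixations clustered at the face
# center (holistic), one on fixations spread over the two eyes (analytic).
center = rng.normal([[250, 230]], 25, (60, 2))
eyes = np.vstack([rng.normal([[200, 180]], 20, (30, 2)),
                  rng.normal([[300, 180]], 20, (30, 2))])
holistic_hmm = hmm.GaussianHMM(n_components=2, random_state=0).fit(center)
analytic_hmm = hmm.GaussianHMM(n_components=2, random_state=0).fit(eyes)

def pattern_label(fixations, holistic_model, analytic_model):
    """Label a fixation sequence by the representative HMM that gives
    it the higher log-likelihood."""
    ll_h = holistic_model.score(fixations)
    ll_a = analytic_model.score(fixations)
    return ("analytic" if ll_a > ll_h else "holistic"), ll_a - ll_h

# A new (hypothetical) scanpath can be labeled separately for learning and
# recognition trials to test whether the pattern switches between phases.
new_scanpath = rng.normal([[250, 225]], 25, (12, 2))
print(pattern_label(new_scanpath, holistic_hmm, analytic_hmm))
```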
120
Hills PJ, Mileva M, Thompson C, Pake JM. Carryover of scanning behaviour affects upright face recognition differently to inverted face recognition. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1314399]
Affiliation(s)
- Peter J Hills: Department of Psychology, Bournemouth University, Dorset, UK.
- Mila Mileva: Department of Psychology, University of York, York, UK.
- J. Michael Pake: Department of Psychology, Anglia Ruskin University, Cambridge, UK.
121
Abstract
BACKGROUND There is a paucity of data describing male attitudes toward age-related changes to their facial features and associated preferences for prioritizing treatment. METHODS Injectable-naive but aesthetically oriented men aged 30 to 65 participated in an online study (N = 600). Respondents indicated how concerned they were by the appearance of 15 age-related facial features, and the Maximum Difference scaling system was used to explore which features were most likely to be prioritized for treatment. The correlation between the features of most concern and the areas of treatment priority was assessed. Other aspects regarding the male perspective on aesthetic procedures, such as awareness, motivating factors, and barriers, also were explored. RESULTS Crow's feet and tear troughs were rated as the most likely to be treated first (80% of first preferences) followed by forehead lines (74%), double chin (70%), and glabellar lines (60%). The areas of most concern in order were tear troughs, double chin, crow's feet, and forehead lines. There was a strong correlation between the features of most concern and the areas of treatment priority (r = 0.81). CONCLUSION The periorbital areas, in particular crow's feet and tear troughs, are of most concern and likely to be prioritized for treatment among aesthetically oriented men.
122
End A, Gamer M. Preferential Processing of Social Features and Their Interplay with Physical Saliency in Complex Naturalistic Scenes. Front Psychol 2017; 8:418. [PMID: 28424635] [PMCID: PMC5371661] [DOI: 10.3389/fpsyg.2017.00418]
Abstract
According to so-called saliency-based attention models, attention during free viewing of visual scenes is particularly allocated to physically salient image regions. In the present study, we assumed that social features in complex naturalistic scenes would be processed preferentially irrespective of their physical saliency. Therefore, we expected worse prediction of gazing behavior by saliency-based attention models when social information is present in the visual field. To test this hypothesis, participants freely viewed color photographs of complex naturalistic social (e.g., including heads, bodies) and non-social (e.g., including landscapes, objects) scenes while their eye movements were recorded. In agreement with our hypothesis, we found that social features (especially heads) were heavily prioritized during visual exploration. Correspondingly, the presence of social information weakened the influence of low-level saliency on gazing behavior. Importantly, this pattern was most pronounced for the earliest fixations indicating automatic attentional processes. These findings were further corroborated by a linear mixed model approach showing that social features (especially heads) add substantially to the prediction of fixations beyond physical saliency. Taken together, the current study indicates gazing behavior for naturalistic scenes to be better predicted by the interplay of social and physically salient features than by low-level saliency alone. These findings strongly challenge the generalizability of saliency-based attention models and demonstrate the importance of considering social influences when investigating the driving factors of human visual attention.
Affiliation(s)
- Albert End: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
- Matthias Gamer: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, Julius Maximilians University of Würzburg, Würzburg, Germany.
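The comparison in entry 122 hinges on how well a low-level saliency map predicts where people fixate. A common way to quantify this is a rank-based AUC that treats fixated pixels as positives; the sketch below uses a random synthetic map and hypothetical fixation coordinates rather than any saliency model from the study.

```python
import numpy as np

def saliency_auc(saliency_map, fixations_xy):
    """Rank-based AUC: probability that a fixated pixel is more salient
    than a randomly chosen pixel of the same map (0.5 = chance)."""
    sal = np.asarray(saliency_map, dtype=float)
    fix_vals = np.array([sal[y, x] for x, y in fixations_xy])
    all_vals = sal.ravel()
    greater = (fix_vals[:, None] > all_vals[None, :]).mean()
    ties = (fix_vals[:, None] == all_vals[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical 100x100 saliency map and fixation coordinates (x, y).
rng = np.random.default_rng(3)
smap = rng.random((100, 100))
fixations = [(10, 20), (55, 60), (80, 15)]
print(f"AUC = {saliency_auc(smap, fixations):.2f}")  # about 0.5 for a random map
```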
123
Faces elicit different scanning patterns depending on task demands. Atten Percept Psychophys 2017; 79:1050-1063. [DOI: 10.3758/s13414-017-1284-y]
124
Laidlaw KEW, Kingstone A. Fixations to the eyes aids in facial encoding; covertly attending to the eyes does not. Acta Psychol (Amst) 2017; 173:55-65. [PMID: 28012434] [DOI: 10.1016/j.actpsy.2016.11.009]
Abstract
When looking at images of faces, people will often focus their fixations on the eyes. It has previously been demonstrated that the eyes convey important information that may improve later facial recognition. Whether this advantage requires that the eyes be fixated, or merely attended to covertly (i.e. while looking elsewhere), is unclear from previous work. While attending to the eyes covertly without fixating them may be sufficient, the act of using overt attention to fixate the eyes may improve the processing of important details used for later recognition. In the present study, participants were shown a series of faces and, in Experiment 1, asked to attend to them normally while avoiding looking at either the eyes or, as a control, the mouth (overt attentional avoidance condition); or in Experiment 2 fixate the center of the face while covertly attending to either the eyes or the mouth (covert attention condition). After the first phase, participants were asked to perform an old/new face recognition task. We demonstrate that a) when fixations to the eyes are avoided during initial viewing then subsequent face discrimination suffers, and b) covert attention to the eyes alone is insufficient to improve face discrimination performance. Together, these findings demonstrate that fixating the eyes provides an encoding advantage that is not availed by covert attention alone.
Affiliation(s)
- Kaitlin E W Laidlaw: Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, British Columbia V6T 1Z4, Canada.
- Alan Kingstone: Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, British Columbia V6T 1Z4, Canada.
125
Liu ZX, Shen K, Olsen RK, Ryan JD. Visual Sampling Predicts Hippocampal Activity. J Neurosci 2017; 37:599-609. [PMID: 28100742] [PMCID: PMC6596763] [DOI: 10.1523/jneurosci.2610-16.2016]
Abstract
Eye movements serve to accumulate information from the visual world, contributing to the formation of coherent memory representations that support cognition and behavior. The hippocampus and the oculomotor network are well connected anatomically through an extensive set of polysynaptic pathways. However, the extent to which visual sampling behavior is related to functional responses in the hippocampus during encoding has not been studied directly in human neuroimaging. In the current study, participants engaged in a face processing task while brain responses were recorded with fMRI and eye movements were monitored simultaneously. The number of gaze fixations that a participant made on a given trial was correlated significantly with hippocampal activation such that more fixations were associated with stronger hippocampal activation. Similar results were also found in the fusiform face area, a face-selective perceptual processing region. Notably, the number of fixations was associated with stronger hippocampal activation when the presented faces were novel, but not when the faces were repeated. Increases in fixations during viewing of novel faces also led to larger repetition-related suppression in the hippocampus, indicating that this fixation-hippocampal relationship may reflect the ongoing development of lasting representations. Together, these results provide novel empirical support for the idea that visual exploration and hippocampal binding processes are inherently linked. SIGNIFICANCE STATEMENT The hippocampal and oculomotor networks have each been studied extensively for their roles in the binding of information and gaze function, respectively. Despite the evidence that individuals with amnesia whose damage includes the hippocampus show alterations in their eye movement patterns and recent findings that the two systems are anatomically connected, it has not been demonstrated whether visual exploration is related to hippocampal activity in neurologically intact adults. In this combined fMRI-eye-tracking study, we show how hippocampal responses scale with the number of gaze fixations made during viewing of novel, but not repeated, faces. These findings provide new evidence suggesting that the hippocampus plays an important role in the binding of information, as sampled by gaze fixations, during visual exploration.
Affiliation(s)
- Zhong-Xu Liu: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1.
- Kelly Shen: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1.
- Rosanna K Olsen: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Toronto, Ontario, Canada M5S 3G3.
- Jennifer D Ryan: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada M6A 2E1; Department of Psychology and Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada M5S 3G3.
126
Liberati A, Fadda R, Doneddu G, Congiu S, Javarone MA, Striano T, Chessa A. A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder. Perception 2017; 46:889-913. [PMID: 28056653] [DOI: 10.1177/0301006616685976]
Abstract
This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and clustering of gaze points, registered with eye-tracking technology, was studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen to overcome any possible methodological problems related to the subjective expectations of the experimenters about the informative contents of the image in addition to a computational model to simulate group differences. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
Affiliation(s)
- Alessio Liberati: Department of Physics, University of Cagliari, Complesso Universitario di Monserrato, Italy.
- Roberta Fadda: Department of Pedagogy, Psychology, Philosophy, University of Cagliari, Italy.
- Giuseppe Doneddu: Center for Pervasive Developmental Disorders, Azienda Ospedaliera Brotzu, Cagliari, Italy.
- Sara Congiu: Center for Pervasive Developmental Disorders, Azienda Ospedaliera Brotzu, Cagliari, Italy.
- Marco A Javarone: DUMAS-Department of Human and Social Sciences, University of Sassari, Italy.
- Tricia Striano: Department of Psychology, Hunter College, New York, NY, USA.
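The Lévy-flight characterization in entry 126 rests on the heavy-tailed distribution of saccade amplitudes. A quick way to inspect such a tail is a maximum-likelihood (Hill) estimate of a power-law exponent above a chosen cutoff; the sketch below runs on synthetic amplitudes and is not the authors' analysis pipeline.

```python
import numpy as np

def powerlaw_exponent(amplitudes, amin):
    """Maximum-likelihood (Hill) estimate of the exponent mu for
    P(a) ~ a^(-mu) over amplitudes a >= amin."""
    a = np.asarray(amplitudes, dtype=float)
    tail = a[a >= amin]
    return 1.0 + len(tail) / np.sum(np.log(tail / amin))

# Synthetic saccade amplitudes (degrees) with a Pareto-like tail; a heavier
# tail (smaller mu) means more frequent long saccades, which is the group
# difference reported for children with ASD.
rng = np.random.default_rng(4)
amps = 1.0 + rng.pareto(a=1.8, size=2000)  # true exponent mu = 2.8
print(f"Estimated exponent mu = {powerlaw_exponent(amps, amin=1.0):.2f}")
```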
127
Methods Investigating How Individuals with Autism Spectrum Disorder Spontaneously Attend to Social Events. Review Journal of Autism and Developmental Disorders 2016. [DOI: 10.1007/s40489-016-0099-4]
128
Mitchell TV. Category selectivity of the N170 and the role of expertise in deaf signers. Hear Res 2016; 343:150-161. [PMID: 27770622] [DOI: 10.1016/j.heares.2016.10.010]
Abstract
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
Affiliation(s)
- Teresa V Mitchell: Eunice Kennedy Shriver Center, University of Massachusetts Medical School, Worcester, MA, USA; Brandeis University, Waltham, MA, USA.
129
Chelnokova O, Laeng B, Løseth G, Eikemo M, Willoch F, Leknes S. The µ-opioid system promotes visual attention to faces and eyes. Soc Cogn Affect Neurosci 2016; 11:1902-1909. [PMID: 27531386] [DOI: 10.1093/scan/nsw116]
Abstract
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism would decrease, overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest.
Affiliation(s)
- Olga Chelnokova: Department of Psychology, University of Oslo, Oslo N-0317, Norway.
- Bruno Laeng: Department of Psychology, University of Oslo, Oslo N-0317, Norway.
- Guro Løseth: Department of Psychology, University of Oslo, Oslo N-0317, Norway.
- Marie Eikemo: Department of Psychology, University of Oslo, Oslo N-0317, Norway; Norwegian Center for Addiction Research, University of Oslo, Oslo N-0318, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo N-0318, Norway.
- Frode Willoch: Department of Medicine, University of Oslo, Oslo N-0316, Norway.
- Siri Leknes: Department of Psychology, University of Oslo, Oslo N-0317, Norway; Department of Medicine, University of Oslo, Oslo N-0316, Norway; The Intervention Centre, Oslo University Hospital, Oslo N-0424, Norway.
130
Pancaroglu R, Hills CS, Sekunova A, Viswanathan J, Duchaine B, Barton JJS. Seeing the eyes in acquired prosopagnosia. Cortex 2016; 81:251-265. [PMID: 27288649] [DOI: 10.1016/j.cortex.2016.04.024]
Abstract
Case reports have suggested that perception of the eye region may be impaired more than that of other facial regions in acquired prosopagnosia. However, it is unclear how frequently this occurs, whether such impairments are specific to a certain anatomic subtype of prosopagnosia, and whether these impairments are related to changes in the scanning of faces. We studied a large cohort of 11 subjects with this rare disorder, who had a variety of occipitotemporal or anterior temporal lesions, both unilateral and bilateral. Lesions were characterized by functional and structural imaging. Subjects performed a perceptual discrimination test in which they had to discriminate changes in feature position, shape, or external contour. Test conditions were manipulated to stress focused or divided attention across the whole face. In a second experiment we recorded eye movements while subjects performed a face memory task. We found that greater impairment for eye processing was more typical of subjects with occipitotemporal lesions than those with anterior temporal lesions. This eye selectivity was evident for both eye position and shape, with no evidence of an upper/lower difference for external contour. A greater impairment for eye processing was more apparent under attentionally more demanding conditions. Despite these perceptual deficits, most subjects showed a normal tendency to scan the eyes more than the mouth. We conclude that occipitotemporal lesions are associated with a partially selective processing loss for eye information and that this deficit may be linked to loss of the right fusiform face area, which has been shown to have activity patterns that emphasize the eye region.
Affiliation(s)
- Raika Pancaroglu: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Charlotte S Hills: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Alla Sekunova: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Jayalakshmi Viswanathan: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- Brad Duchaine: Department of Psychology, Dartmouth University, Dartmouth, USA.
- Jason J S Barton: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
131
Martin-Malivel J, Mangini MC, Fagot J, Biederman I. Do Humans and Baboons Use the Same Information When Categorizing Human and Baboon Faces? Psychol Sci 2006; 17:599-607. [PMID: 16866746] [DOI: 10.1111/j.1467-9280.2006.01751.x]
Abstract
What information is used for sorting pictures of complex stimuli into categories? We applied a reverse correlation method to reveal the visual features mediating categorization in humans and baboons. Two baboons and 6 humans were trained to sort, by species, pictures of human and baboon faces on which random visual noise was superimposed. On ambiguous probe trials, a human-baboon morph was presented, eliciting “human” responses on some trials and “baboon” responses on others. The difference between the noise patterns that induced the two responses made explicit the information mediating the classification. Unlike the humans, the baboons based their categorization on information that closely matched that used by a theoretical observer responding solely on the basis of the pixel similarities between the probe and training images. We show that the classification-image technique and principal components analysis provide a method to make explicit the differences in the information mediating categorization in humans and animals.
Affiliation(s)
- Julie Martin-Malivel: Department of Psychology, Neuroscience Program, University of Southern California, CA, USA.
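The reverse correlation (classification image) technique summarized in entry 131 can be illustrated in a few lines: the classification image is the difference between the average noise fields that accompanied each response category. A minimal sketch with simulated noise trials and a hypothetical observer (not the study's stimuli, observers, or data):

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Classification image: mean noise on 'human' trials minus mean
    noise on 'baboon' trials, revealing the pixels that drove the choice."""
    noise = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses)
    return (noise[responses == "human"].mean(axis=0)
            - noise[responses == "baboon"].mean(axis=0))

# Simulated experiment: on ambiguous (morph) trials only the superimposed
# noise varies, so responses correlate with the noise itself. Here a
# hypothetical observer says "human" when the upper-left patch is bright.
rng = np.random.default_rng(5)
noise_trials = rng.normal(size=(500, 32, 32))
resp = np.where(noise_trials[:, :16, :16].mean(axis=(1, 2)) > 0, "human", "baboon")
ci = classification_image(noise_trials, resp)
print("Brightest region of the classification image:",
      np.unravel_index(ci.argmax(), ci.shape))
```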
132
Abstract
An important development in cognitive psychology in the past decade has been the examination of visual attention during real social interaction. This contrasts with traditional laboratory studies of attention, including “social attention,” in which observers perform tasks alone. In this review, we show that although the lone-observer method has been central to attention research, real person interaction paradigms have not only uncovered the processes that occur during “joint attention,” but have also revealed attentional processes previously thought not to occur. Furthermore, the examination of some visual attention processes almost invariably requires the use of real person paradigms. While we do not argue for an increase in “ecological validity” for its own sake, we do suggest that research using real person interaction has greatly benefited the development of visual attention theories.
Affiliation(s)
- Gustav Kuhn: Department of Psychology, University of London, UK.
133
Bobak AK, Parris BA, Gregory NJ, Bennetts RJ, Bate S. Eye-movement strategies in developmental prosopagnosia and "super" face recognition. Q J Exp Psychol (Hove) 2016; 70:201-217. [PMID: 26933872] [DOI: 10.1080/17470218.2016.1161059]
Abstract
Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose, a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.
Collapse
Affiliation(s)
- Anna K Bobak
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Benjamin A Parris
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Nicola J Gregory
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Rachel J Bennetts
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Sarah Bate
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
Collapse
|
134
|
Abstract
Hippocampal sharp-wave ripples (SWRs) are highly synchronous oscillatory field potentials that are thought to facilitate memory consolidation. SWRs typically occur during quiescent states, when neural activity reflecting recent experience is replayed. In rodents, SWRs also occur during brief locomotor pauses in maze exploration, where they appear to support learning during experience. In this study, we detected SWRs that occurred during quiescent states, but also during goal-directed visual exploration in nonhuman primates (Macaca mulatta). The exploratory SWRs showed peak frequency bands similar to those of quiescent SWRs, and both types were inhibited at the onset of their respective behavioral epochs. In apparent contrast to rodent SWRs, these exploratory SWRs occurred during active periods of exploration, e.g., while animals searched for a target object in a scene. SWRs were associated with smaller saccades and longer fixations. Also, when they coincided with target-object fixations during search, detection was more likely than when these events were decoupled. Although we observed high gamma-band field potentials of similar frequency to SWRs, only the SWRs accompanied greater spiking synchrony in neural populations. These results reveal that SWRs are not limited to off-line states as conventionally defined; rather, they occur during active and informative performance windows. The exploratory SWR in primates is an infrequent occurrence associated with active, attentive performance, which may indicate a new, extended role of SWRs during exploration in primates. SIGNIFICANCE STATEMENT Sharp-wave ripples (SWRs) are high-frequency oscillations that generate highly synchronized activity in neural populations. Their prevalence in sleep and quiet wakefulness, and the memory deficits that result from their interruption, suggest that SWRs contribute to memory consolidation during rest. Here, we report that SWRs from the monkey hippocampus occur not only during behavioral inactivity but also during successful visual exploration. SWRs were associated with attentive, focal search and appeared to enhance perception of locations viewed around the time of their occurrence. SWRs occurring in rest are noteworthy for their relation to heightened neural population activity, temporally precise and widespread synchronization, and memory consolidation; therefore, the SWRs reported here may have a similar effect on neural populations, even as experiences unfold.
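Ripple detection of the kind referred to above is commonly implemented as band-pass filtering of the hippocampal field potential followed by thresholding of its envelope. The sketch below shows that generic pipeline; the 100-250 Hz band, 3-SD threshold, and 20-ms minimum duration are conventional illustrative defaults, not the parameters of this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(100.0, 250.0), thresh_sd=3.0, min_dur=0.02):
    """Generic sharp-wave-ripple detector (illustrative parameters).

    lfp : 1-D local field potential trace (arbitrary units)
    fs  : sampling rate in Hz
    Returns a list of (start_time, end_time) tuples in seconds.
    """
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, lfp)
    envelope = np.abs(hilbert(filtered))            # instantaneous amplitude
    z = (envelope - envelope.mean()) / envelope.std()
    above = z > thresh_sd
    # Find contiguous supra-threshold stretches
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return [(s / fs, e / fs) for s, e in zip(starts, ends)
            if (e - s) / fs >= min_dur]
```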
Collapse
|
135
|
Arizpe J, Kravitz DJ, Walsh V, Yovel G, Baker CI. Differences in Looking at Own- and Other-Race Faces Are Subtle and Analysis-Dependent: An Account of Discrepant Reports. PLoS One 2016; 11:e0148253. [PMID: 26849447 PMCID: PMC4744017 DOI: 10.1371/journal.pone.0148253] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2015] [Accepted: 01/16/2016] [Indexed: 12/04/2022] Open
Abstract
The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses, reflecting how, in certain contexts, AOI analyses can be more sensitive in detecting differential fixation patterns than spatial density analyses, due to the spatial pooling of data within AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis.
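The two analysis approaches contrasted here can be sketched side by side: an AOI analysis pools fixations into predefined regions, whereas a spatial density analysis smooths them into a continuous map. The rectangle coordinates, image size, and automatic bandwidth below are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Fixation coordinates in image pixels: shape (n_fixations, 2) as (x, y)
fixations = np.random.default_rng(1).uniform(0, 512, size=(200, 2))

# AOI approach: pool fixations into hand-drawn rectangles (placeholder boxes)
aois = {"left_eye": (120, 160, 200, 230),    # (x_min, y_min, x_max, y_max)
        "right_eye": (310, 160, 390, 230),
        "mouth": (200, 360, 310, 420)}

def aoi_counts(fix, aois):
    counts = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = (fix[:, 0] >= x0) & (fix[:, 0] <= x1) & \
                 (fix[:, 1] >= y0) & (fix[:, 1] <= y1)
        counts[name] = int(inside.sum())
    return counts

# Spatial density approach: kernel-smoothed fixation map on a pixel grid
def density_map(fix, size=512):
    kde = gaussian_kde(fix.T)                 # bandwidth chosen automatically
    xs, ys = np.mgrid[0:size, 0:size]
    return kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(size, size)

print(aoi_counts(fixations, aois))
print(density_map(fixations).shape)           # (512, 512)
```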
Collapse
Affiliation(s)
- Joseph Arizpe
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Applied Cognitive Neuroscience Group, Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Department of Neurology, University of Tennessee Health Science Center, Memphis, Tennessee, United States of America
- Le Bonheur Children’s Hospital, Memphis, Tennessee, United States of America
- Dwight J. Kravitz
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Department of Psychology, The George Washington University, Washington, D.C., United States of America
- Vincent Walsh
- Applied Cognitive Neuroscience Group, Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Galit Yovel
- Department of Psychology, Tel Aviv University, Tel Aviv, Israel
- Chris I. Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
Collapse
|
136
|
Bobak AK, Dowsett AJ, Bate S. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills. PLoS One 2016; 11:e0148148. [PMID: 26829321 PMCID: PMC4735453 DOI: 10.1371/journal.pone.0148148] [Citation(s) in RCA: 64] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 01/13/2016] [Indexed: 11/21/2022] Open
Abstract
Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so-called “super recognisers” (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the “Glasgow Face Matching Test”, and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the “Models Face Matching Test”. Once again, SRs outperformed controls both at the group level and in most case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies.
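Case-by-case comparisons of the kind mentioned here are often carried out with Crawford and Howell's modified t-test, which compares a single case against a small control sample. The sketch below shows that general approach as an illustration only; the abstract does not specify the exact procedure used, and the example scores are hypothetical.

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Crawford & Howell (1998) test comparing one case to a control sample.

    Treats the controls as a sample rather than a population, the usual
    choice for small control groups. Returns (t, two-tailed p).
    Shown as one common case-by-case method; the study itself may have
    used a different procedure.
    """
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Hypothetical example: a super recogniser scoring 95% against 20 controls
t, p = crawford_howell(0.95, np.random.default_rng(2).normal(0.80, 0.06, 20))
print(round(t, 2), round(p, 4))
```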
Collapse
Affiliation(s)
- Anna Katarzyna Bobak
- Psychology Research Centre, Faculty of Science and Technology, Bournemouth University, Poole, Dorset, United Kingdom
- Sarah Bate
- Psychology Research Centre, Faculty of Science and Technology, Bournemouth University, Poole, Dorset, United Kingdom
Collapse
|
137
|
Kleberg JL, Selbing I, Lundqvist D, Hofvander B, Olsson A. Spontaneous eye movements and trait empathy predict vicarious learning of fear. Int J Psychophysiol 2015; 98:577-83. [DOI: 10.1016/j.ijpsycho.2015.04.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2014] [Revised: 03/02/2015] [Accepted: 04/08/2015] [Indexed: 01/16/2023]
|
138
|
Proietti V, Macchi Cassia V, dell'Amore F, Conte S, Bricolo E. Visual scanning behavior is related to recognition performance for own- and other-age faces. Front Psychol 2015; 6:1684. [PMID: 26579056 PMCID: PMC4630505 DOI: 10.3389/fpsyg.2015.01684] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2015] [Accepted: 10/19/2015] [Indexed: 11/13/2022] Open
Abstract
It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition, which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated with discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition.
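Discriminability in an old/new recognition task of this kind is typically indexed by d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the hypothetical counts and the log-linear correction are illustrative choices, not taken from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' for an old/new recognition task.

    Applies the log-linear correction (add 0.5 to each cell) so that perfect
    hit or zero false-alarm rates do not yield infinite z-scores. The
    correction is a common convention, not necessarily the one used above.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for own-age faces: 40 old and 40 new test items
print(round(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30), 2))
```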
Collapse
Affiliation(s)
- Valentina Proietti
- Department of Psychology, Brock University, St. Catharines, ON, Canada; NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Viola Macchi Cassia
- NeuroMI, Milan Center for Neuroscience, Milan, Italy; Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Stefania Conte
- NeuroMI, Milan Center for Neuroscience, Milan, Italy; Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Emanuela Bricolo
- NeuroMI, Milan Center for Neuroscience, Milan, Italy; Department of Psychology, University of Milano-Bicocca, Milan, Italy
Collapse
|
139
|
Abstract
Visual scanning of faces in individuals with Autism Spectrum Disorder (ASD) has been intensively studied using eye-tracking technology. However, most studies have relied on the same analytic approach based on the quantification of fixation time, which may have failed to reveal some important features of the scanning strategies employed by individuals with ASD. In the present study, we examined the scanning of faces in a group of 20 preschoolers with ASD and their typically developing (TD) peers, using both the classical fixation-time approach and a newly developed approach based on transition matrices and network analysis. We found between-group differences in the eye region in terms of fixation time, with increased right eye fixation time for the ASD group and increased left eye fixation time for the TD group. Our complementary network approach revealed that the left eye might play the role of an anchor in the scanning strategies of TD children but not in those of children with ASD. In ASD, fixation time on the different facial parts was almost exclusively dependent on exploratory activity. Our study highlights the importance of developing innovative measures that bear the potential of revealing new properties of the scanning strategies employed by individuals with ASD.
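The transition-matrix idea described above can be sketched as follows: label each fixation with the facial region it falls in, count transitions between consecutive fixations, and normalize rows into conditional probabilities; simple network-style indices (for example, how often a region receives transitions from the others) can then be read off the matrix. The region labels and example scan path below are hypothetical.

```python
import numpy as np

REGIONS = ["left_eye", "right_eye", "nose", "mouth", "other"]

def transition_matrix(fixation_labels):
    """Row-normalized matrix of transitions between consecutively fixated regions.

    fixation_labels : sequence of region names, one per fixation, in viewing order.
    Entry [i, j] estimates P(next fixation lands on region j | current region i).
    """
    idx = {r: i for i, r in enumerate(REGIONS)}
    counts = np.zeros((len(REGIONS), len(REGIONS)))
    for a, b in zip(fixation_labels[:-1], fixation_labels[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        probs = np.where(row_sums > 0, counts / row_sums, 0.0)
    return probs

# Hypothetical scan path; the column sums of the raw transition counts give a
# rough "anchor" index of how much each region attracts fixations from others.
scanpath = ["nose", "left_eye", "mouth", "left_eye", "right_eye", "left_eye"]
print(np.round(transition_matrix(scanpath), 2))
```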
Collapse
|
140
|
Pizzamiglio MR, De Luca M, Di Vita A, Palermo L, Tanzilli A, Dacquino C, Piccardi L. Congenital prosopagnosia in a child: Neuropsychological assessment, eye movement recordings and training. Neuropsychol Rehabil 2015; 27:369-408. [DOI: 10.1080/09602011.2015.1084335] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
141
|
Hansen BC, Rakhshan PJ, Ho AK, Pannasch S. Looking at others through implicitly or explicitly prejudiced eyes. VISUAL COGNITION 2015. [DOI: 10.1080/13506285.2015.1063554] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
142
|
Mandel A, Helokunnas S, Pihko E, Hari R. Brain responds to another person's eye blinks in a natural setting-the more empathetic the viewer the stronger the responses. Eur J Neurosci 2015; 42:2508-14. [PMID: 26132210 DOI: 10.1111/ejn.13011] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2015] [Revised: 06/24/2015] [Accepted: 06/28/2015] [Indexed: 12/30/2022]
Abstract
An observer's brain is known to respond to another person's small nonverbal signals, such as gaze shifts and eye blinks. Here we aimed to find out how an observer's brain reacts to a speaker's eye blinks in the presence of other audiovisual information. Magnetoencephalographic brain responses along with eye gaze were recorded from 13 adults who watched a video of a person telling a story. The video was presented first without sound (visual), then with sound (audiovisual), and finally the audio story was presented with a still-frame picture on the screen (audio control). The viewers mainly gazed at the eye region of the speaker. Their saccades were suppressed at about 180 ms after the start of the speaker's blinks, with a subsequent increase in saccade occurrence to the base level, or higher, at around 340 ms. The suppression occurred in visual and audiovisual conditions but not during the control audio presentation. Prominent brain responses to blinks peaked in the viewer's occipital cortex at about 250 ms, with no differences in mean peak amplitudes or latencies between visual and audiovisual conditions. During the audiovisual, but not visual-only, presentation, the responses were stronger the more empathetic the subject was, according to the Empathic Concern score of the Interpersonal Reactivity Index questionnaire (Spearman's rank correlation, 0.73). The other person's eye blinks, nonverbal signs that often go unnoticed, thus elicited clear brain responses even in the presence of attention-attracting audiovisual information from the narrative, with stronger responses in people with higher empathy scores.
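The saccade-suppression analysis summarized above can be illustrated as a peri-event time histogram: saccade onsets are aligned to the speaker's blink onsets and binned to give a rate over time. The window, bin width, and simulated data below are illustrative defaults, not the parameters of this study.

```python
import numpy as np

def peri_event_rate(event_times, align_times, window=(-0.5, 1.0), bin_width=0.02):
    """Rate of events (e.g., viewer saccade onsets) around alignment points
    (e.g., speaker blink onsets), in events per second. Times in seconds.
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(edges.size - 1)
    for t0 in align_times:
        rel = np.asarray(event_times) - t0          # times relative to this blink
        counts += np.histogram(rel, bins=edges)[0]
    rate = counts / (len(align_times) * bin_width)
    return edges[:-1] + bin_width / 2, rate

# Hypothetical data: saccades roughly every 300 ms, 40 blink onsets
rng = np.random.default_rng(5)
saccades = np.cumsum(rng.exponential(0.3, size=2000))
blinks = np.sort(rng.uniform(0, saccades[-1], size=40))
centres, rate = peri_event_rate(saccades, blinks)
print(rate.round(1)[:5])
```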
Collapse
Affiliation(s)
- Anne Mandel
- Brain Research Unit, Department of Neuroscience and Biomedical Engineering, and MEG Core, Aalto NeuroImaging, Aalto University, P.O. Box 15100, 00076, Aalto, Finland
- Siiri Helokunnas
- Brain Research Unit, Department of Neuroscience and Biomedical Engineering, and MEG Core, Aalto NeuroImaging, Aalto University, P.O. Box 15100, 00076, Aalto, Finland
- Elina Pihko
- Brain Research Unit, Department of Neuroscience and Biomedical Engineering, and MEG Core, Aalto NeuroImaging, Aalto University, P.O. Box 15100, 00076, Aalto, Finland
- Riitta Hari
- Brain Research Unit, Department of Neuroscience and Biomedical Engineering, and MEG Core, Aalto NeuroImaging, Aalto University, P.O. Box 15100, 00076, Aalto, Finland
Collapse
|
143
|
Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task. Atten Percept Psychophys 2015; 77:536-50. [PMID: 25287618 DOI: 10.3758/s13414-014-0778-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
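Band-pass filtering in cycles per face, as used in this study, can be sketched with an FFT-based filter in which radial spatial frequency is expressed in cycles per image (equal to cycles per face when the face spans the image). The cut-off values and image size below are placeholders; the study itself used 11 narrow bands.

```python
import numpy as np

def bandpass_cycles_per_face(image, low_cpf, high_cpf):
    """Keep only spatial frequencies between low_cpf and high_cpf cycles/face.

    image : square 2-D array whose width is treated as one "face width",
            so frequency in cycles/face equals cycles per image.
    """
    image = np.asarray(image, dtype=float)
    n = image.shape[0]
    f = np.fft.fftfreq(n) * n                   # frequencies in cycles/image
    fy, fx = np.meshgrid(f, f, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)         # radial frequency (cycles/face)
    mask = (radius >= low_cpf) & (radius <= high_cpf)
    spectrum = np.fft.fft2(image) * mask
    return np.real(np.fft.ifft2(spectrum))

# Example: isolate a "middle" band (~5-20 cycles/face) of a random image
face = np.random.default_rng(3).normal(size=(256, 256))
print(bandpass_cycles_per_face(face, 5, 20).shape)
```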
Collapse
|
144
|
Noiret N, Carvalho N, Laurent É, Vulliez L, Bennabi D, Chopard G, Haffen E, Nicolier M, Monnin J, Vandel P. Visual scanning behavior during processing of emotional faces in older adults with major depression. Aging Ment Health 2015; 19:264-73. [PMID: 24954009 DOI: 10.1080/13607863.2014.926473] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
OBJECTIVES Although several reported studies have suggested that younger adults with depression display depression-related biases during the processing of emotional faces, there remains a lack of data concerning these biases in older adults. The aim of our study was to assess scanning behavior during the processing of emotional faces in depressed older adults. METHOD Older adults with and without depression viewed happy, neutral, or sad portraits while their eye movements were recorded. RESULTS Depressed older adults spent less time, with fewer fixations, on emotional features than healthy older adults, but only for sad and neutral portraits, with no significant difference for happy portraits. CONCLUSION These results suggest disengagement from sad and neutral faces in depressed older adults, which is not consistent with standard theoretical proposals on congruence biases in depression. Also, aging and the associated changes in emotional regulation may explain the expression of depression-related biases. Our preliminary results suggest that information processing in depression is a more complex phenomenon than merely a general search for mood-congruent stimuli or a general disengagement from all kinds of stimuli. These findings underline that care must be taken when evaluating potential variables, such as aging, which interact with depression and selectively influence the choice of relevant stimulus dimensions.
Collapse
Affiliation(s)
- Nicolas Noiret
- Laboratory of Psychology EA 3188, University of Franche-Comté, Besançon, France
Collapse
|
145
|
Amestoy A, Guillaud E, Bouvard MP, Cazalets JR. Developmental changes in face visual scanning in autism spectrum disorder as assessed by data-based analysis. Front Psychol 2015; 6:989. [PMID: 26236264 PMCID: PMC4503892 DOI: 10.3389/fpsyg.2015.00989] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2015] [Accepted: 06/29/2015] [Indexed: 11/25/2022] Open
Abstract
Individuals with autism spectrum disorder (ASD) present reduced visual attention to faces. However, contradictory conclusions have been drawn about the strategies involved in visual face scanning, owing to the various methodologies implemented in the study of face scanning. Here, we used a data-driven approach to compare children and adults with ASD subjected to the same free viewing task and to address developmental aspects of face scanning, including its temporal patterning, in healthy children and adults. Four groups (54 subjects) were included in the study: typical adults, typically developing children, and adults and children with ASD. Eye tracking was performed on subjects viewing unfamiliar faces. Fixations were analyzed using a data-driven approach that employed spatial statistics to provide an objective, unbiased definition of the areas of interest. Typical adults expressed a spatial and temporal strategy for visual scanning that differed from the three other groups, involving a sequential fixation of the right eye (RE), left eye (LE), and mouth. Typically developing children, and adults and children with ASD, exhibited similar fixation patterns, and all three groups started by looking at the RE. Children (typical or with ASD) subsequently looked at the LE or the mouth. Based on the present results, the patterns of fixation for static faces that mature from childhood to adulthood in typical subjects are not found in adults with ASD. Despite developmental progression and experience, the fixation patterns of the ASD groups appear to remain in an immature state that cannot be differentiated from the typical fixation patterns of children.
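One way to make a data-driven definition of areas of interest concrete is to cluster the pooled fixation coordinates and treat the resulting clusters as AOIs. The k-means procedure and the choice of four regions below are assumptions for illustration; they are not the spatial statistics actually used in the study.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def data_driven_aois(fixations, n_regions=4):
    """Define AOIs from the data by clustering pooled fixation coordinates.

    fixations : array (n_fixations, 2) of (x, y) positions pooled across
                subjects for one stimulus.
    Returns (centroids, labels): each fixation is assigned to the nearest
    cluster centre, and the clusters play the role of data-driven AOIs.
    """
    fixations = np.asarray(fixations, dtype=float)
    centroids, labels = kmeans2(fixations, n_regions, minit="++")
    return centroids, labels

rng = np.random.default_rng(4)
# Hypothetical fixations clustered loosely around two eyes, nose, and mouth
fix = np.vstack([rng.normal(c, 15, size=(50, 2))
                 for c in [(180, 200), (330, 200), (255, 280), (255, 380)]])
centres, labels = data_driven_aois(fix, n_regions=4)
print(np.round(centres))
```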
Collapse
Affiliation(s)
- Anouck Amestoy
- Department of Child and Adolescent Psychiatry, Charles Perrens Hospital, Université de Bordeaux, Bordeaux, France
- CNRS UMR 5287, Institut de Neurosciences Cognitives et Intégratives d’Aquitaine, Université de Bordeaux, Bordeaux, France
- Etienne Guillaud
- CNRS UMR 5287, Institut de Neurosciences Cognitives et Intégratives d’Aquitaine, Université de Bordeaux, Bordeaux, France
- Manuel P. Bouvard
- Department of Child and Adolescent Psychiatry, Charles Perrens Hospital, Université de Bordeaux, Bordeaux, France
- CNRS UMR 5287, Institut de Neurosciences Cognitives et Intégratives d’Aquitaine, Université de Bordeaux, Bordeaux, France
- Jean-René Cazalets
- CNRS UMR 5287, Institut de Neurosciences Cognitives et Intégratives d’Aquitaine, Université de Bordeaux, Bordeaux, France
Collapse
|
146
|
Cheetham M, Wu L, Pauli P, Jancke L. Arousal, valence, and the uncanny valley: psychophysiological and self-report findings. Front Psychol 2015; 6:981. [PMID: 26236260 PMCID: PMC4502535 DOI: 10.3389/fpsyg.2015.00981] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2014] [Accepted: 06/29/2015] [Indexed: 11/17/2022] Open
Abstract
The main prediction of the Uncanny Valley Hypothesis (UVH) is that observation of humanlike characters that are difficult to distinguish from the human counterpart will evoke a state of negative affect. Well-established electrophysiological [late positive potential (LPP) and facial electromyography (EMG)] and self-report [Self-Assessment Manikin (SAM)] indices of valence and arousal, i.e., the primary orthogonal dimensions of affective experience, were used to test this prediction by examining affective experience in response to categorically ambiguous compared with unambiguous avatar and human faces (N = 30). LPP and EMG provided direct psychophysiological indices of affective state during passive observation, and the SAM provided self-reported indices of affective state during explicit cognitive evaluation of static facial stimuli. The faces were drawn from well-controlled morph continua representing the UVH's dimension of human likeness (DHL). The results provide no support for the notion that category ambiguity along the DHL is specifically associated with enhanced experience of negative affect. On the contrary, the LPP and SAM-based measures of arousal and valence indicated a general increase in negative affective state (i.e., enhanced arousal and negative valence) with greater morph distance from the human end of the DHL. A second sample (N = 30) produced the same finding, using an ad hoc self-rating scale of feelings of familiarity, i.e., an oft-used measure of affective experience along the UVH's familiarity dimension. In conclusion, this multi-method approach using well-validated psychophysiological and self-rating indices of arousal and valence rejects the main prediction of the UVH for both passive observation and explicit affective evaluation of static faces.
Collapse
Affiliation(s)
- Marcus Cheetham
- Department of Neuropsychology, University of Zurich, Zurich, Switzerland; Department of Psychology, Nungin University, Seoul, South Korea
- Lingdan Wu
- Swiss Centre for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Wurzburg, Wurzburg, Germany
- Paul Pauli
- Department of Psychology, University of Wurzburg, Wurzburg, Germany
- Lutz Jancke
- Department of Neuropsychology, University of Zurich, Zurich, Switzerland
Collapse
|
147
|
Van Herwegen J. Williams syndrome and its cognitive profile: the importance of eye movements. Psychol Res Behav Manag 2015; 8:143-51. [PMID: 26082669 PMCID: PMC4461016 DOI: 10.2147/prbm.s63474] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
People with Williams syndrome (WS), a rare neurodevelopmental disorder that is caused by a deletion on the long arm of chromosome 7, often show an uneven cognitive profile with participants performing better on language and face recognition tasks, in contrast to visuospatial and number tasks. Recent studies have shown that this specific cognitive profile in WS is a result of atypical developmental processes that interact with and affect brain development from infancy onward. Using examples from language, face processing, number, and visuospatial studies, this review evaluates current evidence from eye-tracking and developmental studies and argues that domain general processes, such as the ability to plan or execute saccades, influence the development of these domain-specific outcomes. Although more research on eye movements in WS is required, the importance of eye movements for cognitive development suggests a possible intervention pathway to improve cognitive abilities in this population.
Collapse
Affiliation(s)
- Jo Van Herwegen
- Department of Psychology, Kingston University London, Surrey, UK
Collapse
|
148
|
Guo K, Shaw H. Face in profile view reduces perceived facial expression intensity: an eye-tracking study. Acta Psychol (Amst) 2015; 155:19-28. [PMID: 25531122 DOI: 10.1016/j.actpsy.2014.12.001] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Revised: 11/28/2014] [Accepted: 12/03/2014] [Indexed: 10/24/2022] Open
Abstract
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although, quantitatively, viewpoint had an expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing reflects categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
Collapse
|
149
|
Megreya AM, Bindemann M. Developmental Improvement and Age-Related Decline in Unfamiliar Face Matching. Perception 2015; 44:5-22. [DOI: 10.1068/p7825] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Age-related changes have been documented widely in studies of face recognition and eyewitness identification. However, it is not clear whether these changes arise from general developmental differences in memory or occur specifically during the perceptual processing of faces. We report two experiments to track such perceptual changes using a 1-in-10 (experiment 1) and 1-in-1 (experiment 2) matching task for unfamiliar faces. Both experiments showed improvements in face matching during childhood and adult-like accuracy levels by adolescence. In addition, face-matching performance declined in adults aged 65 years and over. These findings indicate that developmental improvements and aging-related differences in face processing arise from changes in the perceptual encoding of faces. A clear face inversion effect was also present in all age groups. This indicates that those age-related changes in face matching reflect a quantitative effect, whereby typical face processes are engaged but do not operate at the best possible level. These data suggest that part of the problem of eyewitness identification in children and elderly persons might reflect impairments in the perceptual processing of unfamiliar faces.
Collapse
Affiliation(s)
- Ahmed M Megreya
- Department of Psychological Sciences, College of Education, Qatar University, Doha, Qatar
Collapse
|
150
|
Fixation oculaire initiale et exploration d’un visage : le cas de l’enfant avec Trouble du spectre de l’autisme et retard développemental [Initial eye fixation and face exploration: the case of children with autism spectrum disorder and developmental delay]. ENFANCE 2014. [DOI: 10.4074/s0013754514004017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|