1.
Dunn JD, Miellet S, White D. Information sampling differences supporting superior face identity processing ability. Psychon Bull Rev 2024. PMID: 39313677. DOI: 10.3758/s13423-024-02579-0.
Abstract
Face recognition in humans is often cited as a model example of perceptual expertise, characterized by an increased tendency to process faces as holistic percepts. However, emerging evidence across different domains of expertise points to a critical role for feature-based processing strategies during the initial encoding of information. Here, we examined the eye-movement patterns of super-recognisers (individuals with extremely high face identification ability compared with the average person) using gaze-contingent "spotlight" apertures that restrict visual face information in real time around the point of fixation. As an additional contrast, we also compared their performance with that of facial examiners, highly trained individuals whose superiority has been shown to rely heavily on featural processing. Super-recognisers and facial examiners showed equivalent face-matching accuracy in both spotlight-aperture and natural-viewing conditions, suggesting that they were equally adept at using featural information for face identity processing. Further, both groups sampled more information across the face than controls. Together, these results show that the active exploration of facial features is an important determinant of face recognition ability that generalizes across different types of experts.
Affiliation(s)
- James D Dunn: School of Psychology, UNSW Sydney, Kensington, NSW, 2052, Australia.
- Sebastien Miellet: School of Psychology, University of Wollongong, Wollongong, Australia.
- David White: School of Psychology, UNSW Sydney, Kensington, NSW, 2052, Australia.
2.
Paparelli A, Sokhn N, Stacchi L, Coutrot A, Richoz AR, Caldara R. Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition. Sci Rep 2024; 14:16193. PMID: 39003314. PMCID: PMC11246522. DOI: 10.1038/s41598-024-66619-4.
Abstract
Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were largely comparable in their effectiveness for FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, of face processing.
Affiliation(s)
- Anita Paparelli: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland.
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland.
- Lisa Stacchi: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland.
- Antoine Coutrot: Laboratoire d'Informatique en Image et Systèmes d'information, French Centre National de la Recherche Scientifique, University of Lyon, Lyon, France.
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland.
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland.
3.
Yu L, Wang Z, Fan Y, Ban L, Mottron L. Autistic preschoolers display reduced attention orientation for competition but intact facilitation from a parallel competitor: Eye-tracking and behavioral data. Autism 2024; 28:1551-1564. PMID: 38514915. PMCID: PMC11134990. DOI: 10.1177/13623613241239416.
Abstract
Lay abstract: Recent research suggests that we might have underestimated the social motivation of autistic individuals. Autistic children may be engaged in a social situation even if they do not seem to attend to people in a typical way. Our study investigated how young autistic children behave in a "parallel competition": a situation in which people take part in friendly contests side by side but without direct interaction. First, we used eye-tracking technology to observe how much autistic children attended to two video scenarios: one depicting parallel competition, and one in which individuals played directly with each other. The results showed that autistic children looked less toward the parallel-competition video than their typically developing peers. However, when autistic children took part in parallel competitions themselves, playing physical and cognitive games against a teacher, their performance improved relative to playing individually just as much as that of their typically developing peers. This suggests that even though autistic children attend to social events differently, they can still benefit from the presence of others. These findings support complementing traditional cooperative activities by incorporating parallel activities into educational programs for young autistic children, thereby creating more inclusive learning environments.
Affiliation(s)
- Luodi Yu: Center for Autism Research, School of Education, Guangzhou University, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China.
- Zhiren Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China.
- Yuebo Fan: Center for Autism Research, School of Education, Guangzhou University, China; Guangzhou Autism Light and Salt Center, China.
- Lizhi Ban: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China.
- Laurent Mottron: Psychiatry and Addictology Department, and CIUSSS-NIM Research Center, University of Montreal, Canada.
4.
Ontivero-Ortega M, Iglesias-Fuster J, Perez-Hidalgo J, Marinazzo D, Valdes-Sosa M, Valdes-Sosa P. Intra-V1 functional networks and classification of observed stimuli. Front Neuroinform 2024; 18:1080173. PMID: 38528885. PMCID: PMC10961393. DOI: 10.3389/fninf.2024.1080173.
Abstract
Introduction: Previous studies suggest that co-fluctuations in neural activity within V1 (measured with fMRI) carry information about observed stimuli, potentially reflecting various cognitive mechanisms. This study explores the neural sources shaping this information by using different fMRI preprocessing methods. The common response to stimuli shared by all individuals can be emphasized by using inter-subject correlations, or de-emphasized by deconvolving the fMRI signal with hemodynamic response functions (HRFs) before calculating the correlations; the latter approach shifts the balance toward participant-idiosyncratic activity.
Methods: Here, we used multivariate pattern analysis of intra-V1 correlation matrices to predict the Level or Shape of observed Navon letters, employing the types of correlations described above. We assessed accuracy in inter-subject prediction of specific conjunctions of properties, and attempted intra-subject cross-classification of stimulus properties (i.e., prediction of one feature despite changes in the other). Weight maps from successful classifiers were projected onto the visual field. A control experiment investigated eye-movement patterns during stimulus presentation.
Results: All inter-subject classifiers accurately predicted the Level and Shape of specific observed stimuli. However, successful intra-subject cross-classification was achieved only for stimulus Level, not Shape, regardless of preprocessing scheme. Weight maps for successful Level classification differed between inter-subject correlations and deconvolved correlations; the latter revealed asymmetries in visual-field link strength that corresponded to known perceptual asymmetries. Post hoc measurement of eyeball fMRI signals found no differences in gaze between stimulus conditions, and a control experiment (with derived simulations) also suggested that eye movements do not explain the stimulus-related changes in V1 topology.
Discussion: Our findings indicate that both inter-subject common responses and participant-specific activity contribute to the information in intra-V1 co-fluctuations, albeit through distinct sub-networks. Deconvolution, which enhances subject-specific activity, highlighted interhemispheric links for Global stimuli. Further exploration of intra-V1 networks promises insights into the neural basis of attention and perceptual organization.
Affiliation(s)
- Marlis Ontivero-Ortega: The Clinical Hospital of Chengdu Brain Sciences, University of Electronic Science and Technology of China, Chengdu, China; Cuban Center for Neuroscience, Havana, Cuba; Department of Data Analysis, Ghent University, Ghent, Belgium.
- Mitchell Valdes-Sosa: The Clinical Hospital of Chengdu Brain Sciences, University of Electronic Science and Technology of China, Chengdu, China; Cuban Center for Neuroscience, Havana, Cuba.
- Pedro Valdes-Sosa: The Clinical Hospital of Chengdu Brain Sciences, University of Electronic Science and Technology of China, Chengdu, China; Cuban Center for Neuroscience, Havana, Cuba.
5.
Visalli A, Montefinese M, Viviani G, Finos L, Vallesi A, Ambrosini E. lmeEEG: Mass linear mixed-effects modeling of EEG data with crossed random effects. J Neurosci Methods 2024; 401:109991. PMID: 37884082. DOI: 10.1016/j.jneumeth.2023.109991.
Abstract
Background: Mixed-effects models are the current standard for the analysis of behavioral studies in psycholinguistics and related fields, given their ability to simultaneously model crossed random effects for subjects and items. However, they are rarely applied in neuroimaging and psychophysiology, where mass univariate analyses combined with permutation testing would be too computationally demanding to be practicable with mixed models.
New method: Here, we propose and validate an analytical strategy, lmeEEG, that enables the use of linear mixed models (LMMs) with crossed random intercepts in mass univariate analyses of EEG data. It avoids the unfeasible computational cost that would arise from massive permutation testing with LMMs using a simple solution: removing the random-effects contributions from the EEG data, then performing mass univariate linear analysis and permutations on the resulting marginal EEG.
Results: lmeEEG showed excellent performance in terms of power and false positive rate.
Comparison with existing methods: lmeEEG overcomes the computational costs of standard available approaches; in our benchmarks it was more than 300 times faster.
Conclusions: lmeEEG allows researchers to use mixed models in EEG mass univariate analyses, and we anticipate that LMMs will consequently become increasingly important in neuroscience. Data and code are available at osf.io/kw87a; the code and a tutorial are also available at github.com/antovis86/lmeEEG.
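The computational trick summarized above (estimate the crossed random-effects contributions once, subtract them to obtain a "marginal" signal, then run only cheap ordinary-least-squares fits inside the permutation loop) can be sketched in Python. This is a toy illustration for a single channel/time point, using a crude moment-based stand-in for the mixed-model fit; it is not the authors' MATLAB implementation (available at the repositories above), and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_item = 10, 20
subj = np.repeat(np.arange(n_subj), n_item)       # subject index per trial
item = np.tile(np.arange(n_item), n_subj)         # item index per trial
x = rng.standard_normal(n_subj * n_item)          # trial-level predictor

# Simulate one EEG "pixel": fixed effect + crossed random intercepts + noise.
y = (0.5 * x
     + rng.standard_normal(n_subj)[subj]
     + rng.standard_normal(n_item)[item]
     + rng.standard_normal(x.size))

# Step 1 (done once per channel/time point): estimate the crossed random
# intercepts. Cheap stand-in for the mixed-model fit: subject and item means
# of the residuals after removing the fixed-effect fit.
beta = np.polyfit(x, y, 1)
resid = y - np.polyval(beta, x)
subj_re = np.array([resid[subj == s].mean() for s in range(n_subj)])
item_re = np.array([resid[item == i].mean() for i in range(n_item)])

# Step 2: "marginal" EEG = data minus the random-effects contributions.
y_marginal = y - subj_re[subj] - item_re[item]

# Step 3: mass univariate OLS + permutation testing on the marginal data;
# no mixed model is refit inside the loop, which is the whole speed-up.
def t_of_slope(xv, yv):
    b, a = np.polyfit(xv, yv, 1)
    res = yv - (b * xv + a)
    se = np.sqrt(res.var(ddof=2) / (len(xv) * xv.var()))
    return b / se

t_obs = t_of_slope(x, y_marginal)
t_null = np.array([t_of_slope(rng.permutation(x), y_marginal)
                   for _ in range(500)])
p = (np.sum(np.abs(t_null) >= abs(t_obs)) + 1) / (len(t_null) + 1)
```

In the real method the marginalization is done once per channel and time point, so the permutation loop only ever touches ordinary linear regressions.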
Affiliation(s)
- Maria Montefinese: Department of Developmental and Social Psychology, University of Padova, Padova, Italy.
- Giada Viviani: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center, University of Padova, Padova, Italy.
- Livio Finos: Padova Neuroscience Center, University of Padova, Padova, Italy; Department of Statistical Sciences, University of Padova, Padova, Italy.
- Antonino Vallesi: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center, University of Padova, Padova, Italy.
- Ettore Ambrosini: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center, University of Padova, Padova, Italy; Department of General Psychology, University of Padova, Padova, Italy.
6.
Gingras F, Estéphan A, Fiset D, Lingnan H, Caldara R. Differences in eye movements for face recognition between Canadian and Chinese participants are not modulated by social orientation. PLoS One 2023; 18:e0295256. PMID: 38096320. PMCID: PMC10721205. DOI: 10.1371/journal.pone.0295256.
Abstract
Face recognition strategies do not generalize across individuals. Many studies have reported robust cultural differences between Western European/North American and East Asian observers in eye-movement strategies during face recognition. The social orientation hypothesis posits that the individualistic vs. collectivistic (IND/COL) value systems that respectively define Western European/North American and East Asian societies are at the root of many cultural differences in visual perception. Whether social orientation is also responsible for this cultural contrast in face recognition remains to be clarified. To this end, we conducted two experiments with Western European/North American and Chinese observers. In Experiment 1, we probed the existence of a link between IND/COL social values and eye movements during face recognition using an IND/COL priming paradigm. In Experiment 2, we dissected this relationship in greater depth using two IND/COL questionnaires, including subdimensions of those constructs. In both studies, cultural differences in fixation patterns were revealed between Western European/North American and East Asian observers. Priming IND/COL values did not modulate eye-movement sampling strategies, and only specific subdimensions of the IND/COL questionnaires were associated with distinct eye-movement patterns. Altogether, we show that the typical IND/COL contrast cannot fully account for cultural differences in eye-movement strategies for face recognition. Cultural differences in eye movements for faces may originate from mechanisms distinct from social orientation.
Affiliation(s)
- Francis Gingras: Département de psychoéducation et psychologie, Université du Québec en Outaouais, Gatineau, Canada; Département de psychologie, Université du Québec à Montréal, Montreal, Canada.
- Amanda Estéphan: Département de psychoéducation et psychologie, Université du Québec en Outaouais, Gatineau, Canada; Département de psychologie, Université du Québec à Montréal, Montreal, Canada.
- Daniel Fiset: Département de psychoéducation et psychologie, Université du Québec en Outaouais, Gatineau, Canada.
- He Lingnan: School of Communication and Design, Sun Yat-Sen University, Guangzhou, People's Republic of China.
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland.
- Caroline Blais: Département de psychoéducation et psychologie, Université du Québec en Outaouais, Gatineau, Canada.
7.
Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023; 229:105622. PMID: 36641829. DOI: 10.1016/j.jecp.2022.105622.
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
- Yingdi Liu: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
8.
Doidy F, Desaunay P, Rebillard C, Clochon P, Lambrechts A, Wantzen P, Guénolé F, Baleyte JM, Eustache F, Bowler DM, Lebreton K, Guillery-Girard B. How scene encoding affects memory discrimination: Analysing eye movements data using data driven methods. Visual Cognition 2023. DOI: 10.1080/13506285.2023.2188335.
Affiliation(s)
- F. Doidy: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- P. Desaunay: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, CHU de Caen, Caen, France.
- C. Rebillard: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- P. Clochon: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- A. Lambrechts: Autism Research Group, Department of Psychology, City, University of London, London, UK.
- P. Wantzen: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- F. Guénolé: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, CHU de Caen, Caen, France.
- J. M. Baleyte: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, Centre Hospitalier Interuniversitaire de Créteil, Créteil, France.
- F. Eustache: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- D. M. Bowler: Autism Research Group, Department of Psychology, City, University of London, London, UK.
- K. Lebreton: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
- B. Guillery-Girard: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France.
9.
Beylergil SB, Kilbane C, Shaikh AG, Ghasia FF. Eye movements in Parkinson's disease during visual search. J Neurol Sci 2022; 440:120299. PMID: 35810513. DOI: 10.1016/j.jns.2022.120299.
Abstract
Visual spatial dysfunction is not uncommon in Parkinson's disease. We hypothesized that visual search behavior is impaired in Parkinson's disease and that the deficits correlate with changes in the amplitude and frequency of fixational and non-fixational rapid eye movements. Using high-resolution video-oculography, we measured eye movements (the horizontal and vertical angular position vectors of the right and left eyes) in a Parkinsonian cohort who viewed a blank scene and pictures of real-life scenes. The latter were paired with a task of searching for an object hidden in clutter, at either an expected or an unexpected location. The Parkinsonian cohort took longer to first reach the region of interest, but the ultimate response time was comparable between Parkinson's disease participants and their healthy peers. Fixation duration was comparable in the two cohorts, with a trend toward a decline for objects located at unexpected locations. During blank-scene viewing, Parkinson's disease participants made more fixational saccades, with significantly larger amplitudes, and fewer non-fixational saccades, with significantly smaller amplitudes; however, the overall scanned area of the blank scene was not affected. During visual search of a target object, Parkinson's disease participants made fewer non-fixational saccades, with amplitudes comparable to healthy controls. Fixational saccades during visual search were larger in Parkinson's disease, particularly when the target was placed at an unexpected location, but their frequency was unchanged.
Affiliation(s)
- Sinem B Beylergil: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA; Daroff-Dell'Osso Ocular Motility Laboratory, Louis Stokes Cleveland VA Medical Center, Cleveland, USA.
- Camilla Kilbane: Department of Neurology, University Hospitals, Cleveland, USA.
- Aasef G Shaikh: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA; Daroff-Dell'Osso Ocular Motility Laboratory, Louis Stokes Cleveland VA Medical Center, Cleveland, USA; Department of Neurology, University Hospitals, Cleveland, USA.
- Fatema F Ghasia: Daroff-Dell'Osso Ocular Motility Laboratory, Louis Stokes Cleveland VA Medical Center, Cleveland, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, USA.
10.
Nicholls VI, Wiener JM, Meso AI, Miellet S. The Relative Contribution of Executive Functions and Aging on Attentional Control During Road Crossing. Front Psychol 2022; 13:912446. PMID: 35645940. PMCID: PMC9133663. DOI: 10.3389/fpsyg.2022.912446.
Abstract
As we age, many physical, perceptual and cognitive abilities decline, which can critically impact our day-to-day lives. However, many of these declines occur concurrently, so it is challenging to disentangle their relative contributions to age-related deterioration in realistic tasks such as road crossing. Research into road crossing has shown that aging and a decline in executive functions (EFs) are associated with altered information sampling and less safe crossing decisions compared with younger adults. In these studies, however, age-related and EF-related declines were confounded, so it is impossible to tell whether age-related declines in EFs affect visual sampling and road-crossing performance, or whether visual exploration and road-crossing performance are affected by aging independently of a decline in EFs. In this study, we recruited older adults with maintained EFs to isolate the impact of aging, independently of a decline in EFs, on road-crossing abilities. We recorded the eye movements of younger and older adults while they watched videos of road traffic and decided when they could cross the road. Overall, our results show that older adults with maintained EFs sample visual information and make road-crossing decisions similarly to younger adults. Our findings also reveal that both environmental constraints and EF abilities interact with aging to influence how the road-crossing task is performed. They suggest that older pedestrians' safety, and their independence in day-to-day life, can be improved by limiting scene complexity and preserving EF abilities.
Affiliation(s)
- Victoria I Nicholls: Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Ageing and Dementia Research Centre, Bournemouth University, Poole, United Kingdom.
- Jan M Wiener: Ageing and Dementia Research Centre, Bournemouth University, Poole, United Kingdom.
- Andrew Isaac Meso: Neuroimaging Department, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom.
- Sebastien Miellet: School of Psychology, University of Wollongong, Wollongong, NSW, Australia.
11.
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022; 173:108278. DOI: 10.1016/j.neuropsychologia.2022.108278.
12.
Shen W, Wei X. Tracking the effectiveness of creative ads with a computer mouse. Psych J 2021; 11:51-54. PMID: 34743421. DOI: 10.1002/pchj.497.
Abstract
This study aimed to explore the impact of creative advertising on consumers' purchase intention and attention. By analyzing mouse trajectories in two tasks, we found that once an advertisement received sufficient attention, creative advertising attracted more attention and reduced individuals' feelings of uncertainty when making purchase decisions.
Affiliation(s)
- Wangbing Shen: School of Public Administration, Hohai University, Nanjing, China.
- Xing Wei: School of Public Administration, Hohai University, Nanjing, China.
13.
de Lissa P, Sokhn N, Lasrado S, Tanaka K, Watanabe K, Caldara R. Rapid saccadic categorization of other-race faces. J Vis 2021; 21:1. PMID: 34724530. PMCID: PMC8572436. DOI: 10.1167/jov.21.12.1.
Abstract
The human visual system is very fast and efficient at extracting socially relevant information from faces. Visual studies employing foveated faces have consistently reported faster race-categorization response times for other-race than for same-race faces. However, in everyday life we typically encounter faces outside the foveated visual field. In Study 1, we explored whether and how race is categorized extrafoveally in same- and other-race faces normalized for low-level properties, by tracking the eye movements of Western Caucasian and East Asian observers in a saccadic response task. The results show not only that people are sensitive to race in faces presented outside central vision, but that the speed advantage in categorizing other-race faces occurs astonishingly quickly, in as little as 200 ms. Critically, this visual categorization process was approximately 300 ms faster than typical button-press responses to centrally presented foveated faces. Study 2 investigated the genesis of the extrafoveal saccadic speed advantage by comparing the influence of response modality (button presses vs. saccadic responses), as well as the potential contribution of the impoverished low-spatial-frequency spectrum that characterizes extrafoveal visual processing. Button-press race categorization was not significantly faster with reconstructed retinal-filtered low-spatial-frequency faces, regardless of visual-field presentation; the speed of race categorization was boosted only by extrafoveal saccades, not by centrally foveated faces. Race is thus a potent, rapid and effective visual signal transmitted by faces and used for the categorization of ingroup/outgroup members. This fast, universal visual categorization can occur outside central vision, igniting a cascade of social processes.
Collapse
Affiliation(s)
- Peter de Lissa
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland.,
| | - Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland.,
| | - Sasha Lasrado
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland.,
| | - Kanji Tanaka
- Faculty of Arts and Science, Kyushu University, Fukuoka, Japan.,
| | - Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan.,Faculty of Arts, Design, and Architecture, University of New South Wales, Sydney, Australia.,
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland.,
| |
Collapse
|
14
|
Efficient calculations of NSS-based gaze similarity for time-dependent stimuli. Behav Res Methods 2021; 54:94-116. [PMID: 34109561 DOI: 10.3758/s13428-021-01562-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/08/2021] [Indexed: 11/08/2022]
Abstract
The degree of spatial similarity between the gaze of participants viewing dynamic stimuli such as videos has previously been measured using metrics based on the NSS (Normalized Scanpath Saliency). Methods currently used to calculate this metric rely on a numerical grid, which can be computationally prohibitive for a variety of otherwise useful applications such as Monte Carlo analyses. In the present work we derive a new analytical calculation method for the same metric that yields equally or more accurate results, at speeds that can be orders of magnitude faster (depending on parameters). Our analytical method scales well with dimensionality and could also be of use for other applications. The drawback is that it can become very slow if the number of participants in the study is very large or if the gaze sampling rate is high. We provide performance benchmarks for a Fortran implementation of our method and make the source code available.
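As an illustrative aside, the conventional numerical-grid NSS computation that this article's analytical method replaces can be sketched as follows. This is a minimal sketch under stated assumptions: the grid size, Gaussian sigma, and function name are illustrative choices, not the authors' (analytical, Fortran) implementation.

```python
import numpy as np

def nss_similarity(gaze_xy, others_xy, grid=(64, 64), sigma=2.0):
    """Grid-based NSS sketch: smooth the other observers' gaze points for
    one video frame into a density map, z-normalize the map, then sample
    it at the test observer's gaze location. Higher values mean the test
    gaze falls where the group looked."""
    h, w = grid
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(grid)
    for gx, gy in others_xy:  # one Gaussian blob per gaze point
        density += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    density = (density - density.mean()) / density.std()  # z-normalize
    x, y = gaze_xy
    return density[int(round(y)), int(round(x))]  # NSS value at test gaze
```

For time-dependent stimuli, this computation is repeated per frame (and, e.g., per Monte Carlo sample), which is why replacing the grid with an analytical evaluation at the query point can pay off.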
Collapse
|
15
|
Rim NW, Choe KW, Scrivner C, Berman MG. Introducing Point-of-Interest as an alternative to Area-of-Interest for fixation duration analysis. PLoS One 2021; 16:e0250170. [PMID: 33970920 PMCID: PMC8109773 DOI: 10.1371/journal.pone.0250170] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Accepted: 03/31/2021] [Indexed: 11/18/2022] Open
Abstract
Many eye-tracking data analyses rely on the Area-of-Interest (AOI) methodology, which uses AOIs to analyze metrics such as fixations. However, AOI-based methods have inherent limitations, including variability and subjectivity in the shape, size, and location of AOIs. In this article, we propose an alternative to the traditional AOI dwell-time analysis: Weighted Sum Durations (WSD). This approach reduces the subjectivity of AOI definitions by using Points-of-Interest (POIs) while maintaining interpretability. In WSD, the duration of each fixation toward a POI is weighted by its distance from the POI, and the weighted durations are summed to generate a metric comparable to AOI dwell time. To validate WSD, we reanalyzed data from a previously published eye-tracking study (n = 90). The reanalysis replicated the original findings that people gaze less toward faces and more toward points of contact when viewing violent social interactions.
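The weighting scheme described above can be sketched as follows. This is a hedged illustration, not the article's implementation: the Gaussian distance fall-off, the sigma value (in pixels), and the function name are assumptions made here for the example.

```python
import math

def weighted_sum_duration(fixations, poi, sigma=50.0):
    """Weighted Sum Durations (WSD) sketch: each fixation's duration is
    down-weighted by its distance from the point of interest (POI), and
    the weighted durations are summed, yielding a dwell-time-like metric
    without drawing an AOI boundary.

    fixations: iterable of (x, y, duration_ms); poi: (x, y);
    sigma: illustrative fall-off scale in pixels (assumption)."""
    px, py = poi
    total = 0.0
    for x, y, dur in fixations:
        dist = math.hypot(x - px, y - py)      # distance from the POI
        weight = math.exp(-dist ** 2 / (2 * sigma ** 2))  # 1 at the POI, ~0 far away
        total += dur * weight
    return total
```

A fixation landing exactly on the POI contributes its full duration, while distant fixations contribute almost nothing, so the metric degrades smoothly with distance instead of cutting off at an AOI edge.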
Collapse
Affiliation(s)
- Nak Won Rim
- Masters in Computational Social Science, The University of Chicago, Chicago, Illinois, United States of America
| | - Kyoung Whan Choe
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, Illinois, United States of America
| | - Coltan Scrivner
- Department of Comparative Human Development, The University of Chicago, Chicago, Illinois, United States of America
- Institute for Mind and Biology, The University of Chicago, Chicago, Illinois, United States of America
| | - Marc G. Berman
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
| |
Collapse
|
16
|
Jiang K, Wang Y, Feng Z, Cui J, Huang Z, Yu Z, Sze NN. Research on intervention methods for children's street-crossing behaviour: Application and expansion of the theory of "behaviour spectrums". ACCIDENT; ANALYSIS AND PREVENTION 2021; 152:105979. [PMID: 33548586 DOI: 10.1016/j.aap.2021.105979] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Revised: 01/01/2021] [Accepted: 01/05/2021] [Indexed: 06/12/2023]
Abstract
Due to immaturity in their physical and cognitive development, children are particularly vulnerable to road traffic injuries as pedestrians. Child pedestrian injury occurs primarily in urban areas, with a significant share at crosswalks. The aim of this study is to explore whether an intervention programme based on the theory of "behaviour spectrums" can improve the street-crossing skills of primary school children. Children were recruited near a local primary school through invitation letters and were randomly divided into two groups: a control group (n = 10, no intervention) and an experimental group (n = 10, intervention). Children in the experimental group received 30-45 min of training. The child participants wore an eye tracker and performed a crossing test in a real-world street environment, successively passing through an unsignalised intersection, an unsignalised T-intersection and a signalised intersection along a designated test route. A high-definition camera recorded the children's crossing behaviour, and the Tobii Pro Glasses 2 eye tracker provided indicators of their visual behaviour in the areas of interest (AOIs) in the street. Crossing behaviour was evaluated in the control group (no intervention) and in the experimental group at two time points: immediately after the intervention and at a retest one month later. The results showed that, compared with the control group, children in the experimental group no longer focused narrowly on the area around the body (e.g., the zebra crossing area) and directly ahead (e.g., the sidewalk area), but directed more visual attention to the traffic areas on the left and right sides of the zebra crossing; unsafe crossing behaviour was accordingly reduced. At the one-month retest, the intervention effect on some indicators had weakened significantly compared with the test conducted immediately after the intervention. Overall, the results show that an intervention programme based on the theory of "behaviour spectrums" can improve children's crossing skills. This study provides valuable information for the development and evaluation of intervention programmes to improve children's street-crossing skills.
Collapse
Affiliation(s)
- Kang Jiang
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei, 230009, Anhui, PR China.
| | - Yulong Wang
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei, 230009, Anhui, PR China.
| | - Zhongxiang Feng
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei, 230009, Anhui, PR China.
| | - Jianqiang Cui
- School of Environment and Science, Griffith University, Brisbane, Queensland, Australia.
| | - Zhipeng Huang
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei, 230009, Anhui, PR China.
| | - Zhenhua Yu
- School of Mechanical Engineering, Hefei University of Technology, Hefei, 230009, Anhui, PR China.
| | - N N Sze
- Department of Civil and Environmental Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
| |
Collapse
|
17
|
Hu Z, Wang X, Hu X, Lei X, Liu H. Aesthetic Evaluation of Computer Icons: Visual Pattern Differences Between Art-Trained and Lay Raters of Icons. Percept Mot Skills 2020; 128:115-134. [PMID: 33121355 DOI: 10.1177/0031512520969637] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Adopting eye-tracking measures, we explored the influence of art experience on the aesthetic evaluation of computer icons. Participants were 27 college students with art training and 27 laypersons. Both groups rated icons of varying complexity and symmetry for "beauty" while we recorded their eye movements. Results showed that art-trained participants viewed the icons with more eye fixations and shorter scanning paths than participants in the non-art group, suggesting that art-trained participants processed the icons more deliberately. In addition, we observed an interaction between art experience and symmetry: for asymmetrical icons, art-trained participants' ratings tended to be higher than those of laypersons; for symmetric icons, there was no such rater difference. The different visual patterns associated with the two groups' aesthetic evaluations suggest that art experience plays a pivotal role in the aesthetic appreciation of icons, with important implications for icon design strategy.
Collapse
Affiliation(s)
- Zhiguo Hu
- Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, P. R. China.,Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou Normal University, Hangzhou, P. R. China
| | - Xinrui Wang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, P. R. China
| | - Xinkui Hu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, P. R. China
| | - Xiaofang Lei
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, P. R. China
| | - Hongyan Liu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, P. R. China
| |
Collapse
|
18
|
Visual exploration of emotional body language: a behavioural and eye-tracking study. PSYCHOLOGICAL RESEARCH 2020; 85:2326-2339. [PMID: 32920675 DOI: 10.1007/s00426-020-01416-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Accepted: 09/01/2020] [Indexed: 10/23/2022]
Abstract
Bodily postures are essential for correctly comprehending others' emotions and intentions. Nonetheless, very few studies have focused on the pattern of eye movements involved in the recognition of emotional body language (EBL), and those that have demonstrate significant differences between emotions. A yet unanswered question concerns the presence of a "left-gaze bias" (i.e., the tendency to look first, make more fixations, and spend more looking time on the left side of centrally presented stimuli) while scanning bodies. Hence, the present study explores both the presence of a left-gaze bias and the modulation of EBL visual exploration mechanisms by investigating participants' fixation patterns (number of fixations and latency of the first fixation) while they judged the emotional intensity of static bodily postures (Angry, Happy and Neutral, without the head). The results on the latency of first fixations demonstrate for the first time a left-gaze bias while scanning bodies, suggesting that it may be related to the stronger expressiveness of the left hand (from the observer's point of view), whereas the results on the number of fixations only partially support our hypothesis. Moreover, an opposite viewing pattern between Angry and Happy bodily postures is shown. In sum, by integrating the spatial and temporal dimensions of gaze exploration patterns, the present results shed new light on EBL visual exploration mechanisms.
Collapse
|
19
|
Papinutto M, Lao J, Lalanne D, Caldara R. Watchers do not follow the eye movements of Walkers. Vision Res 2020; 176:130-140. [PMID: 32882595 DOI: 10.1016/j.visres.2020.08.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 08/03/2020] [Accepted: 08/05/2020] [Indexed: 11/27/2022]
Abstract
Eye movements are a functional signature of how the visual system effectively decodes and adapts to the environment. However, scientific knowledge of eye movements arises mostly from studies conducted in laboratories, with well-controlled stimuli presented in constrained, unnatural settings. Only a few studies have attempted to directly assess whether eye-movement data acquired in the real world generalize to laboratory settings with the same visual inputs, and none of these controlled for both the auditory signals typical of real-world settings and the top-down task effects across conditions, leaving this question unresolved. To minimize this inherent gap across conditions, we compared the eye movements recorded from observers during ecological spatial navigation in the wild (the Walkers) with those recorded in the laboratory (the Watchers) on the same visual and auditory inputs, with both groups performing the very same active cognitive task. We derived robust data-driven statistical saliency and motion maps. The Walkers and Watchers differed in eye-movement characteristics: fixation number, fixation duration and saccade amplitude. The Watchers relied significantly more on saliency and motion than the Walkers. Interestingly, both groups exhibited similar fixation patterns towards social agents and objects. Altogether, our data show that eye-movement patterns obtained in the laboratory do not fully generalize to the real world, even when task and auditory information are controlled. These observations invite caution when generalizing laboratory eye movements to those of ecological spatial navigation.
Collapse
Affiliation(s)
- M Papinutto
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland; Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland.
| | - J Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
| | - D Lalanne
- Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland
| | - R Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
| |
Collapse
|
20
|
Hilton C, Miellet S, Slattery TJ, Wiener J. Are age-related deficits in route learning related to control of visual attention? PSYCHOLOGICAL RESEARCH 2020; 84:1473-1484. [PMID: 30850875 PMCID: PMC7387378 DOI: 10.1007/s00426-019-01159-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2018] [Accepted: 02/18/2019] [Indexed: 11/29/2022]
Abstract
Typically aged adults show reduced ability to learn a route compared to younger adults. In this experiment, we investigate the role of visual attention through eye-tracking and engagement of attentional resources in age-related route learning deficits. Participants were shown a route through a realistic virtual environment before being tested on their route knowledge. Younger and older adults were compared on their gaze behaviour during route learning and on their reaction time to a secondary probe task as a measure of attentional engagement. Behavioural results show a performance deficit in route knowledge for older adults compared to younger adults, which is consistent with previous research. We replicated previous findings showing that reaction times to the secondary probe task were longer at decision points than non-decision points, indicating stronger attentional engagement at navigationally relevant locations. However, we found no differences in attentional engagement and no differences for a range of gaze measures between age groups. We conclude that age-related changes in route learning ability are not reflected in changes in control of visual attention or regulation of attentional engagement.
Collapse
Affiliation(s)
- Christopher Hilton
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK.
| | - Sebastien Miellet
- Active Vision Lab, School of Psychology, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
| | - Timothy J Slattery
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
| | - Jan Wiener
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
| |
Collapse
|
21
|
Millen AE, Hope L, Hillstrom AP. Eye spy a liar: assessing the utility of eye fixations and confidence judgments for detecting concealed recognition of faces, scenes and objects. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2020; 5:38. [PMID: 32797306 PMCID: PMC7427826 DOI: 10.1186/s41235-020-00227-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 05/08/2020] [Indexed: 11/10/2022]
Abstract
BACKGROUND In criminal investigations, uncooperative witnesses might deny knowing a perpetrator, the location of a murder scene or knowledge of a weapon. We sought to identify markers of recognition in eye fixations and confidence judgments whilst participants told the truth and lied about recognising faces (Experiment 1) and scenes and objects (Experiment 2) that varied in familiarity. To detect recognition we calculated effect size differences in markers of recognition between familiar and unfamiliar items that varied in familiarity (personally familiar, newly learned). RESULTS In Experiment 1, recognition of personally familiar faces was reliably detected across multiple fixation markers (e.g. fewer fixations, fewer interest areas viewed, fewer return fixations) during honest and concealed recognition. In Experiment 2, recognition of personally familiar non-face items (scenes and objects) was detected solely by fewer fixations during honest and concealed recognition; differences in other fixation measures were not consistent. In both experiments, fewer fixations exposed concealed recognition of newly learned faces, scenes and objects, but the same pattern was not observed during honest recognition. Confidence ratings were higher for recognition of personally familiar faces than for unfamiliar faces. CONCLUSIONS Robust memories of personally familiar faces were detected in patterns of fixations and confidence ratings, irrespective of task demands required to conceal recognition. Crucially, we demonstrate that newly learned faces should not be used as a proxy for real-world familiarity, and that conclusions should not be generalised across different types of familiarity or stimulus class.
Collapse
Affiliation(s)
- Ailsa E Millen
- Department of Psychology, University of Portsmouth, Portsmouth, England, UK.
| | - Lorraine Hope
- Department of Psychology, University of Portsmouth, Portsmouth, England, UK
| | - Anne P Hillstrom
- Department of Psychology, University of Portsmouth, Portsmouth, England, UK
| |
Collapse
|
22
|
Chan FH, Suen H, Jackson T, Vlaeyen JW, Barry TJ. Pain-related attentional processes: A systematic review of eye-tracking research. Clin Psychol Rev 2020; 80:101884. [DOI: 10.1016/j.cpr.2020.101884] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 05/03/2020] [Accepted: 06/11/2020] [Indexed: 02/01/2023]
|
23
|
Wang C, Haponenko H, Liu X, Sun H, Zhao G. How Attentional Guidance and Response Selection Boost Contextual Learning: Evidence from Eye Movement. Adv Cogn Psychol 2020; 15:265-275. [PMID: 32477438 PMCID: PMC7246933 DOI: 10.5709/acp-0274-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
The contextual cueing effect (CCE) refers to the learned association between a predictive configuration and a target location, which speeds responses to the target. Previous studies have examined the processes underlying the CCE (initial perceptual processing, attentional guidance, and response selection) but have not reached a general consensus on their contributions. In the present study, we used eye tracking to address this question by analyzing the oculomotor correlates of context-guided learning in visual search and eliminating indefinite response factors during response priming. The results show that both attentional guidance and response selection contribute to contextual learning.
Collapse
Affiliation(s)
- Chao Wang
- Faculty of Psychology, Tianjin Normal University, Tianjin, Tianjin, China, 300387
| | - Hanna Haponenko
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
| | - Xingze Liu
- Medical Psychological Center, Second Xiangya Hospital of Central South University, Hunan, China, 410011
| | - Hongjin Sun
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
| | - Guang Zhao
- Faculty of Psychology, Tianjin Normal University, Tianjin, Tianjin, China, 300387
| |
Collapse
|
24
|
Chan FHF, Suen H, Hsiao JH, Chan AB, Barry TJ. Interpretation biases and visual attention in the processing of ambiguous information in chronic pain. Eur J Pain 2020; 24:1242-1256. [PMID: 32223046 DOI: 10.1002/ejp.1565] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Revised: 03/12/2020] [Accepted: 03/16/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND Theories propose that interpretation biases and attentional biases might account for the maintenance of chronic pain symptoms, but the interactions between these two forms of biases in the context of chronic pain are understudied. METHODS To fill this gap, 63 participants (40 females) with and without chronic pain completed an interpretation bias task that measures participants' interpretation styles in ambiguous scenarios and a novel eye-tracking task where participants freely viewed neutral faces that were given ambiguous pain/health-related labels (i.e. 'doctor', 'patient' and 'healthy people'). Eye movements were analysed with the Hidden Markov Models (EMHMM) approach, a machine-learning data-driven method that clusters people's eye movements into different strategy subgroups. RESULTS Adults with chronic pain endorsed more negative interpretations for scenarios related to immediate bodily injury and long-term illness than healthy controls, but they did not differ significantly in terms of their eye movements on ambiguous faces. Across groups, people who interpreted illness-related scenarios in a more negative way also focused more on the nose region and less on the eye region when looking at patients' and healthy people's faces and, to a lesser extent, doctors' faces. This association between interpretive and attentional processing was particularly apparent in participants with chronic pain. CONCLUSIONS In summary, the present study provided evidence for the interplay between multiple forms of cognitive biases. Future studies should investigate whether this interaction might influence subsequent functioning in people with chronic pain.
Collapse
Affiliation(s)
| | - Hin Suen
- Department of Psychology, The University of Hong Kong, Hong Kong
| | - Janet H Hsiao
- Department of Psychology, The University of Hong Kong, Hong Kong.,The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong
| | - Antoni B Chan
- Department of Computer Science, The City University of Hong Kong, Hong Kong
| | - Tom J Barry
- Department of Psychology, The University of Hong Kong, Hong Kong.,Institute of Psychiatry, King's College London, London, UK
| |
Collapse
|
25
|
Wang Q, Hoi SP, Wang Y, Song C, Li T, Lam CM, Fang F, Yi L. Out of mind, out of sight? Investigating abnormal face scanning in autism spectrum disorder using gaze‐contingent paradigm. Dev Sci 2019; 23:e12856. [DOI: 10.1111/desc.12856] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2018] [Revised: 01/23/2019] [Accepted: 04/18/2019] [Indexed: 12/16/2022]
Affiliation(s)
- Qiandong Wang
- Peking‐Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies Peking University Beijing China
| | - Sio Pan Hoi
- School of Psychological and Cognitive Science & Beijing Key Laboratory of Behavior and Mental Health Peking University Beijing China
| | - Yuyin Wang
- Department of Psychology Sun Yat‐sen University Guangzhou China
| | - Ci Song
- School of Psychological and Cognitive Science & Beijing Key Laboratory of Behavior and Mental Health Peking University Beijing China
| | - Tianbi Li
- School of Psychological and Cognitive Science & Beijing Key Laboratory of Behavior and Mental Health Peking University Beijing China
| | - Cheuk Man Lam
- Institute of Psychology Chinese Academy of Science Beijing China
| | - Fang Fang
- Peking‐Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies Peking University Beijing China
- School of Psychological and Cognitive Science & Beijing Key Laboratory of Behavior and Mental Health Peking University Beijing China
- Key Laboratory of Machine Perception (Ministry of Education) Peking University Beijing China
- PKU‐IDG/McGovern Institute for Brain Research Peking University Beijing China
| | - Li Yi
- School of Psychological and Cognitive Science & Beijing Key Laboratory of Behavior and Mental Health Peking University Beijing China
| |
Collapse
|
26
|
Huber-Huber C, Buonocore A, Dimigen O, Hickey C, Melcher D. The peripheral preview effect with faces: Combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing. Neuroimage 2019; 200:344-362. [PMID: 31260837 DOI: 10.1016/j.neuroimage.2019.06.059] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2018] [Revised: 05/23/2019] [Accepted: 06/25/2019] [Indexed: 02/06/2023] Open
Abstract
The world appears stable despite saccadic eye-movements. One possible explanation for this phenomenon is that the visual system predicts upcoming input across saccadic eye-movements based on peripheral preview of the saccadic target. We tested this idea using concurrent electroencephalography (EEG) and eye-tracking. Participants made cued saccades to peripheral upright or inverted face stimuli that changed orientation (invalid preview) or maintained orientation (valid preview) while the saccade was completed. Experiment 1 demonstrated better discrimination performance and a reduced fixation-locked N170 component (fN170) with valid than with invalid preview, demonstrating integration of pre- and post-saccadic information. Moreover, the early fixation-related potentials (FRP) showed a preview face inversion effect suggesting that some pre-saccadic input was represented in the brain until around 170 ms post fixation-onset. Experiment 2 replicated Experiment 1 and manipulated the proportion of valid and invalid trials to test whether the preview effect reflects context-based prediction across trials. A whole-scalp Bayes factor analysis showed that this manipulation did not alter the fN170 preview effect but did influence the face inversion effect before the saccade. The pre-saccadic inversion effect declined earlier in the mostly invalid block than in the mostly valid block, which is consistent with the notion of pre-saccadic expectations. In addition, in both studies, we found strong evidence for an interaction between the pre-saccadic preview stimulus and the post-saccadic target as early as 50 ms (Experiment 2) or 90 ms (Experiment 1) into the new fixation. These findings suggest that visual stability may involve three temporal stages: prediction about the saccadic target, integration of pre-saccadic and post-saccadic information at around 50-90 ms post fixation onset, and post-saccadic facilitation of rapid categorization.
Collapse
Affiliation(s)
- Christoph Huber-Huber
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN, 38068, Italy.
| | - Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Tuebingen University, Otfried-Müller-Straße 25, Tuebingen, 72076, Germany; Hertie Institute for Clinical Brain Research, Tuebingen University, Tuebingen, 72076, Germany
| | - Olaf Dimigen
- Department of Psychology, Humboldt-Universität zu Berlin, Unter Den Linden 6, 10099, Berlin, Germany
| | - Clayton Hickey
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN, 38068, Italy
| | - David Melcher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN, 38068, Italy
| |
Collapse
|
27
|
Neural Representations of Faces Are Tuned to Eye Movements. J Neurosci 2019; 39:4113-4123. [PMID: 30867260 DOI: 10.1523/jneurosci.2968-18.2019] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 02/07/2019] [Accepted: 03/05/2019] [Indexed: 01/23/2023] Open
Abstract
Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this aim, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG that was recorded while they fixated different facial information. We found that foveation of facial features fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs mouth lookers), and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal face representations. SIGNIFICANCE STATEMENT: When engaging in face recognition, observers deploy idiosyncratic fixation patterns to sample facial information. Whether these individual differences concur with idiosyncratic face-sensitive neural responses remains unclear. To address this issue, we recorded observers' fixation patterns, as well as their neural face discrimination responses elicited during fixation of 10 different locations on the face, corresponding to different types of facial information. Our data reveal a clear interplay between individuals' face-sensitive neural responses and their idiosyncratic eye-movement patterns during identity processing, which emerges as early as the first fixation. Collectively, our findings favor the existence of idiosyncratic, rather than universal face representations.
Collapse
|
28
|
Developing attentional control in naturalistic dynamic road crossing situations. Sci Rep 2019; 9:4176. [PMID: 30862845 PMCID: PMC6414534 DOI: 10.1038/s41598-019-39737-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 01/24/2019] [Indexed: 11/09/2022] Open
Abstract
Over the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we set out to determine the critical age at which children show adult-like performance and attentional control in a visually guided task set in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged 5 to 15 while they watched road-traffic videos containing a range of traffic densities, with or without pedestrians. 5–10-year-old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children looked mainly at the vehicles’ appearing point, an optimal location for sampling diagnostic information for the task. In contrast, 5–10 y/os looked more at socially relevant stimuli and attended to moving vehicles further down the trajectory when traffic density was high. Critically, 5–10 y/o children also made more crossing decisions than 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road-crossing task.
Collapse
|
29
|
Lüthold P, Lao J, He L, Zhou X, Caldara R. Waldo reveals cultural differences in return fixations. VISUAL COGNITION 2019. [DOI: 10.1080/13506285.2018.1561567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Patrick Lüthold
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Lingnan He
- School of Communication and Design, Sun Yat-Sen University, Guangzhou, People’s Republic of China
| | - Xinyue Zhou
- School of Management, Zhejiang University, Zhejiang, People’s Republic of China
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| |
Collapse
|
30
|
Feng S, Wang X, Wang Q, Fang J, Wu Y, Yi L, Wei K. The uncanny valley effect in typically developing children and its absence in children with autism spectrum disorders. PLoS One 2018; 13:e0206343. [PMID: 30383848 PMCID: PMC6211702 DOI: 10.1371/journal.pone.0206343] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Accepted: 10/01/2018] [Indexed: 12/27/2022] Open
Abstract
Robots and virtual reality are gaining popularity in interventions for children with autism spectrum disorder (ASD). To shed light on children’s attitudes towards robots and virtual-reality characters, this study examined whether children with ASD show the uncanny valley effect. We varied the realism of facial appearance by morphing a cartoon face into a human face, and induced perceptual mismatch by enlarging the eyes, a manipulation previously shown to be effective in inducing the uncanny valley effect in adults. Children with ASD and typically developing (TD) children participated in a two-alternative forced-choice task that asked them to choose the one they liked more of the two images presented on the screen. We found that TD children showed the effect: enlarged eyes and near-human realism reduced their preference. In contrast, children with ASD did not show the uncanny valley effect. Our findings in TD children help resolve the controversy in the literature about the existence of the uncanny valley effect in young children. Meanwhile, the absence of the uncanny valley effect in children with ASD might be attributed to their reduced sensitivity to subtle changes in facial features and their limited visual experience with faces caused by diminished social motivation. Finally, our findings have practical implications for designing robots and virtual characters for interventions for children with ASD.
Collapse
Affiliation(s)
- Shuyuan Feng
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Xueqin Wang
- Department of Statistical Science, School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou, Guangdong, China
- Southern China Research Center of Statistical Science, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Qiandong Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Jing Fang
- Qingdao Autism Research Institute, Qingdao, Shandong, China
| | - Yaxue Wu
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Li Yi
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- * E-mail: (LY); (KW)
| | - Kunlin Wei
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- * E-mail: (LY); (KW)
| |
Collapse
|
31
|
Hermens F, Golubickis M, Macrae CN. Eye movements while judging faces for trustworthiness and dominance. PeerJ 2018; 6:e5702. [PMID: 30324015 PMCID: PMC6186410 DOI: 10.7717/peerj.5702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Accepted: 09/06/2018] [Indexed: 11/20/2022] Open
Abstract
Past studies examining how people judge faces for trustworthiness and dominance have suggested that they use particular facial features (e.g., mouth features for trustworthiness ratings, eyebrow and cheek features for dominance ratings) to complete the task. Here, we examine whether eye movements during the task reflect the importance of these features. We compared eye movements during trustworthiness and dominance ratings of face images under three stimulus configurations: small images (mimicking large viewing distances), large images (mimicking face-to-face viewing), and a moving-window condition (removing extrafoveal information). Whereas the first area fixated, dwell times, and number of fixations depended on the size of the stimuli and the availability of extrafoveal vision, and varied substantially across participants, no clear task differences were found. These results indicate that gaze patterns for face stimuli are highly individual, do not vary between trustworthiness and dominance ratings, but are influenced by the size of the stimuli and the availability of extrafoveal vision.
Collapse
Affiliation(s)
- Frouke Hermens
- School of Psychology, University of Lincoln, Lincoln, Lincolnshire, UK
| | | | - C. Neil Macrae
- School of Psychology, University of Aberdeen, Aberdeen, UK
| |
Collapse
|
32
|
|
33
|
Abstract
How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural-scene images, and infer an observer-related characteristic: the task at hand. We achieve an average 55.9% correct classification rate (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs allow the integration of bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gaze behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
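The classification idea in this abstract can be illustrated with a toy sketch. The snippet below (in Python, rather than the authors' Matlab toolbox) hard-codes two hypothetical class HMMs with Gaussian fixation states, scores a scanpath with the forward algorithm, and assigns it to the higher-likelihood class. The class names, state locations, and all parameters are invented for illustration, and a raw likelihood comparison stands in for the toolbox's variational training and discriminant analysis.

```python
import numpy as np

def logsumexp(v):
    # Numerically stable log(sum(exp(v))).
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def gauss_logpdf(x, mean, var):
    # Log-density of an isotropic 2-D Gaussian with covariance var * I.
    d = np.asarray(x, float) - np.asarray(mean, float)
    return -0.5 * (np.dot(d, d) / var + 2.0 * np.log(2.0 * np.pi * var))

def hmm_loglik(scanpath, pi, A, means, var):
    # Forward algorithm in log space: log p(scanpath | HMM).
    n = len(pi)
    log_a = np.log(pi) + np.array(
        [gauss_logpdf(scanpath[0], means[k], var) for k in range(n)])
    for x in scanpath[1:]:
        emit = np.array([gauss_logpdf(x, means[k], var) for k in range(n)])
        log_a = emit + np.array(
            [logsumexp(log_a + np.log(A[:, j])) for j in range(n)])
    return logsumexp(log_a)

# Two hypothetical observer classes, each summarised by a 2-state HMM whose
# states are fixation clusters in normalised face coordinates (x, y).
sticky = np.array([[0.7, 0.3], [0.3, 0.7]])  # shared transition matrix
classes = {
    "eyes-analytic":   {"means": [(0.35, 0.40), (0.65, 0.40)]},  # the two eyes
    "centre-holistic": {"means": [(0.50, 0.50), (0.50, 0.75)]},  # nose / mouth
}

def classify(scanpath, var=0.01):
    # Assign a scanpath to the class HMM with the higher log-likelihood.
    scores = {name: hmm_loglik(scanpath, np.array([0.5, 0.5]), sticky,
                               spec["means"], var)
              for name, spec in classes.items()}
    return max(scores, key=scores.get)

eye_path = [(0.36, 0.41), (0.64, 0.39), (0.34, 0.40), (0.66, 0.42)]
label = classify(eye_path)
```

In the actual pipeline, each class HMM would be learned from data, and DA would operate on the fitted per-observer model parameters rather than on raw likelihoods.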
Collapse
Affiliation(s)
| | - Janet H Hsiao
- Department of Psychology, The University of Hong Kong, Pok Fu Lam, Hong Kong
| | - Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
| |
Collapse
|
34
|
Nuthmann A, Einhäuser W, Schütz I. How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models. Front Hum Neurosci 2017; 11:491. [PMID: 29163092 PMCID: PMC5671469 DOI: 10.3389/fnhum.2017.00491] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 09/26/2017] [Indexed: 11/21/2022] Open
Abstract
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available.
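As a rough illustration of the analysis approach described above, the sketch below parcellates a scene into a grid, computes a per-cell central-bias predictor and mean saliency, and fits a logistic regression via iteratively reweighted least squares. This is a simplified stand-in, not the authors' GridFix toolbox: a full GLMM would add by-subject and by-item random effects, and the image, grid size, and fixation data here are invented.

```python
import numpy as np

def grid_features(saliency_map, fixations, n_rows=8, n_cols=8):
    # A-priori parcellation: per grid cell, compute mean model saliency,
    # a central-bias predictor (normalised distance to the image centre),
    # and whether the cell received at least one fixation (fy, fx in pixels).
    H, W = saliency_map.shape
    rows = []
    for r in range(n_rows):
        for c in range(n_cols):
            y0, y1 = r * H // n_rows, (r + 1) * H // n_rows
            x0, x1 = c * W // n_cols, (c + 1) * W // n_cols
            dist = np.hypot(((y0 + y1) / 2 - H / 2) / H,
                            ((x0 + x1) / 2 - W / 2) / W)
            sal = saliency_map[y0:y1, x0:x1].mean()
            fix = any(y0 <= fy < y1 and x0 <= fx < x1 for fy, fx in fixations)
            rows.append((sal, dist, float(fix)))
    return np.array(rows)

def logistic_irls(X, y, n_iter=25, ridge=1e-6):
    # Fixed-effects logistic regression via iteratively reweighted least
    # squares -- a stand-in for the GLMM, which would add random effects.
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p) + 1e-9
        hess = (X * w[:, None]).T @ X + ridge * np.eye(X.shape[1])
        beta += np.linalg.solve(hess, X.T @ (y - p))
    return beta  # [intercept, saliency, central-bias] coefficients

rng = np.random.default_rng(0)
sal_map = rng.uniform(size=(80, 80))           # toy saliency map
# Mostly central fixations plus one peripheral outlier, as (row, col) pixels.
fixations = [(40, 40), (35, 44), (44, 36), (38, 42), (5, 5)]
feats = grid_features(sal_map, fixations)
beta = logistic_irls(feats[:, :2], feats[:, 2])
```

With mostly central fixations, the central-bias coefficient comes out negative (cells farther from the centre are less likely to be fixated), which is exactly the bias the article argues must be modelled explicitly before crediting a saliency model.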
Collapse
Affiliation(s)
- Antje Nuthmann
- Department of Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, United Kingdom.,Perception and Cognition Group, Institute of Psychology, University of Kiel, Kiel, Germany
| | - Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
| | - Immo Schütz
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
| |
Collapse
|
35
|
Hidden Markov model analysis reveals the advantage of analytic eye movement patterns in face recognition across cultures. Cognition 2017; 169:102-117. [PMID: 28869811 DOI: 10.1016/j.cognition.2017.08.003] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2016] [Revised: 08/08/2017] [Accepted: 08/08/2017] [Indexed: 11/21/2022]
Abstract
It remains controversial whether culture modulates eye-movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye-movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye-movement patterns when viewing faces of different ethnicities. These inconsistencies may be due to substantial individual differences in eye-movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye-movement data analysis using hidden Markov models (HMMs). Each individual's eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center, with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation by culture. Significantly more participants (75%) showed similar eye-movement patterns when viewing own- and other-race faces than showed different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. These results suggest that active retrieval of facial feature information through an analytic eye-movement pattern may be optimal for face recognition regardless of culture.
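A much-simplified sketch of the clustering logic: instead of fitting full HMMs with hidden states, the snippet below summarises each observer by a smoothed ROI-to-ROI transition matrix (a degenerate HMM whose states are directly observed) and assigns observers to the nearest idealised pattern. The ROI coding, prototype sequences, and distance measure are all illustrative assumptions; the study itself clustered fitted variational HMMs by model similarity.

```python
import numpy as np

ROIS = ["left_eye", "right_eye", "nose", "mouth"]  # ROI codes 0..3

def transition_matrix(roi_seq, n=4, alpha=1.0):
    # Summarise one observer's gaze by add-alpha-smoothed ROI-to-ROI
    # transition probabilities (states are directly observed here).
    T = np.full((n, n), alpha)
    for a, b in zip(roi_seq, roi_seq[1:]):
        T[a, b] += 1.0
    return T / T.sum(axis=1, keepdims=True)

def nearest_pattern(T, prototypes):
    # Assign an observer to the closest common pattern (Frobenius distance).
    return int(np.argmin([np.linalg.norm(T - P) for P in prototypes]))

# Idealised prototypes: 'analytic' alternates between the two eyes,
# 'holistic' dwells on the face centre (nose).
prototypes = [transition_matrix([0, 1] * 20),   # 0: analytic
              transition_matrix([2] * 40)]      # 1: holistic

observer_a = transition_matrix([0, 1, 0, 1, 2, 0, 1, 0, 1, 0])
observer_b = transition_matrix([2, 2, 2, 3, 2, 2, 2, 2, 3, 2])
```

Counting how many observers in each cultural group land on each prototype then gives the pattern frequencies the abstract compares across groups.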
Collapse
|
36
|
Caldara R. Culture Reveals a Flexible System for Face Processing. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2017. [DOI: 10.1177/0963721417710036] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Nonetheless, a fundamental question remains debated: Is face processing governed by universal perceptual processes? It has long been presumed that this is the case. However, over the past decade, our work at the Eye and Brain Mapping Laboratory has called into question this widely held assumption. We have investigated the eye movements of Western and Eastern observers across various face-processing tasks to determine the effect of culture on perceptual processing. Commonalities aside, we found that Westerners distribute local fixations across the eye and mouth regions, whereas Easterners preferentially deploy central, global fixations during face recognition. Moreover, during the recognition of facial expressions of emotion, Westerners fixate the mouth relatively more to discriminate across expressions, whereas Easterners favor the eye region. Both observations demonstrate that the face system relies on different strategies to perform a range of socially relevant face-processing tasks with comparable levels of efficiency. Overall, these cultural perceptual biases challenge the view that the processes dedicated to face processing are universal, favoring instead the existence of distinct, flexible strategies. The way humans perceive the world and process faces is determined by experience and environmental factors.
Collapse
Affiliation(s)
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
| |
Collapse
|
37
|
Bovet J, Lao J, Bartholomée O, Caldara R, Raymond M. Mapping female bodily features of attractiveness. Sci Rep 2016; 6:18551. [PMID: 26791105 PMCID: PMC4726249 DOI: 10.1038/srep18551] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Accepted: 11/20/2015] [Indexed: 11/22/2022] Open
Abstract
“Beauty is bought by judgment of the eye” (Shakespeare, Love’s Labour’s Lost), but the bodily features governing this critical biological choice are still debated. Eye-movement studies have demonstrated that males sample coarse body regions around the face, the breasts, and the midriff while making female-attractiveness judgements under natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, so the visual information actually used by men is still unknown. We therefore used a parametric gaze-contingent design while males rated the attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were visible only through restricted apertures, fixations shifted strongly to the hips, potentially to extract hip width and curvature, followed by the breasts and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection.
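The gaze-contingent aperture manipulation can be sketched offline in a few lines. The function below applies a Gaussian "spotlight" around a gaze sample and replaces extra-foveal content with neutral grey; in a real experiment this mask would be recomputed on every eye-tracker sample, and the aperture shape, radius, and grey background here are illustrative assumptions rather than the study's actual parametric design.

```python
import numpy as np

def spotlight(image, gaze_xy, radius):
    # Gaze-contingent aperture: keep the image inside a Gaussian window
    # centred on the current gaze sample, fade to neutral grey outside.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy  # gaze position in (x, y) pixel coordinates
    mask = np.exp(-((xx - gx) ** 2 + (yy - gy) ** 2) / (2.0 * radius ** 2))
    background = image.mean()  # stands in for the removed extra-foveal content
    return mask * image + (1.0 - mask) * background

# Demo on a synthetic grey-level gradient, gaze at (x=10, y=32).
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
masked = spotlight(img, (10, 32), radius=6)
```

At the gaze point the image is untouched (mask = 1); far from it, the output converges to the uniform background, which is what forces fixations onto the features the observer actually needs.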
Collapse
Affiliation(s)
- Jeanne Bovet
- Institute for Advanced Study in Toulouse, Manufacture des Tabacs, 21 allée de Brienne, 31015 Toulouse Cedex 6, France.,Institute of Evolutionary Sciences, University of Montpellier, CNRS, IRD, EPHE, France
| | - Junpeng Lao
- Department of Psychology, University of Fribourg, Switzerland
| | - Océane Bartholomée
- Institute of Evolutionary Sciences, University of Montpellier, CNRS, IRD, EPHE, France
| | - Roberto Caldara
- Department of Psychology, University of Fribourg, Switzerland
| | - Michel Raymond
- Institute of Evolutionary Sciences, University of Montpellier, CNRS, IRD, EPHE, France
| |
Collapse
|