1
Yin W, Lee YC. How different face mask types affect interpersonal distance perception and threat feeling in social interaction. Cogn Process 2024; 25:477-490. [PMID: 38492094] [DOI: 10.1007/s10339-024-01179-z] [Received: 04/25/2023] [Accepted: 02/05/2024] [Indexed: 03/18/2024]
Abstract
With the easing of the pandemic, public policies no longer mandate mask wearing. People can choose not to wear a mask, or to wear different types of masks, based on personal preference and perceived safety during daily interaction. Information about how face mask type influences interpersonal distance (IPD) across different age groups is still lacking. Thus, this study investigated the effects of face mask type (no mask, cloth, medical, and N95) and avatar age group (children, adults, and older adults) on IPD perception, threat feeling, and physiological skin conductance response under active and passive approach. One hundred participants aged 20 to 35 years were recruited. Twelve avatars (three age groups × four mask conditions) were created and presented in a virtual reality environment. The results showed that age group, mask type, and approach mode had significant effects on IPD and subjective threat feeling, but not on skin conductance responses. Participants maintained a significantly longer IPD when facing older adults, followed by adults and then children. In the passive approach condition, people maintained a significantly greater comfort distance than during active approach. Regarding mask type, people kept the largest IPD when facing an avatar with no mask and the shortest when facing one with an N95 mask; the IPD difference between the N95 and medical masks was non-significant. Additionally, facing an avatar wearing a medical mask generated the lowest subjective threat feeling of all conditions. These findings indicate that wearing a medical mask can help bring people closer for interaction in specific situations.
Understanding that mask wearing, especially a medical mask, leads to a shorter IPD than the unmasked condition can be used to enhance safety measures in crowded public spaces and health-care settings. This information could guide physical distancing recommendations that take into account both mask type and the age groups involved, ensuring that appropriate distances are maintained.
Affiliation(s)
- Wenjing Yin
- School of Design, South China University of Technology, Guangzhou, China
- Yu-Chi Lee
- Department of Industrial Engineering and Management, National Taipei University of Technology, 1, Sec. 3, Zhongxiao E. Rd., Taipei, 10608, Taiwan.
2
Ghiani A, Amelink D, Brenner E, Hooge ITC, Hessels RS. When knowing the activity is not enough to predict gaze. J Vis 2024; 24:6. [PMID: 38984899] [PMCID: PMC11238878] [DOI: 10.1167/jov.24.7.6] [Received: 10/30/2023] [Accepted: 05/31/2024] [Indexed: 07/11/2024]
Abstract
It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.
Affiliation(s)
- Andrea Ghiani
- Department of Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Daan Amelink
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Eli Brenner
- Department of Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Ignace T C Hooge
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Roy S Hessels
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
3
Jia SJ, Jing JQ, Yang CJ. A Review on Autism Spectrum Disorder Screening by Artificial Intelligence Methods. J Autism Dev Disord 2024:10.1007/s10803-024-06429-9. [PMID: 38842671] [DOI: 10.1007/s10803-024-06429-9] [Accepted: 05/30/2024] [Indexed: 06/07/2024]
Abstract
PURPOSE With the increasing prevalence of autism spectrum disorder (ASD), the importance of early screening and diagnosis has been the subject of considerable discussion. Given the subtle differences between children with ASD and typically developing children during the early stages of development, it is imperative to investigate automatic recognition methods powered by artificial intelligence. We aim to summarize the research on this topic and catalogue the markers that can be used for identification. METHODS We searched papers published in the Web of Science, PubMed, Scopus, Medline, SpringerLink, Wiley Online Library, and EBSCO databases from 1 January 2013 to 13 November 2023; 43 articles were included. RESULTS These articles mainly divided recognition markers into five categories: gaze behaviors, facial expressions, motor movements, voice features, and task performance. Based on these markers, the accuracy of artificial intelligence screening ranged from 62.13 to 100%, sensitivity from 69.67 to 100%, and specificity from 54 to 100%. CONCLUSION Artificial intelligence recognition therefore holds promise as a tool for identifying children with ASD. However, the screening models still need to be continually enhanced, and their accuracy improved through multimodal screening, to facilitate timely intervention and treatment.
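The accuracy, sensitivity, and specificity ranges summarized in this review all derive from a binary confusion matrix. As a minimal illustration (the counts below are invented example values, not data from any cited study):

```python
# Illustrative only: how accuracy, sensitivity, and specificity are defined
# for a binary ASD screening classifier. The counts are made-up examples.

def screening_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # proportion of ASD cases correctly flagged
    specificity = tn / (tn + fp)   # proportion of non-ASD cases correctly passed
    return accuracy, sensitivity, specificity

acc, sens, spec = screening_metrics(tp=45, fp=8, tn=42, fn=5)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```

Note that a wide specificity range (here, as low as 54% in the reviewed studies) matters in practice: at low prevalence, even a sensitive screener with poor specificity produces many false positives.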
Affiliation(s)
- Si-Jia Jia
- Faculty of Education, East China Normal University, Shanghai, China
- Jia-Qi Jing
- Faculty of Education, East China Normal University, Shanghai, China
- Chang-Jiang Yang
- Faculty of Education, East China Normal University, Shanghai, China.
- China Research Institute of Care and Education of Infants and Young, Shanghai, China.
4
Dolinski D, Grzyb T. Obedience to authority as a function of the physical proximity of the student, teacher, and experimenter. J Soc Psychol 2024:1-13. [PMID: 38696401] [DOI: 10.1080/00224545.2024.2348479] [Received: 09/21/2023] [Accepted: 04/18/2024] [Indexed: 05/04/2024]
Abstract
The authors propose a theoretical model explaining the behavior of individuals tested in experiments on obedience to authority conducted according to Milgram's paradigm. Their assumption is that the participant faces a typical avoidance-avoidance conflict: the participant does not want to hurt the learner in the adjacent room, but also does not want to harm the experimenter. The resolution of this conflict, which entails hurting one of the two, may differ depending on the spatial organization of the experiment. In the study, the experimental conditions were modified so that the participant was (vs. was not) in the same room as the experimenter and was (vs. was not) in the same room as the learner. Forty individuals (20 women and 20 men) were tested in each of the four experimental conditions. It turned out that the physical presence of the experimenter was conducive to obedience, while the physical presence of the learner reduced it.
5
Zhou S, Sun Y, Zhao Y, Jiang T, Yang H, Li S. I prefer what you can see: The role of visual perspective-taking on the gaze-liking effect. Heliyon 2024; 10:e29615. [PMID: 38681601] [PMCID: PMC11046107] [DOI: 10.1016/j.heliyon.2024.e29615] [Received: 06/08/2023] [Revised: 03/30/2024] [Accepted: 04/10/2024] [Indexed: 05/01/2024]
Abstract
Individuals' gaze on an object usually leads others to prefer that object, a phenomenon called the gaze-liking effect. However, it is still unclear whether this effect is driven by social factors (i.e., visual perspective-taking) or by domain-general processing (i.e., attention cueing). This research explored the mechanism of the gaze-liking effect by manipulating objects' visibility to an avatar in six online one-shot experiments. The results showed that participants' affective evaluation of the object was modulated by the avatar's visual perspective: objects visible to the avatar received higher liking ratings. However, when the avatar was replaced with a non-social stimulus, the effect was absent. Furthermore, the gaze-liking effect remained robust when controlling for confounding factors such as the distance between the object and the avatar or the type of stimuli. These findings provide convincing evidence that the gaze-liking effect involves processing of the other's visual experience and is not merely a by-product of the gaze-cueing effect.
Affiliation(s)
- Song Zhou
- School of Psychology, Fujian Normal University, Fuzhou, China
- Yan Zhao
- School of Psychology, Fujian Normal University, Fuzhou, China
- Tao Jiang
- Research Center for Regional and National Comparative Diplomacy, China Foreign Affairs University, Beijing, China
- Huaqi Yang
- School of Psychology, Fujian Normal University, Fuzhou, China
- Sha Li
- School of Psychology, Fujian Normal University, Fuzhou, China
6
Kingstone A, Walker E, Amin S, Bischof WF. Eyes meet, hands greet: The art of timing in social interactions. Perception 2024; 53:287-290. [PMID: 38173337] [PMCID: PMC10960310] [DOI: 10.1177/03010066231223440] [Received: 09/05/2023] [Accepted: 12/12/2023] [Indexed: 01/05/2024]
Abstract
Shaking hands is a fundamental form of social interaction. The current study used high-definition cameras during a university graduation ceremony to examine the temporal sequencing of eye contact and shaking hands. Analyses revealed that mutual gaze always preceded shaking hands. A follow-up investigation manipulated gaze when shaking hands and found that participants take significantly longer to accept a handshake when an outstretched hand precedes eye contact. These findings demonstrate that the timing between a person's gaze and their offer to shake hands is critical to how their action is interpreted.
7
Valtakari NV, Hessels RS, Niehorster DC, Viktorsson C, Nyström P, Falck-Ytter T, Kemner C, Hooge ITC. A field test of computer-vision-based gaze estimation in psychology. Behav Res Methods 2024; 56:1900-1915. [PMID: 37101100] [PMCID: PMC10990994] [DOI: 10.3758/s13428-023-02125-1] [Accepted: 04/07/2023] [Indexed: 04/28/2023]
Abstract
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
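The claim that a method is "accurate enough for stimuli separated by at least 11 degrees" rests on comparing estimated fixation points against known target positions in angular units. A hedged sketch of that comparison, assuming estimates have already been mapped to screen coordinates (the points and viewing distance below are invented, not values from the study):

```python
import math

# Hypothetical evaluation sketch: compare fixation-point estimates from a
# calibration-free gaze estimator (e.g., OpenFace or OpenGaze output mapped
# to screen coordinates) against known stimulus positions. Coordinates and
# viewing distance are invented example values.

def mean_angular_error(estimates, targets, distance_cm):
    """Mean angular error (degrees) between estimated and true fixations.

    Points are (x, y) screen positions in cm; the eye is assumed to sit
    distance_cm straight in front of the screen.
    """
    errors = []
    for (ex, ey), (tx, ty) in zip(estimates, targets):
        offset_cm = math.hypot(ex - tx, ey - ty)   # on-screen error in cm
        errors.append(math.degrees(math.atan2(offset_cm, distance_cm)))
    return sum(errors) / len(errors)

# Two targets, estimates off by 1 cm and 2 cm, viewed from 60 cm:
targets = [(0.0, 0.0), (10.0, 0.0)]
estimates = [(1.0, 0.0), (12.0, 0.0)]
print(f"{mean_angular_error(estimates, targets, 60.0):.2f} deg")
```

If the mean angular error approaches half the separation between areas of interest, fixations can no longer be reliably assigned to one stimulus, which is the logic behind a minimum-separation recommendation.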
Affiliation(s)
- Niilo V Valtakari
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands.
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Charlotte Viktorsson
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Pär Nyström
- Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Terje Falck-Ytter
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Karolinska Institutet Center of Neurodevelopmental Disorders (KIND), Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Chantal Kemner
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
8
Landmann E, Breil C, Huestegge L, Böckler A. The semantics of gaze in person perception: a novel qualitative-quantitative approach. Sci Rep 2024; 14:893. [PMID: 38195808] [PMCID: PMC10776783] [DOI: 10.1038/s41598-024-51331-0] [Received: 05/15/2023] [Accepted: 12/31/2023] [Indexed: 01/11/2024]
Abstract
Interpreting gaze behavior is essential in evaluating interaction partners, yet the 'semantics of gaze' in dynamic interactions are still poorly understood. We aimed to comprehensively investigate effects of gaze behavior patterns in different conversation contexts, using a two-step, qualitative-quantitative procedure. Participants watched video clips of single persons listening to autobiographic narrations by another (invisible) person. The listener's gaze behavior was manipulated in terms of gaze direction, frequency and direction of gaze shifts, and blink frequency; emotional context was manipulated through the valence of the narration (neutral/negative). In Experiment 1 (qualitative-exploratory), participants freely described which states and traits they attributed to the listener in each condition, allowing us to identify relevant aspects of person perception and to construct distinct rating scales that were implemented in Experiment 2 (quantitative-confirmatory). Results revealed systematic and differential meanings ascribed to the listener's gaze behavior. For example, rapid blinking and fast gaze shifts were rated more negatively (e.g., restless and unnatural) than slower gaze behavior; downward gaze was evaluated more favorably (e.g., empathetic) than other gaze aversion types, especially in the emotionally negative context. Overall, our study contributes to a more systematic understanding of flexible gaze semantics in social interaction.
Affiliation(s)
- Eva Landmann
- Department of Psychology, Julius-Maximilians-Universität Würzburg (JMU), 97070, Würzburg, Germany.
- Christina Breil
- Department of Psychology, Julius-Maximilians-Universität Würzburg (JMU), 97070, Würzburg, Germany
- Lynn Huestegge
- Department of Psychology, Julius-Maximilians-Universität Würzburg (JMU), 97070, Würzburg, Germany
- Anne Böckler
- Department of Psychology, Julius-Maximilians-Universität Würzburg (JMU), 97070, Würzburg, Germany
9
Portugal AM, Viktorsson C, Taylor MJ, Mason L, Tammimies K, Ronald A, Falck-Ytter T. Infants' looking preferences for social versus non-social objects reflect genetic variation. Nat Hum Behav 2024; 8:115-124. [PMID: 38012276] [PMCID: PMC10810753] [DOI: 10.1038/s41562-023-01764-w] [Received: 02/21/2022] [Accepted: 10/12/2023] [Indexed: 11/29/2023]
Abstract
To what extent do individual differences in infants' early preference for faces versus non-facial objects reflect genetic and environmental factors? Here, in a sample of 536 5-month-old same-sex twins, we assessed attention to faces using eye tracking in two ways: initial orienting to faces at the start of the trial (thought to reflect subcortical processing) and sustained face preference throughout the trial (thought to reflect emerging attention control). Twin model fitting suggested an influence of genetic and unique environmental effects, but there was no evidence for an effect of shared environment. The heritability of face orienting and face preference were 0.19 (95% confidence interval (CI) 0.04 to 0.33) and 0.46 (95% CI 0.33 to 0.57), respectively. Face preference was associated positively with later parent-reported verbal competence (β = 0.14, 95% CI 0.03 to 0.25, P = 0.014, R2 = 0.018, N = 420). This study suggests that individual differences in young infants' selection of perceptual input (social versus non-social) are heritable, providing a developmental perspective on gene-environment interplay occurring at the level of eye movements.
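Heritability estimates like the 0.46 above come from structural equation (ACE) model fitting across monozygotic (MZ) and dizygotic (DZ) twin pairs. As a back-of-the-envelope illustration of the same logic, Falconer's classical formulas recover the variance components directly from the two twin correlations (the correlations below are invented example values, not the study's data):

```python
# Falconer's shortcut for ACE variance components from twin correlations.
# Real analyses (as in the paper above) fit full ACE structural equation
# models with confidence intervals; this is the textbook approximation.

def falconer_ace(r_mz, r_dz):
    """Estimate (A, C, E) variance components from MZ and DZ twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared-environment variance
    e2 = 1 - r_mz            # E: unique environment + measurement error
    return a2, c2, e2

# E.g., MZ twins correlating 0.46 and DZ twins 0.23 on a preference score:
a2, c2, e2 = falconer_ace(r_mz=0.46, r_dz=0.23)
print(f"h^2={a2:.2f}, c^2={c2:.2f}, e^2={e2:.2f}")
```

With r_MZ exactly double r_DZ, the shared-environment estimate is zero, which mirrors the paper's finding of genetic plus unique environmental effects but no shared-environment effect.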
Affiliation(s)
- Ana Maria Portugal
- Development and Neurodiversity Lab (DIVE), Department of Psychology, Uppsala University, Uppsala, Sweden.
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Department of Women's and Children's Health, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden.
- Charlotte Viktorsson
- Development and Neurodiversity Lab (DIVE), Department of Psychology, Uppsala University, Uppsala, Sweden
- Mark J Taylor
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Luke Mason
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Kristiina Tammimies
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Department of Women's and Children's Health, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden
- Astrid Lindgren Children's Hospital, Karolinska University Hospital, Stockholm, Sweden
- Angelica Ronald
- School of Psychology, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Terje Falck-Ytter
- Development and Neurodiversity Lab (DIVE), Department of Psychology, Uppsala University, Uppsala, Sweden.
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Department of Women's and Children's Health, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden.
- Swedish Collegium for Advanced Study, Uppsala, Sweden.
10
Shiell MM, Høy-Christensen J, Skoglund MA, Keidser G, Zaar J, Rotger-Griful S. Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation. J Speech Lang Hear Res 2023; 66:4575-4589. [PMID: 37850878] [DOI: 10.1044/2023_jslhr-22-00641] [Indexed: 10/19/2023]
Abstract
PURPOSE There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential gaze-analysis method for this purpose and use it to answer the question of when, and by how much, listeners with hearing loss look toward a new talker in a conversation. METHOD Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested whether these events predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. RESULTS We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence transition was predicted by time: the odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. CONCLUSIONS MLR modeling of eye gaze during talker transitions is a promising approach to studying a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
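The "s-shaped curve of odds over time" described in the results is the signature of a logistic regression with time as a predictor. A minimal sketch of that shape, with invented intercept and slope (the actual study fitted multilevel models with random effects for listeners and conversation events):

```python
import math

# Illustrative logistic model of P(gaze at new talker) around the new
# talker's speech onset (t = 0 s). The coefficients are invented; they are
# not estimates from the study above.

def p_gaze_new_talker(t, intercept=-0.5, slope=2.0):
    """Modeled probability of gazing at the new talker at time t (seconds)."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * t)))

def odds(p):
    """Convert a probability to odds, the scale reported in MLR analyses."""
    return p / (1.0 - p)

# Probability and odds rise smoothly across the window the study examined,
# from 0.4 s before speech onset to 1 s after:
for t in (-0.4, 0.0, 0.5, 1.0):
    p = p_gaze_new_talker(t)
    print(f"t={t:+.1f}s  p={p:.2f}  odds={odds(p):.2f}")
```

On the log-odds scale this model is a straight line, log(odds) = intercept + slope × t, which is why probabilities trace an s-shaped curve over the transition.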
Affiliation(s)
- Martin A Skoglund
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Division of Automatic Control, Department of Electrical Engineering, The Institute of Technology, Linköping University, Sweden
- Gitte Keidser
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Sweden
- Johannes Zaar
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Kongens Lyngby
11
Tsikandilakis M, Bali P. Learning emotional dialects: A British population study of cross-cultural communication. Perception 2023; 52:812-843. [PMID: 37796849] [PMCID: PMC10634218] [DOI: 10.1177/03010066231204180] [Received: 02/17/2023] [Accepted: 09/12/2023] [Indexed: 10/07/2023]
Abstract
The aim of the current research was to explore whether we can improve the recognition of cross-cultural freely-expressed emotional faces in British participants. We tested several methods for improving the recognition of freely-expressed emotional faces, such as different methods for presenting other-culture expressions of emotion from individuals from Chile, New Zealand and Singapore in two experimental stages. In the first experimental stage, in phase one, participants were asked to identify the emotion of cross-cultural freely-expressed faces. In the second phase, different cohorts were presented with interactive side-by-side, back-to-back and dynamic morphing of cross-cultural freely-expressed emotional faces, and control conditions. In the final phase, we repeated phase one using novel stimuli. We found that all non-control conditions led to recognition improvements. Morphing was the most effective condition for improving the recognition of cross-cultural emotional faces. In the second experimental stage, we presented morphing to different cohorts including own-to-other and other-to-own freely-expressed cross-cultural emotional faces and neutral-to-emotional and emotional-to-neutral other-culture freely-expressed emotional faces. All conditions led to recognition improvements and the presentation of freely-expressed own-to-other cultural-emotional faces provided the most effective learning. These findings suggest that training can improve the recognition of cross-cultural freely-expressed emotional expressions.
12
Itier RJ, Durston AJ. Mass-univariate analysis of scalp ERPs reveals large effects of gaze fixation location during face processing that only weakly interact with face emotional expression. Sci Rep 2023; 13:17022. [PMID: 37813928] [PMCID: PMC10562468] [DOI: 10.1038/s41598-023-44355-5] [Received: 02/02/2023] [Accepted: 10/06/2023] [Indexed: 10/11/2023]
Abstract
Decoding others' facial expressions is critical for social functioning. To clarify the neural correlates of expression perception depending on where we look on the face, three combined gaze-contingent ERP experiments were analyzed using robust mass-univariate statistics. Regardless of task, fixation location impacted face processing from 50 to 350 ms, maximally around 120 ms, reflecting retinotopic mapping around the C2 and P1 components. Fixation location also strongly impacted the N170-P2 interval, while only weak effects were seen at the face-sensitive N170 peak. These results question the widespread assumption that faces are processed holistically into an indecomposable perceptual whole around the N170. Rather, face processing is a complex and view-dependent process that continues well beyond the N170. Expression and fixation location interacted weakly during the P1-N170 interval, supporting a role for the mouth and left eye in decoding fearful and happy expressions. Expression effects were weakest at the N170 peak but strongest around P2, especially for fear, reflecting task-independent affective processing. The results suggest the N170 reflects a transition between processes rather than the maximum of a holistic face-processing stage. Focus on this peak should be replaced by data-driven analyses of the epoch using robust statistics to fully unravel the early visual processing of faces and their affective content.
Affiliation(s)
- Roxane J Itier
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada.
- Amie J Durston
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
13
Troje NF. Depth from motion parallax: Deictic consistency, eye contact, and a serious problem with Zoom. J Vis 2023; 23:1. [PMID: 37656465] [PMCID: PMC10479236] [DOI: 10.1167/jov.23.10.1] [Received: 03/31/2023] [Accepted: 07/24/2023] [Indexed: 09/02/2023]
Abstract
The dynamics of head and eye gaze between two or more individuals during verbal and nonverbal face-to-face communication contain a wealth of information and are used for both volitional and unconscious signaling. Current video communication systems convey visual signals about gaze behavior and other directional cues, but the information they carry is often spurious and potentially misleading. I discuss the consequences of this situation, identify the source of the problem as a more general lack of deictic consistency, and demonstrate that display technologies that simulate motion parallax are both necessary and sufficient to alleviate it. I then devise an avatar-based remote communication solution that achieves deictic consistency and provides natural, dynamic eye contact for computer-mediated audiovisual communication.
Affiliation(s)
- Nikolaus F Troje
- Centre for Vision Research and Department of Biology, York University, Toronto, Ontario, Canada
14
Romero V, Paxton A. Stage 2: Visual information and communication context as modulators of interpersonal coordination in face-to-face and videoconference-based interactions. Acta Psychol (Amst) 2023; 239:103992. [PMID: 37536011] [DOI: 10.1016/j.actpsy.2023.103992] [Received: 02/14/2023] [Revised: 06/23/2023] [Accepted: 07/21/2023] [Indexed: 08/05/2023]
Abstract
Interpersonal coordination of body movement (similarity in the patterning and timing of body movement between interaction partners) is well documented in face-to-face (FTF) conversation. Here, we investigated the degree to which interpersonal coordination is impacted by the amount of visual information available and the type of interaction conversation partners are having. To do so within a naturalistic context, we took advantage of the increased familiarity with videoconferencing (VC) platforms and with limited visual information in FTF conversation due to the COVID-19 pandemic. Pairs of participants communicated in one of three ways: FTF in a laboratory setting while socially distanced and wearing face masks; VC in a laboratory setting with a view of one another's full movements; or VC in a remote setting with a view of one another's face and shoulders. Each pair held three conversations: affiliative, argumentative, and cooperative task-based. We quantified interpersonal coordination as the relationship between the two participants' overall body movement using nonlinear time series analyses. Coordination changed as a function of the contextual constraints, and these constraints interacted with coordination patterns to affect subjective conversation outcomes. Importantly, we found patterns of results that were distinct from previous research; we hypothesize that these differences may be due to changes in the broader social context from COVID-19. Taken together, our results are consistent with a dynamical systems view of social phenomena, with interpersonal coordination emerging from the interaction between components, constraints, and history of the system.
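The study quantified coordination with nonlinear time series analyses; as a deliberately simpler stand-in for intuition, movement similarity between two partners can be measured as the peak lagged cross-correlation of their frame-by-frame movement series (the series below are invented):

```python
import math

# Simplified stand-in for coordination measures: maximum lagged Pearson
# cross-correlation between two (invented) body-movement time series.
# The paper itself used nonlinear methods, not this linear measure.

def cross_correlation(a, b, lag):
    """Pearson correlation between a[t] and b[t + lag] over their overlap."""
    if lag >= 0:
        x, y = a[:len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[:len(b) + lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / math.sqrt(vx * vy)

def max_lagged_correlation(a, b, max_lag=3):
    """Peak coordination across leads/lags of up to max_lag frames."""
    return max(cross_correlation(a, b, lag)
               for lag in range(-max_lag, max_lag + 1))

# Partner B mirrors partner A with a one-frame delay, so the peak is at lag 1:
a = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
b = [0, 0, 1, 2, 3, 2, 1, 0, 1, 2]
print(max_lagged_correlation(a, b))
```

Scanning over lags matters because coordination is rarely simultaneous: one partner typically leads and the other follows a moment later.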
Affiliation(s)
- Veronica Romero
- Psychology Department, Colby College, Waterville, ME, USA; Davis Institute for Artificial Intelligence, Colby College, Waterville, ME, USA.
- Alexandra Paxton
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Center for the Ecological Study of Perception and Action, University of Connecticut, Storrs, CT, USA
15
Viktorsson C, Valtakari NV, Falck-Ytter T, Hooge ITC, Rudling M, Hessels RS. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. [PMID: 37553414] [PMCID: PMC10409748] [DOI: 10.1038/s41598-023-40017-8]
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
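The eye-mouth-index above reduces to a one-line ratio over dwell times. A minimal sketch, assuming the EMI is the share of combined eye-and-mouth looking time spent on the eyes; the study's exact formulation may differ:

```python
def eye_mouth_index(t_eyes: float, t_mouth: float) -> float:
    """Share of combined eye+mouth dwell time spent on the eyes.

    1.0 means only the eyes were looked at, 0.0 only the mouth.
    Assumed formulation for illustration; the paper may define
    the index with a different normalization.
    """
    total = t_eyes + t_mouth
    if total <= 0:
        raise ValueError("no dwell time recorded on eyes or mouth")
    return t_eyes / total
```

For example, 3 s of looking at the eyes and 1 s at the mouth yields an EMI of 0.75; a phase with more eye contact pushes the value toward 1.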
Affiliation(s)
- Charlotte Viktorsson
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Niilo V Valtakari
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Terje Falck-Ytter
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Center of Neurodevelopmental Disorders (KIND), Division of Neuropsychiatry, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Maja Rudling
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands

16
Reuscher TF, Toreini P, Maedche A. The State of the Art of Diagnostic Multiparty Eye Tracking in Synchronous Computer-Mediated Collaboration. J Eye Mov Res 2023; 16:10.16910/jemr.16.2.4. [PMID: 38046524] [PMCID: PMC10690675] [DOI: 10.16910/jemr.16.2.4]
Abstract
In recent years, innovative multiparty eye tracking setups have been introduced to synchronously capture eye movements of multiple individuals engaged in computer-mediated collaboration. Despite its great potential for studying cognitive processes within groups, the method was primarily used as an interactive tool to enable and evaluate shared gaze visualizations in remote interaction. We conducted a systematic literature review to provide a comprehensive overview of what to consider when using multiparty eye tracking as a diagnostic method in experiments and how to process the collected data to compute and analyze group-level metrics. By synthesizing our findings in an integrative conceptual framework, we identified fundamental requirements for a meaningful implementation. In addition, we derived several implications for future research, as multiparty eye tracking was mainly used to study the correlation between joint attention and task performance in dyadic interaction. We found multidimensional recurrence quantification analysis, a novel method to quantify group-level dynamics in physiological data, to be a promising procedure for addressing some of the highlighted research gaps. In particular, the computation method enables scholars to investigate more complex cognitive processes within larger groups, as it scales up to multiple data streams.
17
Hermans KSFM, Kirtley OJ, Kasanova Z, Achterhof R, Hagemann N, Hiekkaranta AP, Lecei A, Zapata-Fonseca L, Lafit G, Fossion R, Froese T, Myin-Germeys I. Ecological and Convergent Validity of Experimentally and Dynamically Assessed Capacity for Social Contingency Detection Using the Perceptual Crossing Experiment in Adolescence. Assessment 2023; 30:1109-1124. [PMID: 35373600] [DOI: 10.1177/10731911221083613]
Abstract
The Perceptual Crossing Experiment (PCE) captures the capacity for social contingency detection using real-time social interaction dynamics but has not been externally validated. We tested ecological and convergent validity of the PCE in a sample of 208 adolescents from the general population, aged 11 to 19 years. We expected associations between PCE performance and (a) quantity and quality of social interaction in daily life, using Experience Sampling Methodology (ESM; ecological validity) and (b) self-reported social skills using a questionnaire (convergent validity). We also expected PCE performance to better explain variance in ESM social measures than self-reported social skills. Multilevel analyses showed that only self-reported social skills were positively associated with social experience of company in daily life. These initial results do not support ecological and convergent validity of the PCE. However, fueled by novel insights regarding the complexity of capturing social dynamics, we identified promising methodological advances for future validation efforts.
Affiliation(s)
- Karlijn S F M Hermans
- KU Leuven, Belgium
- Karlijn S. F. M. Hermans is now affiliated with: Developmental and Educational Psychology, Faculty of Behavioral and Social Sciences, Leiden University, Leiden, The Netherlands; Department of Psychology, Education and Child Studies, Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Ruben Fossion
- National Autonomous University of Mexico, Mexico City
- Tom Froese
- Okinawa Institute of Science and Technology Graduate University, Japan

18
Troje NF. Zoom disrupts eye contact behaviour: problems and solutions. Trends Cogn Sci 2023; 27:417-419. [PMID: 37003879] [DOI: 10.1016/j.tics.2023.02.004]
Abstract
Natural, dynamic eye contact behaviour is critical to social interaction but is dysfunctional in video conferencing. In analysing the problem, I introduce the concept of directionality and emphasize the critical role of motion parallax. I then sketch approaches towards re-establishing directionality and enabling natural, dynamic eye contact in video conferences.
Affiliation(s)
- Nikolaus F Troje
- Centre for Vision Research and Department of Biology, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada

19
When Attentional and Politeness Demands Clash: The Case of Mutual Gaze Avoidance and Chin Pointing in Quiahije Chatino. J Nonverbal Behav 2023. [DOI: 10.1007/s10919-022-00423-4]
Abstract
Pointing with the chin is a practice attested worldwide: it is an effective and highly recognizable device for re-orienting the attention of the addressee. For the chin point to be observed, the addressee must attend carefully to the movements of the sender's head. This demand comes into conflict with the politeness norms of many cultures, since these often require conversationalists to avoid meeting the gaze of their interlocutor, and can require them to look away from their interlocutor's face and head. In this paper we explore how the chin point is successfully used in just such a culture, among the Chatino indigenous group of Oaxaca, Mexico. We analyze interactions between multiple dyads of Chatino speakers, examining how senders invite visual attention to the pointing gesture, and how addressees signal that attention, while both participants avoid stretches of mutual gaze. We find that in the Chatino context, the senior (or higher-status) party to the conversation is highly consistent in training their gaze away from their interlocutor. This allows their interlocutor to give visual attention to their face without the risk of meeting the gaze of a higher-status sender, and facilitates close attention to head movements including the chin point. Abstracts in Spanish and Quiahije Chatino are published as appendices.
20
Looking at faces in the wild. Sci Rep 2023; 13:783. [PMID: 36646709] [PMCID: PMC9842722] [DOI: 10.1038/s41598-022-25268-1]
Abstract
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings but this approach has been limited by manual coding of fixations. Here we introduce an automatic 'dynamic region of interest' approach that registers eye-fixations to bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to faces of passersby, contrasting with prior screen-based studies that suggest faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals' social attention, and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention 'in the wild' that opens new avenues for theoretical, applied and clinical research.
21
Cuello Mejía DA, Sumioka H, Ishiguro H, Shiomi M. Evaluating gaze behaviors as pre-touch reactions for virtual agents. Front Psychol 2023; 14:1129677. [PMID: 36949918] [PMCID: PMC10026528] [DOI: 10.3389/fpsyg.2023.1129677]
Abstract
Background: Reaction behaviors by human-looking agents to nonverbal communication cues significantly affect how the agents are perceived, as well as how they directly affect interactions. Some studies have evaluated such reactions across several kinds of interaction, but few have addressed before-touch situations and how the agent's reaction is perceived. In particular, it has not been examined how pre-touch reactions impact the interaction, how gaze behavior operates in a before-touch context, and how it conditions participants' perceptions and preferences. The present study investigated the factors that define pre-touch reactions in a humanoid avatar in a virtual reality environment and how they influence people's perceptions of the avatars.
Methods: We performed two experiments to assess the differences between approaches from inside and outside the field of view (FoV) and implemented four gaze behaviors: face-looking, hand-looking, face-then-hand looking, and hand-then-face looking. We also evaluated the participants' preferences based on perceived human-likeness, naturalness, and likeability. Experiment 1 evaluated the number of steps in the gaze behavior, the order of the gaze steps, and gender; Experiment 2 evaluated the number and order of the gaze steps.
Results: A two-step gaze behavior was perceived as more human-like and more natural for approaches from both inside and outside the field of view, and when only a one-step gaze movement was defined, a face-first looking behavior was preferred over a hand-first looking behavior from inside the field of view. Regarding the approach location, our results show that a relatively complex gaze movement that includes a face-looking behavior is fundamental for improving perceptions of agents in before-touch situations.
Discussion: Including gaze behavior as part of a possible touch interaction is helpful for developing more responsive avatars and provides another communication channel for increasing immersion and enhancing the experience in virtual reality environments, extending the frontiers of haptic interaction and complementing already-studied nonverbal communication cues.
Affiliation(s)
- Dario Alfonso Cuello Mejía
- Interaction Science Laboratories, ATR, Kyoto, Japan
- Intelligent Robotics Laboratory, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Suita, Osaka, Japan
- Hiroshi Ishiguro
- Intelligent Robotics Laboratory, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Suita, Osaka, Japan

22
Jhan XD, Wong SK, Ebrahimi E, Lai Y, Huang WC, Babu SV. Effects of Small Talk With a Crowd of Virtual Humans on Users' Emotional and Behavioral Responses. IEEE Trans Vis Comput Graph 2022; 28:3767-3777. [PMID: 36049003] [DOI: 10.1109/tvcg.2022.3203107]
Abstract
In this contribution, we empirically investigated the effect of small talk on users' non-verbal behaviors and emotions when they interacted with a crowd of virtual humans (VHs) with positive behavioral dispositions. Users were tasked with collecting items in a virtual marketplace via natural speech-based dialogue with a crowd of virtual pedestrians and vendors. Users could engage in natural speech-based conversation drawn from a predefined corpus of small talk content covering commonplace topics such as the weather, general concerns, and entertainment, based on similar real-life situations. For instance, the VHs with the small talk ability would ask users simple questions to make small talk or remind them of their belongings. We conducted a between-subjects empirical evaluation to investigate whether user behaviors and emotions differed between a small talk condition and a non-small talk condition, and examined gender effects among the participants. We collected objective and subjective measures to analyze users' emotions and social interaction behaviors when in conversation with VHs that either possessed small talk capability or not, in addition to task- or goal-oriented dialogue capabilities. Our results revealed that the VHs with small talk capability could alter users' emotions and non-verbal behaviors. Furthermore, the non-verbal behaviors of female and male participants differed greatly in the presence or absence of small talk.
23
Vaitonytė J, Alimardani M, Louwerse MM. Corneal reflections and skin contrast yield better memory of human and virtual faces. Cogn Res Princ Implic 2022; 7:94. [PMID: 36258062] [PMCID: PMC9579222] [DOI: 10.1186/s41235-022-00445-y]
Abstract
Virtual faces have been found to be rated less human-like and remembered worse than photographic images of humans. What it is in virtual faces that yields reduced memory has so far remained unclear. The current study investigated face memory in the context of virtual agent faces and human faces, real and manipulated, considering two factors of predicted influence, i.e., corneal reflections and skin contrast. Corneal reflections referred to the bright points in each eye that occur when the ambient light reflects from the surface of the cornea. Skin contrast referred to the degree to which skin surface is rough versus smooth. We conducted two memory experiments, one with high-quality virtual agent faces (Experiment 1) and the other with the photographs of human faces that were manipulated (Experiment 2). Experiment 1 showed better memory for virtual faces with increased corneal reflections and skin contrast (rougher rather than smoother skin). Experiment 2 replicated these findings, showing that removing the corneal reflections and smoothening the skin reduced memory recognition of manipulated faces, with a stronger effect exerted by the eyes than the skin. This study highlights specific features of the eyes and skin that can help explain memory discrepancies between real and virtual faces and in turn elucidates the factors that play a role in the cognitive processing of faces.
Affiliation(s)
- Julija Vaitonytė
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Maryam Alimardani
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Max M. Louwerse
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands

24
Eye contact avoidance in crowds: A large wearable eye-tracking study. Atten Percept Psychophys 2022; 84:2623-2640. [PMID: 35996058] [PMCID: PMC9630249] [DOI: 10.3758/s13414-022-02541-z]
Abstract
Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behavior. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.
25
A Study of Eye-Tracking Gaze Point Classification and Application Based on Conditional Random Field. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136462]
Abstract
Head-mounted eye-tracking is often used to manipulate the motion of a servo platform in remote tasks, so as to achieve visual aiming of the platform, a highly integrated form of human-computer interaction. However, accurate manipulation is difficult because the meanings of gaze points in eye-tracking are uncertain. To solve this problem, a method of classifying gaze points based on a conditional random field is proposed. It first describes the features of gaze points and gaze images according to visual characteristics of the eye. An LSTM model is then introduced to merge these two features. Afterwards, the merged features are learned by a CRF model to obtain the classified gaze points. Finally, the meaning of each gaze point is classified with respect to the target, in order to accurately manipulate the servo platform. The experimental results show that the proposed method classifies target gaze points more accurately over 100 images, with average evaluation values Precision = 86.81%, Recall = 86.79%, We = 86.79%; these are better than those of related methods. In addition, isolated gaze points can be eliminated, and the meanings of gaze points can be classified to achieve accurate visual aiming of the servo platform.
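The pipeline described above feeds merged gaze and image features into a CRF to label each gaze point. As a minimal illustration of the inference step only (not the authors' implementation; the feature extraction and LSTM merger are omitted, and the scores below are hypothetical), a linear-chain CRF selects the best label sequence by Viterbi decoding over emission and transition scores:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most likely label sequence for a linear-chain CRF.

    emissions: (T, K) per-step label scores (e.g. from a feature merger)
    transitions: (K, K) score of moving from label i to label j
    Returns the highest-scoring sequence of T label indices.
    """
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each label
    back = np.zeros((T, K), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        # total[i, j]: best score of reaching label j at step t via label i
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # trace the best path backwards from the best final label
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Training would fit the emission and transition parameters from labeled sequences; libraries such as sklearn-crfsuite provide full linear-chain CRF implementations.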
26
Effect of Surgical versus Nonsurgical Rhinoplasty on Perception of the Patient. Facial Plast Surg Clin North Am 2022; 30:175-181. [DOI: 10.1016/j.fsc.2022.01.004]
27
Zubek J, Nagórska E, Komorowska-Mach J, Skowrońska K, Zieliński K, Rączaszek-Leonardi J. Dynamics of Remote Communication: Movement Coordination in Video-Mediated and Face-to-Face Conversations. Entropy (Basel) 2022; 24:559. [PMID: 35455222] [PMCID: PMC9031538] [DOI: 10.3390/e24040559]
Abstract
The present pandemic forced our daily interactions to move into the virtual world. People had to adapt to new communication media that afford different ways of interaction. Remote communication decreases the availability and salience of some cues but also may enable and highlight others. Importantly, basic movement dynamics, which are crucial for any interaction as they are responsible for the informational and affective coupling, are affected. It is therefore essential to discover exactly how these dynamics change. In this exploratory study of six interacting dyads we use traditional variability measures and cross recurrence quantification analysis to compare the movement coordination dynamics in quasi-natural dialogues in four situations: (1) remote video-mediated conversations with a self-view mirror image present, (2) remote video-mediated conversations without a self-view, (3) face-to-face conversations with a self-view, and (4) face-to-face conversations without a self-view. We discovered that in remote interactions movements pertaining to communicative gestures were exaggerated, while the stability of interpersonal coordination was greatly decreased. The presence of the self-view image made the gestures less exaggerated, but did not affect the coordination. The dynamical analyses are helpful in understanding the interaction processes and may be useful in explaining phenomena connected with video-mediated communication, such as “Zoom fatigue”.
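Cross recurrence quantification analysis, used above to assess coordination stability, starts from a cross-recurrence matrix marking when the two partners' movement series visit similar states. A minimal sketch of two common CRQA measures, under the simplifying assumptions of one-dimensional series and no phase-space embedding:

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Cross-recurrence matrix: 1 where |x[i] - y[j]| <= radius."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return (np.abs(x[:, None] - y[None, :]) <= radius).astype(int)

def recurrence_rate(cr):
    """Fraction of recurrent points: how often the two series
    occupy similar states, regardless of timing."""
    return float(cr.mean())

def longest_diagonal(cr):
    """Longest diagonal run of recurrent points, a rough index of
    how long stretches of coordinated movement persist."""
    n, m = cr.shape
    best = 0
    for k in range(-n + 1, m):        # every diagonal of the matrix
        run = 0
        for v in np.diagonal(cr, k):
            run = run + 1 if v else 0
            best = max(best, run)
    return best
```

Full pipelines (as in the crqa R package or PyRQA) add embedding-dimension, delay, and radius selection; longer and more frequent diagonal lines indicate more stable interpersonal coordination.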
Affiliation(s)
- Julian Zubek
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland
- Ewa Nagórska
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland
- Joanna Komorowska-Mach
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland
- Faculty of Philosophy, University of Warsaw, 00-927 Warsaw, Poland
- Katarzyna Skowrońska
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland
- Konrad Zieliński
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland
- Joanna Rączaszek-Leonardi
- Human Interactivity and Language Lab, Faculty of Psychology, University of Warsaw, 00-927 Warsaw, Poland

28
Gaze-cued shifts of attention and microsaccades are sustained for whole bodies but are transient for body parts. Psychon Bull Rev 2022; 29:1854-1878. [PMID: 35381913] [PMCID: PMC9568497] [DOI: 10.3758/s13423-022-02087-z]
Abstract
Gaze direction is an evolutionarily important mechanism in daily social interactions. It reflects a person’s internal cognitive state, spatial locus of interest, and predicts future actions. Studies have used static head images presented foveally and simple synthetic tasks to find that gaze orients attention and facilitates target detection at the cued location in a sustained manner. Little is known about how people’s natural gaze behavior, including eyes, head, and body movements, jointly orient covert attention, microsaccades, and facilitate performance in more ecological dynamic scenes. Participants completed a target person detection task with videos of real scenes. The videos showed people looking toward (valid cue) or away from a target (invalid cue) location. We digitally manipulated the individuals in the videos directing gaze to create three conditions: whole-intact (head and body movements), floating heads (only head movements), and headless bodies (only body movements). We assessed their impact on participants’ behavioral performance and microsaccades during the task. We show that, in isolation, an individual’s head or body orienting toward the target-person direction led to facilitation in detection that is transient in time (200 ms). In contrast, only the whole-intact condition led to sustained facilitation (500 ms). Furthermore, observers executed microsaccades more frequently towards the cued direction for valid trials, but this bias was sustained in time only with the joint presence of head and body parts. Together, the results differ from previous findings with foveally presented static heads. In more real-world scenarios and tasks, sustained attention requires the presence of the whole-intact body of the individuals dynamically directing their gaze.
29
Infrequent faces bias social attention differently in manual and oculomotor measures. Atten Percept Psychophys 2022; 84:829-842. [PMID: 35084707] [DOI: 10.3758/s13414-021-02432-9]
Abstract
Although attention is thought to be spontaneously biased by social cues like faces and eyes, recent data have demonstrated that when extraneous content, context, and task factors are controlled, attentional biasing is abolished in manual responses while still occurring sparingly in oculomotor measures. Here, we investigated how social attentional biasing was affected by face novelty by measuring responses to frequently presented (i.e., those with lower novelty) and infrequently presented (i.e., those with higher novelty) face identities. Using a dot-probe task, participants viewed either the same face and house identity that was frequently presented on half of the trials or sixteen different face and house identities that were infrequently presented on the other half of the trials. A response target occurred with equal probability at the previous location of the eyes or mouth of the face or the top or bottom of the house. Experiment 1 measured manual responses to the target while participants maintained central fixation. Experiment 2 additionally measured participants' natural oculomotor behaviour when their eye movements were not restricted. Across both experiments, no evidence of social attentional biasing was found in manual data. However, in Experiment 2, there was a reliable oculomotor bias towards the eyes of infrequently presented upright faces. Together, these findings suggest that face novelty does not facilitate manual measures of social attention, but it appears to promote spontaneous oculomotor biasing towards the eyes of infrequently presented novel faces.
30
Balconi M, Fronda G, Cassioli F, Crivelli D. Face-to-face vs. remote digital settings in job assessment interviews: A multilevel hyperscanning protocol for the investigation of interpersonal attunement. PLoS One 2022; 17:e0263668. [PMID: 35130314] [PMCID: PMC8820616] [DOI: 10.1371/journal.pone.0263668]
Abstract
The digitalization process for organizations, which was inevitably accelerated by the COVID-19 pandemic, raises relevant challenges for Human Resource Management (HRM) because every technological implementation has a certain impact on human beings. Among the many organizational HRM practices, recruitment and assessment interviews represent a significant moment where a social interaction provides the context for evaluating candidates' skills. It is therefore relevant to investigate how different interaction frames and relational conditions affect such a task, with a specific focus on the differences between face-to-face (FTF) and remote computer-mediated (RCM) interaction settings. In particular, the possibility of qualifying and quantifying the mechanisms shaping the efficiency of interaction in the recruiter-candidate dyad (i.e., interpersonal attunement) is potentially insightful. We here present a neuroscientific protocol aimed at elucidating the impact of FTF vs. RCM modalities on social dynamics within assessment interviews. Specifically, the hyperscanning approach, understood as the concurrent recording and integrated analysis of behavioural-physiological responses of interacting agents, will be used to evaluate recruiter-candidate dyads while they are involved in either FTF or RCM conditions. The protocol has been designed to collect self-report, oculometric, autonomic (electrodermal activity, heart rate, heart rate variability), and neurophysiological (electroencephalography) metrics from both inter-agents to explore the perceived quality of the interaction, automatic visual-attentional patterns of the inter-agents, as well as their cognitive workload and emotional engagement. The proposed protocol will provide a theoretical evidence-based framework to assess possible differences between FTF vs. RCM settings in complex social interactions, with a specific focus on job interviews.
Affiliation(s)
- Michela Balconi
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Milano, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Giulia Fronda
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Milano, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Federico Cassioli
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Milano, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Davide Crivelli
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Milano, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
|
31
|
Selective visual attention during public speaking in an immersive context. Atten Percept Psychophys 2022; 84:396-407. [PMID: 35064557 PMCID: PMC8993214 DOI: 10.3758/s13414-021-02430-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/21/2021] [Indexed: 02/03/2023]
Abstract
It has recently become feasible to study selective visual attention to social cues in increasingly ecologically valid ways. In this secondary analysis, we examined gaze behavior in response to the actions of others in a social context. Participants (N = 84) were asked to give a 5-minute speech to a five-member audience that had been filmed in 360° video, displayed in a virtual reality headset containing a built-in eye tracker. Audience members were coached to make movements that would indicate interest or lack of interest (e.g., nodding vs. looking away). The goal of this paper was to analyze whether these actions influenced the speaker's gaze. We found that participants showed reliable evidence of gaze towards audience member actions in general, and towards audience member actions involving their phone specifically (compared with other actions like looking away or leaning back). However, there were no differences in gaze towards actions reflecting interest (like nodding) compared with actions reflecting lack of interest (like looking away). Participants were more likely to look away from audience member actions as well, but there were no specific actions that elicited looking away more or less. Taken together, these findings suggest that the actions of audience members are broadly influential in motivating gaze behaviors in a realistic, contextually embedded (public speaking) setting. Further research is needed to examine the ways in which these findings can be elucidated in more controlled laboratory environments as well as in the real world.
|
32
|
Holleman GA, Hooge ITC, Huijding J, Deković M, Kemner C, Hessels RS. Gaze and speech behavior in parent–child interactions: The role of conflict and cooperation. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-02532-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
A primary mode of human social behavior is face-to-face interaction. In this study, we investigated the characteristics of gaze and its relation to speech behavior during video-mediated face-to-face interactions between parents and their preadolescent children. 81 parent–child dyads engaged in conversations about cooperative and conflictive family topics. We used a dual-eye tracking setup that is capable of concurrently recording eye movements, frontal video, and audio from two conversational partners. Our results show that children spoke more in the cooperation-scenario whereas parents spoke more in the conflict-scenario. Parents gazed slightly more at the eyes of their children in the conflict-scenario compared to the cooperation-scenario. Both parents and children looked more at the other's mouth region while listening compared to while speaking. Results are discussed in terms of the role that parents and children take during cooperative and conflictive interactions and how gaze behavior may support and coordinate such interactions.
|
33
|
Potthoff J, Schienle A. Effects of Self-Esteem on Self-Viewing: An Eye-Tracking Investigation on Mirror Gazing. Behav Sci (Basel) 2021; 11:164. [PMID: 34940099 PMCID: PMC8698327 DOI: 10.3390/bs11120164] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 11/24/2021] [Accepted: 11/26/2021] [Indexed: 12/14/2022] Open
Abstract
While some people enjoy looking at their faces in the mirror, others experience emotional distress. Despite these individual differences concerning self-viewing in the mirror, systematic investigations on this topic have not been conducted so far. The present eye-tracking study examined whether personality traits (self-esteem, narcissism propensity, self-disgust) are associated with gaze behavior (gaze duration, fixation count) during free mirror viewing of one's face. Sixty-eight adults (mean age = 23.5 years; 39 females, 29 males) viewed their faces in the mirror and watched a video of an unknown person matched for gender and age (control condition) for 90 s each. The computed regression analysis showed that higher self-esteem was associated with a shorter gaze duration for both self-face and other-face. This effect may reflect a less critical evaluation of the faces.
Affiliation(s)
- Jonas Potthoff
- Institute of Psychology, University of Graz, 8010 Graz, Austria
|
34
|
Tracking developmental differences in real-world social attention across adolescence, young adulthood and older adulthood. Nat Hum Behav 2021; 5:1381-1390. [PMID: 33986520 PMCID: PMC7611872 DOI: 10.1038/s41562-021-01113-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 04/12/2021] [Indexed: 02/03/2023]
Abstract
Detecting and responding appropriately to social information in one's environment is a vital part of everyday social interactions. Here, we report two preregistered experiments that examine how social attention develops across the lifespan, comparing adolescents (10-19 years old), young (20-40 years old) and older (60-80 years old) adults. In two real-world tasks, participants were immersed in different social interaction situations-a face-to-face conversation and navigating an environment-and their attention to social and non-social content was recorded using eye-tracking glasses. The results revealed that, compared with young adults, adolescents and older adults attended less to social information (that is, the face) during face-to-face conversation, and to people when navigating the real world. Thus, we provide evidence that real-world social attention undergoes age-related change, and these developmental differences might be a key mechanism that influences theory of mind among adolescents and older adults, with potential implications for predicting successful social interactions in daily life.
|
35
|
Han NX, Chakravarthula PN, Eckstein MP. Peripheral facial features guiding eye movements and reducing fixational variability. J Vis 2021; 21:7. [PMID: 34347018 PMCID: PMC8340657 DOI: 10.1167/jov.21.8.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Face processing is a fast and efficient process due to its evolutionary and social importance. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing a person's identity and gender. Yet, the exact properties or features of the face that guide the first eye movements and reduce fixational variability are unknown. Here, we manipulated the presence of the facial features and the spatial configuration of features to investigate their effect on the location and variability of first and second fixations to peripherally presented faces. Our results showed that observers can utilize the face outline, individual facial features, and feature spatial configuration to guide the first eye movements to their preferred point of fixation. The eyes have a preferential role in guiding the first eye movements and reducing fixation variability. Eliminating the eyes or altering their position had the greatest influence on the location and variability of fixations and resulted in the largest detriment to face identification performance. The other internal features (nose and mouth) also contribute to reducing fixation variability. A subsequent experiment measuring detection of single features showed that the eyes have the highest detectability (relative to other features) in the visual periphery providing a strong sensory signal to guide the oculomotor system. Together, the results suggest a flexible multiple-cue approach that might be a robust solution to cope with how the varying eccentricities in the real world influence the ability to resolve individual feature properties and the preferential role of the eyes.
Affiliation(s)
- Nicole X Han
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
- Puneeth N Chakravarthula
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
|
36
|
Dawson J, Foulsham T. Your turn to speak? Audiovisual social attention in the lab and in the wild. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1958038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Jessica Dawson
- Psychology Department, University of Essex, Colchester, UK
- Tom Foulsham
- Psychology Department, University of Essex, Colchester, UK
|
37
|
Harris CB, Van Bergen P, Harris SA, McIlwain N, Arguel A. Here's looking at you: eye gaze and collaborative recall. PSYCHOLOGICAL RESEARCH 2021; 86:769-779. [PMID: 34095971 DOI: 10.1007/s00426-021-01533-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 05/13/2021] [Indexed: 11/25/2022]
Abstract
In everyday life, we remember together often. Surprisingly, research reliably shows costs of collaboration. People remember less in groups than the same number of individuals remember separately. However, there is evidence that some groups are more successful than others, depending on factors such as group relationship and verbal communication strategies. To understand further the characteristics of more successful vs. less successful collaborative groups, we examined whether non-verbal eye gaze behaviour was associated with group outcomes. We used eye tracking glasses to measure how much collaborating dyads looked at each other during collaborative recall, and examined whether individual differences in eye- and face-directed gaze were associated with collaborative performance. Increased eye- and face-directed gaze was associated with higher collaborative recall performance, more explicit strategy use, more post-collaborative benefits, and increased memory overlap. However, it was also associated with pre-collaborative recall, indicating that gaze during collaboration may at least partially reflect pre-existing abilities. This research helps elucidate individual differences that underlie the outcomes of collaborative recall, and suggests that non-verbal communication differentiates more vs. less successful collaborative groups.
Affiliation(s)
- Celia B Harris
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia
- Sophia A Harris
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- Nina McIlwain
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- Amael Arguel
- Department of Cognitive Psychology and Ergonomics, University of Toulouse-Jean Jaures, Toulouse, France
|
38
|
Chakravarthula PN, Tsank Y, Eckstein MP. Eye movement strategies in face ethnicity categorization vs. face identification tasks. Vision Res 2021; 186:59-70. [PMID: 34052698 DOI: 10.1016/j.visres.2021.05.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 04/28/2021] [Accepted: 05/11/2021] [Indexed: 11/24/2022]
Abstract
A quick look at a face allows us to identify the person, their gender, and emotion. Humans direct their first eye movement towards points on the face that vary moderately across these common tasks and maximize performance. However, it is not known to what extent humans alter their oculomotor strategies to maximize accuracy in more specialized face categorization tasks. We studied the eye movements of Indian observers during a North vs. South Indian face categorization task and compared them to those in a person-identification task. We found that observers did not alter their first eye movement strategy for the ethnic categorization task, i.e., they directed their first fixations to a similar preferred point as in the person-identification task. To assess whether using a similar preferred point of fixation for both tasks resulted in a performance cost for the categorization task, we measured performance as a function of fixation position along the face. Fixating away from the preferred point of fixation reduced observer performance in the person identification task, but not in the ethnicity categorization task. We used computational modeling to assess whether the results could be explained by an interaction between the distribution of task information across the face and the foveated properties of the visual system. A foveated ideal observer analysis revealed spatially more distributed task information and a lower dependence of performance on the point of fixation for the ethnicity categorization task relative to person identification. We conclude that, unlike the person identification task, humans can access the information for the ethnicity categorization task from various points of fixation. Thus, the observer strategy of utilizing the typical person identification first eye movement for the ethnicity categorization task is a simple solution that incurs little or no performance cost.
Affiliation(s)
- Puneeth N Chakravarthula
- Department of Psychological and Brain Science, University of California, Santa Barbara, United States
- Yuliy Tsank
- Department of Psychological and Brain Science, University of California, Santa Barbara, United States
- Miguel P Eckstein
- Department of Psychological and Brain Science, University of California, Santa Barbara, United States
|
39
|
Haensel JX, Smith TJ, Senju A. Cultural differences in mutual gaze during face-to-face interactions: A dual head-mounted eye-tracking study. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1928354] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Affiliation(s)
- Jennifer X. Haensel
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Tim J. Smith
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Atsushi Senju
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
|
40
|
Avidan G, Behrmann M. Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia. Annu Rev Vis Sci 2021; 7:301-321. [PMID: 34014762 DOI: 10.1146/annurev-vision-113020-012740] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Congenital prosopagnosia (CP), a life-long impairment in face processing that occurs in the absence of any apparent brain damage, provides a unique model in which to explore the psychological and neural bases of normal face processing. The goal of this review is to offer a theoretical and conceptual framework that may account for the underlying cognitive and neural deficits in CP. This framework may also provide a novel perspective in which to reconcile some conflicting results that permits the expansion of the research in this field in new directions. The crux of this framework lies in linking the known behavioral and neural underpinnings of face processing and their impairments in CP to a model incorporating grid cell-like activity in the entorhinal cortex. Moreover, it stresses the involvement of active, spatial scanning of the environment with eye movements and implicates their critical role in face encoding and recognition. To begin with, we describe the main behavioral and neural characteristics of CP, and then lay down the building blocks of our proposed model, referring to the existing literature supporting this new framework. We then propose testable predictions and conclude with open questions for future research stemming from this model.
Affiliation(s)
- Galia Avidan
- Department of Psychology and Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
|
41
|
Acarturk C, Indurkya B, Nawrocki P, Sniezynski B, Jarosz M, Usal KA. Gaze aversion in conversational settings: An investigation based on mock job interview. J Eye Mov Res 2021; 14. [PMID: 34122746 PMCID: PMC8188832 DOI: 10.16910/jemr.14.1.1] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and outline some future research problems.
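The Discrete-Time Markov Chain modelling of gaze sequences described above can be illustrated in a few lines. A minimal sketch (the two gaze states and the sample sequence below are hypothetical, not the study's data):

```python
from collections import defaultdict

def transition_matrix(states):
    """Estimate a discrete-time Markov chain's transition probabilities
    from an observed sequence of categorical gaze states."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return {s: {t: n / sum(row.values()) for t, n in row.items()}
            for s, row in counts.items()}

# Hypothetical annotated gaze sequence: "contact" = gaze to the
# interlocutor's face, "averted" = gaze aversion.
seq = ["contact", "contact", "averted", "contact", "averted", "averted"]
P = transition_matrix(seq)
# P["contact"]["averted"] estimates the probability of switching from
# gaze contact to aversion on the next time step.
```

Comparing the rows of such a matrix between interviewer and interviewee is one way differences in gaze-contact frequency and duration can be made quantitative.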
Affiliation(s)
- Cengiz Acarturk
- Department of Cognitive Science, Middle East Technical University, Turkey
- Bipin Indurkya
- Department of Cognitive Science, Jagiellonian University, Poland
- Piotr Nawrocki
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Mateusz Jarosz
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Kerem Alp Usal
- Department of Cognitive Science, Middle East Technical University, Turkey
|
42
|
Niedźwiecka A. Eye contact effect: The role of vagal regulation and reactivity, and self-regulation of attention. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-01682-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Eye contact is a crucial aspect of social interactions that may enhance an individual's cognitive performance (i.e. the eye contact effect) or hinder it (i.e. the face-to-face interference effect). In this paper, I focus on the influence of eye contact on cognitive performance in tasks engaging executive functions. I present a hypothesis as to why some individuals benefit from eye contact while others do not. I propose that the relations between eye contact and executive functioning are modulated by an individual's autonomic regulation and reactivity and self-regulation of attention. In particular, I propose that individuals with more optimal autonomic regulation and reactivity, and more effective self-regulation of attention, benefit from eye contact. Individuals who are less well regulated and over- or under-reactive, and who do not employ effective strategies of self-regulation of attention, may not benefit from eye contact and may perform better when eye contact is absent. I present some studies that justify the proposed hypothesis and point to a method that could be employed to test it. This approach could help to better understand the complex mechanisms underlying individual differences in participants' cognitive performance during tasks engaging executive functions.
|
43
|
D'Amelio A, Boccignone G. Gazing at Social Interactions Between Foraging and Decision Theory. Front Neurorobot 2021; 15:639999. [PMID: 33859558 PMCID: PMC8042312 DOI: 10.3389/fnbot.2021.639999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 03/09/2021] [Indexed: 11/30/2022] Open
Abstract
Finding the underlying principles of social attention in humans seems to be essential for the design of the interaction between natural and artificial agents. Here, we focus on the computational modeling of gaze dynamics as exhibited by humans when perceiving socially relevant multimodal information. The audio-visual landscape of social interactions is distilled into a number of multimodal patches that convey different social value, and we work under the general frame of foraging as a tradeoff between local patch exploitation and landscape exploration. We show that the spatio-temporal dynamics of gaze shifts can be parsimoniously described by Langevin-type stochastic differential equations triggering a decision equation over time. In particular, value-based patch choice and handling is reduced to a simple multi-alternative perceptual decision making that relies on a race-to-threshold between independent continuous-time perceptual evidence integrators, each integrator being associated with a patch.
Affiliation(s)
- Alessandro D'Amelio
- PHuSe Lab, Department of Computer Science, Università degli Studi di Milano, Milan, Italy
- Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, Università degli Studi di Milano, Milan, Italy
|
44
|
Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest. Behav Res Methods 2021; 53:2037-2048. [PMID: 33742418 PMCID: PMC8516759 DOI: 10.3758/s13428-021-01544-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/14/2021] [Indexed: 01/14/2023]
Abstract
The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful to measure gaze behaviour during dynamic interactions. Eye-tracking data can be analysed by using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce the amount of manual effort, but also the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients in a Dutch Internal Medicine out-patient clinic. Three human annotators and a computer vision algorithm annotated mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest annotated by the annotators and the computer vision algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 min of eye-tracking videos manually and with the algorithm. Overall, the absolute normalized difference between the manual and the algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreements between human annotators and the algorithm with Cohen’s kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce comparable results to those of human annotators. Analyses by the algorithm are not subject to annotator fatigue or subjectivity and can therefore advance eye-tracking analyses.
|
45
|
Bosworth RG, Stone A. Rapid development of perceptual gaze control in hearing native signing Infants and children. Dev Sci 2021; 24:e13086. [PMID: 33484575 DOI: 10.1111/desc.13086] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 11/23/2020] [Accepted: 01/19/2021] [Indexed: 11/30/2022]
Abstract
Children's gaze behavior reflects emergent linguistic knowledge and real-time language processing of speech, but little is known about naturalistic gaze behaviors while watching signed narratives. Measuring gaze patterns in signing children could uncover how they master perceptual gaze control during a time of active language learning. Gaze patterns were recorded using a Tobii X120 eye tracker, in 31 non-signing and 30 signing hearing infants (5-14 months) and children (2-8 years) as they watched signed narratives on video. Intelligibility of the signed narratives was manipulated by presenting them naturally and in video-reversed ("low intelligibility") conditions. This video manipulation was used because it distorts semantic content, while preserving most surface phonological features. We examined where participants looked, using linear mixed models with Language Group (non-signing vs. signing) and Video Condition (Forward vs. Reversed), controlling for trial order. Non-signing infants and children showed a preference to look at the face as well as areas below the face, possibly because their gaze was drawn to the moving articulators in signing space. Native signing infants and children demonstrated resilient, face-focused gaze behavior. Moreover, their gaze behavior was unchanged for video-reversed signed narratives, similar to what was seen for adult native signers, possibly because they already have efficient highly focused gaze behavior. The present study demonstrates that human perceptual gaze control is sensitive to visual language experience over the first year of life and emerges early, by 6 months of age. Results have implications for the critical importance of early visual language exposure for deaf infants. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=2ahWUluFAAg.
Affiliation(s)
- Rain G Bosworth
- National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA
- Adam Stone
- Department of Psychology, University of California, San Diego, CA, USA
|
46
|
Dindar K, Loukusa S, Helminen TM, Mäkinen L, Siipo A, Laukka S, Rantanen A, Mattila ML, Hurtig T, Ebeling H. Social-Pragmatic Inferencing, Visual Social Attention and Physiological Reactivity to Complex Social Scenes in Autistic Young Adults. J Autism Dev Disord 2021; 52:73-88. [PMID: 33638804 PMCID: PMC8732855 DOI: 10.1007/s10803-021-04915-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/04/2021] [Indexed: 11/29/2022]
Abstract
This study examined social-pragmatic inferencing, visual social attention and physiological reactivity to complex social scenes. Participants were autistic young adults (n = 14) and a control group of young adults (n = 14) without intellectual disability. Results indicate between-group differences in social-pragmatic inferencing, moment-level social attention and heart rate variability (HRV) reactivity. A key finding suggests associations between increased moment-level social attention to facial emotion expressions, better social-pragmatic inferencing and greater HRV suppression in autistic young adults. Supporting previous research, better social-pragmatic inferencing was found associated with less autistic traits.
Collapse
Affiliation(s)
- Katja Dindar
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, PO Box 1000, 90014, Oulu, Finland
- Soile Loukusa
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, PO Box 1000, 90014, Oulu, Finland
- Terhi M Helminen
- Psychology, Faculty of Social Sciences, Tampere University, Tampere, Finland
- Leena Mäkinen
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, PO Box 1000, 90014, Oulu, Finland
- Antti Siipo
- Department of Educational Sciences and Teacher Education, Faculty of Education, University of Oulu, Oulu, Finland
- Seppo Laukka
- Learning Research Laboratory, Faculty of Education, University of Oulu, Oulu, Finland
- Antti Rantanen
- Learning Research Laboratory, Faculty of Education, University of Oulu, Oulu, Finland
- Marja-Leena Mattila
- PEDEGO Research Unit, Clinic of Child Psychiatry, Faculty of Medicine, Oulu University Hospital, University of Oulu, Oulu, Finland
- Tuula Hurtig
- PEDEGO Research Unit, Clinic of Child Psychiatry, Faculty of Medicine, Oulu University Hospital, University of Oulu, Oulu, Finland; Research Unit of Clinical Neuroscience, Psychiatry, Faculty of Medicine, University of Oulu, Oulu, Finland
- Hanna Ebeling
- PEDEGO Research Unit, Clinic of Child Psychiatry, Faculty of Medicine, Oulu University Hospital, University of Oulu, Oulu, Finland
47
Tsuchiya KJ, Hakoshima S, Hara T, Ninomiya M, Saito M, Fujioka T, Kosaka H, Hirano Y, Matsuo M, Kikuchi M, Maegaki Y, Harada T, Nishimura T, Katayama T. Diagnosing Autism Spectrum Disorder Without Expertise: A Pilot Study of 5- to 17-Year-Old Individuals Using Gazefinder. Front Neurol 2021; 11:603085. [PMID: 33584502 PMCID: PMC7876254 DOI: 10.3389/fneur.2020.603085] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2020] [Accepted: 12/30/2020] [Indexed: 11/13/2022] Open
Abstract
Atypical eye gaze is an established clinical sign in the diagnosis of autism spectrum disorder (ASD). We propose a computerized diagnostic algorithm for ASD, applicable to children and adolescents aged between 5 and 17 years, using Gazefinder, a system in which devices that capture eye-gaze patterns and stimulus movie clips are integrated into a personal computer with a monitor. We enrolled 222 individuals aged 5–17 years at seven research facilities in Japan. Among them, we extracted 39 individuals with ASD without any comorbid neurodevelopmental abnormalities (ASD group), 102 typically developing individuals (TD group), and an independent sample of 24 individuals (the second control group). All participants underwent psychoneurological and diagnostic assessments, including the Autism Diagnostic Observation Schedule, second edition, and an examination with Gazefinder (2 min). To enhance predictive validity, we propose a best-fit diagnostic algorithm built from computationally selected attributes extracted from Gazefinder data. Inputs were classified automatically into either the ASD or TD group based on the attribute values. We cross-validated the algorithm using the leave-one-out method in the ASD and TD groups and tested its predictability in the second control group. The best-fit algorithm showed an area under the curve (AUC) of 0.84, with sensitivity, specificity, and accuracy of 74%, 80%, and 78%, respectively. The AUC for the cross-validation was 0.74 and that for validation in the second control group was 0.91. We confirmed that the diagnostic performance of the best-fit algorithm is comparable to that of established diagnostic assessment tools for ASD.
Affiliation(s)
- Kenji J Tsuchiya
- Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Shuji Hakoshima
- Healthcare Business Division, Development Center, JVCKENWOOD Corporation, Yokohama, Japan
- Takeshi Hara
- Center for Healthcare Information Technology, Tokai National Higher Education and Research System, Gifu, Japan; Faculty of Engineering, Gifu University, Gifu, Japan
- Masaru Ninomiya
- Healthcare Business Division, Development Center, JVCKENWOOD Corporation, Yokohama, Japan
- Manabu Saito
- Department of Neuropsychiatry, Graduate School of Medicine, Hirosaki University, Hirosaki, Japan; Research Center for Child Mental Development, Graduate School of Medicine, Hirosaki University, Hirosaki, Japan
- Toru Fujioka
- Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Department of Science of Human Development, Faculty of Education, Humanities and Social Sciences, University of Fukui, Fukui, Japan; Research Center for Child Mental Development, University of Fukui, Fukui, Japan
- Hirotaka Kosaka
- Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Neuropsychiatry, Faculty of Medical Sciences, University of Fukui, Fukui, Japan
- Yoshiyuki Hirano
- Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Research Center for Child Mental Development, Chiba University, Chiba, Japan
- Muneaki Matsuo
- Department of Pediatrics, Faculty of Medicine, Saga University, Saga, Japan
- Mitsuru Kikuchi
- Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan; Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan; Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Taeko Harada
- Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Tomoko Nishimura
- Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu, Japan; Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
- Taiichi Katayama
- Department of Child Development, United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University, and University of Fukui, Suita, Japan
48
Abstract
There is a long history of interest in looking behavior during human interaction. With the advance of (wearable) video-based eye trackers, it has become possible to measure gaze during many different interactions. We outline the different types of eye-tracking setups that currently exist to investigate gaze during interaction. The setups differ mainly with regard to the nature of the eye-tracking signal (head- or world-centered) and the freedom of movement allowed for the participants. These features place constraints on the research questions that can be answered about human interaction. We end with a decision tree to help researchers judge the appropriateness of specific setups.
49
Vettori S, Van der Donck S, Nys J, Moors P, Van Wesemael T, Steyaert J, Rossion B, Dzhelyova M, Boets B. Combined frequency-tagging EEG and eye-tracking measures provide no support for the "excess mouth/diminished eye attention" hypothesis in autism. Mol Autism 2020; 11:94. [PMID: 33228763 PMCID: PMC7686749 DOI: 10.1186/s13229-020-00396-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 11/02/2020] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Scanning faces is important for social interactions. Difficulty with the social use of eye contact constitutes one of the clinical symptoms of autism spectrum disorder (ASD). It has been suggested that individuals with ASD look less at the eyes and more at the mouth than typically developing (TD) individuals, possibly due to gaze aversion or gaze indifference. However, eye-tracking evidence for this hypothesis is mixed. While gaze patterns convey information about overt orienting processes, it is unclear how this is manifested at the neural level and how relative covert attention to the eyes and mouth of faces might be affected in ASD. METHODS We used frequency-tagging EEG in combination with eye tracking while participants watched fast flickering faces in 1-min stimulation sequences. The upper and lower halves of the faces were presented at 6 Hz and 7.5 Hz, or vice versa, in different stimulation sequences, allowing us to objectively disentangle the neural saliency of the eye versus mouth region of a perceived face. We tested 21 boys with ASD (8-12 years old) and 21 TD control boys, matched for age and IQ. RESULTS Both groups looked longer at the eyes than the mouth, without any group difference in relative fixation duration to these features. TD boys looked significantly more at the nose, while the ASD boys looked more outside the face. EEG neural saliency data partly followed this pattern: neural responses to the upper or lower face half did not differ between groups, but in the TD group, neural responses to the lower face halves were larger than responses to the upper part. Face exploration dynamics showed that TD individuals mostly maintained fixations within the same facial region, whereas individuals with ASD switched more often between the face parts. LIMITATIONS Replication in large and independent samples may be needed to validate these exploratory results.
CONCLUSIONS Combined eye-tracking and frequency-tagged neural responses show no support for the excess mouth/diminished eye gaze hypothesis in ASD. The more exploratory face scanning style observed in ASD might be related to their increased feature-based face processing style.
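Frequency tagging separates the responses to the two face halves because each half's steady-state response can be read out at its own stimulation frequency in the EEG spectrum. As a rough illustration of that readout (not the study's analysis pipeline), the sketch below recovers two tagged amplitudes from a synthetic 2-second epoch with a single-bin DFT; the sampling rate, amplitudes, and noise level are invented.

```python
import math
import random

def tagged_amplitude(signal, fs, freq):
    """Amplitude of the component at `freq`, via projection onto a
    complex exponential (a single-bin DFT); exact when the epoch
    contains an integer number of cycles of `freq`."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

random.seed(1)
fs, dur = 512, 2.0  # 2-s epoch -> 0.5 Hz resolution, so 6 and 7.5 Hz fall on exact bins
t = [i / fs for i in range(int(fs * dur))]
# hypothetical epoch: one tag at 6 Hz (amplitude 1.0), the other at 7.5 Hz
# (amplitude 0.4), plus Gaussian noise
eeg = [1.0 * math.sin(2 * math.pi * 6.0 * x)
       + 0.4 * math.sin(2 * math.pi * 7.5 * x)
       + random.gauss(0.0, 0.5) for x in t]
amp_6hz = tagged_amplitude(eeg, fs, 6.0)
amp_7p5hz = tagged_amplitude(eeg, fs, 7.5)
```

Because 6 Hz and 7.5 Hz complete whole numbers of cycles in the 2-s window, the two projections are orthogonal and each amplitude is recovered independently of the other, which is what makes the paradigm's "objective disentangling" of the two face halves possible.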
Affiliation(s)
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Jannes Nys
- Department of Physics and Astronomy, Ghent University, Ghent, Belgium
- IDLab - Department of Computer Science, University of Antwerp - IMEC, Antwerp, Belgium
- Pieter Moors
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Tim Van Wesemael
- Department of Electrical Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Bruno Rossion
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- CNRS, CRAN - UMR 7039, Université de Lorraine, 54000, Nancy, France
- CHRU-Nancy, Service de Neurologie, Université de Lorraine, 54000, Nancy, France
- Milena Dzhelyova
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
50
Hessels RS, Benjamins JS, van Doorn AJ, Koenderink JJ, Holleman GA, Hooge ITC. Looking behavior and potential human interactions during locomotion. J Vis 2020; 20:5. [PMID: 33007079 PMCID: PMC7545070 DOI: 10.1167/jov.20.10.5] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
As humans move through parts of their environment, they meet others that may or may not try to interact with them. Where do people look when they meet others? We had participants wearing an eye tracker walk through a university building. On the way, they encountered nine “walkers.” Walkers were instructed to, for example, ignore the participant, greet him or her, or attempt to hand out a flyer. The participant's gaze was mostly directed at the currently relevant body parts of the walker; thus, participants' gaze depended on the walker's action. Individual differences in participants' looking behavior were consistent across walkers. Participants who did not respond to a walker seemed to look less at that walker, although this difference was not statistically significant. We suggest that models of gaze allocation should take social motivation into account.
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, and Social, Health and Organizational Psychology, Utrecht University, Utrecht, the Netherlands
- Andrea J van Doorn
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jan J Koenderink
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Gijs A Holleman
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands