1
Yamasaki D, Nagai M. Emotion-gaze interaction affects time-to-collision estimates, but not preferred interpersonal distance towards looming faces. Front Psychol 2024; 15:1414702. PMID: 39323584; PMCID: PMC11423545; DOI: 10.3389/fpsyg.2024.1414702.
Abstract
Estimating the time until impending collision (time-to-collision, TTC) of approaching or looming individuals and maintaining a comfortable distance from others (interpersonal distance, IPD) are commonly required in daily life and contribute to survival and social goals. Despite accumulating evidence that facial expressions and gaze direction interactively influence face processing, it remains unclear how these facial features affect the spatiotemporal processing of looming faces. We examined whether facial expressions (fearful vs. neutral) and gaze direction (direct vs. averted) interact on the judgments of TTC and IPD for looming faces, based on the shared signal hypothesis that fear signals the existence of threats in the environment when coupled with averted gaze. Experiment 1 demonstrated that TTC estimates were reduced for fearful faces compared to neutral ones only when the concomitant gaze was averted. In Experiment 2, the emotion-gaze interaction was not observed in the IPD regulation, which is arguably sensitive to affective responses to faces. The results suggest that fearful-averted faces modulate the cognitive extrapolation process of looming motion by communicating environmental threats rather than by altering subjective fear or perceived emotional intensity of faces. The TTC-specific effect may reflect an enhanced defensive response to unseen threats implied by looming fearful-averted faces. Our findings provide insight into how the visual system processes facial features to ensure bodily safety and comfortable interpersonal communication in dynamic environments.
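The TTC judgments studied here rest on standard first-order time-to-collision quantities, which can be estimated either from physical kinematics or from the optical expansion variable tau. A minimal sketch (all numeric values hypothetical, not taken from the study):

```python
import math

def ttc_tau(theta: float, theta_dot: float) -> float:
    """Estimate time-to-collision via the optical variable tau:
    tau = theta / (d(theta)/dt), where theta is the visual angle
    subtended by the approaching object (radians) and theta_dot
    its rate of expansion (radians/s)."""
    return theta / theta_dot

def ttc_kinematic(distance_m: float, speed_mps: float) -> float:
    """First-order TTC from physical distance and constant approach speed."""
    return distance_m / speed_mps

# A face 0.16 m wide approaching from 2.0 m at 0.5 m/s:
width, distance, speed = 0.16, 2.0, 0.5
theta = 2 * math.atan(width / (2 * distance))   # current visual angle
# For small angles theta ~ width/distance, so theta_dot ~ width*speed/distance**2
theta_dot = width * speed / distance**2
print(round(ttc_tau(theta, theta_dot), 2))      # tau-based estimate
print(ttc_kinematic(distance, speed))           # 4.0 s ground truth
```

The two estimates agree closely at small visual angles; observers' estimates typically deviate from this geometric value, which is what the emotion-gaze manipulation measures.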
Affiliation(s)
- Daiki Yamasaki
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Osaka, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Masayoshi Nagai
- College of Comprehensive Psychology, Ritsumeikan University, Osaka, Japan
2
Lasagna CA, Tso IF, Blain SD, Pleskac TJ. Cognitive Mechanisms of Aberrant Self-Referential Social Perception in Psychosis and Bipolar Disorder: Insights From Computational Modeling. Schizophr Bull 2024:sbae147. PMID: 39258381; DOI: 10.1093/schbul/sbae147.
Abstract
BACKGROUND AND HYPOTHESIS Individuals with schizophrenia (SZ) and bipolar disorder (BD) show disruptions in self-referential gaze perception, a social perceptual process related to symptoms and functioning. However, our current mechanistic understanding of these dysfunctions and relationships is imprecise. STUDY DESIGN The present study used mathematical modeling to uncover cognitive processes driving gaze perception abnormalities in SZ and BD, and how they relate to cognition, symptoms, and social functioning. We modeled the behavior of 28 SZ, 38 BD, and 34 controls (HC) in a self-referential gaze perception task using drift-diffusion models parameterized to index key cognitive components: drift rate (evidence accumulation efficiency), drift bias (perceptual bias), start point (expectation bias), threshold separation (response caution), and nondecision time (encoding/motor processes). STUDY RESULTS Results revealed that aberrant gaze perception in SZ and BD was driven by less efficient evidence accumulation, perceptual biases predisposing self-referential responses, and greater caution (SZ only). Across SZ and HC, poorer social functioning was related to greater expectation biases. Within SZ, perceptual and expectancy biases were associated with hallucination and delusion severity, respectively. CONCLUSIONS These findings indicate that diminished evidence accumulation and perceptual biases may underlie altered gaze perception in patients and that SZ may engage in compensatory cautiousness, sacrificing response speed to preserve accuracy. Moreover, biases at the belief and perceptual levels may relate to symptoms and functioning. Computational modeling can, therefore, be used to achieve a more nuanced, cognitive process-level understanding of the mechanisms of social cognitive difficulties, including gaze perception, in individuals with SZ and BD.
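The drift-diffusion parameters named in this abstract can be made concrete with a minimal single-trial simulation. This is a generic sketch of the model class, not the authors' fitted model; all parameter values below are hypothetical:

```python
import random

def simulate_ddm(drift, start, threshold, ndt, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at `start` (between 0 and `threshold`; threshold/2
    is unbiased), accumulates at mean rate `drift` with Gaussian noise,
    and a response is made when it crosses 0 or `threshold`. `ndt` is
    nondecision time (encoding/motor processes) added to the RT.
    """
    rng = rng or random.Random()
    x, t = start, 0.0
    while 0.0 < x < threshold:
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    choice = "self" if x >= threshold else "other"
    return choice, t + ndt

# A positive drift and a start point above the midline both push
# responses toward the upper ("self-referential") boundary.
rng = random.Random(1)
trials = [simulate_ddm(1.5, 0.55, 1.0, 0.3, rng=rng) for _ in range(200)]
print(sum(c == "self" for c, _ in trials) / 200)  # proportion of "self" responses
```

In the study's terms, a drift bias would shift `drift` itself toward one response, a start-point (expectation) bias shifts `start`, and greater response caution corresponds to a larger `threshold` separation.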
Affiliation(s)
- Carly A Lasagna
- Department of Psychology, University of Michigan, Ann Arbor, MI, USA
- Ivy F Tso
- Department of Psychiatry & Behavioral Health, The Ohio State University, Columbus, OH, USA
- Scott D Blain
- Department of Psychiatry & Behavioral Health, The Ohio State University, Columbus, OH, USA
- Timothy J Pleskac
- Department of Psychology, Indiana University-Bloomington, Bloomington, IN, USA
3
Simon IV J, Rich EL. Neural populations in macaque anterior cingulate cortex encode social image identities. Nat Commun 2024; 15:7500. PMID: 39209844; PMCID: PMC11362159; DOI: 10.1038/s41467-024-51825-5.
Abstract
The anterior cingulate cortex gyrus (ACCg) has been implicated in prosocial behaviors and reasoning about social cues. While this indicates that ACCg is involved in social behavior, it remains unclear whether ACCg neurons also encode social information during goal-directed actions without social consequences. To address this, we assessed how social information is processed by ACCg neurons in a reward localization task. Here we show that neurons in the ACCg of female rhesus monkeys differentiate the identities of conspecifics in task images, even when identity was task-irrelevant. This was in contrast to the prearcuate cortex (PAC), which has not been strongly linked to social behavior, where neurons differentiated identities in both social and nonsocial images. Many neurons in the ACCg also categorically distinguished social from nonsocial trials, but this encoding was only slightly more common in ACCg compared to the PAC. Together, our results suggest that ACCg neurons are uniquely sensitive to social information that differentiates individuals, which may underlie its role in complex social reasoning.
Affiliation(s)
- Joseph Simon IV
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Lipschultz Center for Cognitive Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Erin L Rich
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Lipschultz Center for Cognitive Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
4
Boer J, Boonstra N, Kronenberg L, Stekelenburg R, Sizoo B. Variations in the Appearance and Interpretation of Interpersonal Eye Contact in Social Categorizations and Psychiatric Populations Worldwide: A Scoping Review with a Critical Appraisal of the Literature. Int J Environ Res Public Health 2024; 21:1092. PMID: 39200701; PMCID: PMC11354482; DOI: 10.3390/ijerph21081092.
Abstract
BACKGROUND Eye contact is one of the most fundamental forms of interhuman communication. However, to date, there has been no comprehensive research comparing how eye contact is made and interpreted across populations worldwide. This study summarizes the existing literature on these modalities, stratified by social categorization and psychiatric disorder. METHOD A scoping review with critical appraisal of the literature according to the Joanna Briggs Institute (JBI) methodology. The databases AnthroSource, Medline, CINAHL, the Psychology and Behavioral Sciences Collection (EBSCO) and PsycINFO were searched. RESULTS A total of 7068 articles, including the grey literature and reference lists, were screened; 385 were included, 282 on social categorizations and 103 on psychiatric disorders. In total, 603 thematically clustered outcomes of variation were included. Methodological quality was generally moderate to good. CONCLUSIONS There is a great degree of variation in the presentation and interpretation of eye contact between and within populations, and it remains unclear why specific variations occur. No gold standard for how eye contact should be used or interpreted emerged from the studies. Further research into the reasons for differences in eye contact between and within populations is recommended.
Affiliation(s)
- Jos Boer
- Department of Neuroscience, UMC Utrecht, Universiteitsweg 100, 3584 CG Utrecht, The Netherlands
- Nynke Boonstra
- Department of Neuroscience, UMC Utrecht, Universiteitsweg 100, 3584 CG Utrecht, The Netherlands
- Linda Kronenberg
- Dimence Groep, Nico Bolkesteinlaan 1, 7416 SB Deventer, The Netherlands
- Ruben Stekelenburg
- Lectoraat Innovatie van Beweegzorg, University of Applied Sciences Utrecht, Padualaan 101, 3584 CH Utrecht, The Netherlands
- Bram Sizoo
- Department of Clinical Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WS Amsterdam, The Netherlands
5
Lasagna CA, Tso IF, Blain SD, Pleskac TJ. Cognitive Mechanisms of Aberrant Self-Referential Social Perception in Psychosis and Bipolar Disorder: Insights from Computational Modeling. medRxiv 2024:2024.03.30.24304780. PMID: 39072038; PMCID: PMC11275667; DOI: 10.1101/2024.03.30.24304780.
Abstract
Background and Hypothesis Individuals with schizophrenia (SZ) and bipolar disorder (BD) show disruptions in self-referential gaze perception, a social perceptual process related to symptoms and functioning. However, our current mechanistic understanding of these dysfunctions and relationships is imprecise. Study Design The present study used mathematical modeling to uncover cognitive processes driving gaze perception abnormalities in SZ and BD, and how they relate to cognition, symptoms, and social functioning. We modeled the behavior of 28 SZ, 38 BD, and 34 controls (HC) in a self-referential gaze perception task using drift-diffusion models (DDM) parameterized to index key cognitive components: drift rate (evidence accumulation efficiency), drift bias (perceptual bias), start point (expectation bias), threshold separation (response caution), and nondecision time (encoding/motor processes). Study Results Results revealed that aberrant gaze perception in SZ and BD was driven by less efficient evidence accumulation, perceptual biases predisposing self-referential responses, and greater caution (SZ only). Across SZ and HC, poorer social functioning was related to greater expectation biases. Within SZ, perceptual and expectancy biases were associated with hallucination and delusion severity, respectively. Conclusions These findings indicate that diminished evidence accumulation and perceptual biases may underlie altered gaze perception in patients and that SZ may engage in compensatory cautiousness, sacrificing response speed to preserve accuracy. Moreover, biases at the belief and perceptual levels may relate to symptoms and functioning. Computational modeling can, therefore, be used to achieve a more nuanced, cognitive process-level understanding of the mechanisms of social cognitive difficulties, including gaze perception, in individuals with SZ and BD.
6
Lavit Nicora M, Prajod P, Mondellini M, Tauro G, Vertechy R, André E, Malosio M. Gaze detection as a social cue to initiate natural human-robot collaboration in an assembly task. Front Robot AI 2024; 11:1394379. PMID: 39086514; PMCID: PMC11288793; DOI: 10.3389/frobt.2024.1394379.
Abstract
Introduction: In this work, we explore a potential approach to improving the human-robot collaboration experience by adapting cobot behavior based on natural cues from the operator. Methods: Inspired by the literature on human-human interactions, we conducted a Wizard-of-Oz study to examine whether a gaze towards the cobot can serve as a trigger for initiating joint activities in collaborative sessions. In this study, 37 participants engaged in an assembly task while their gaze behavior was analyzed. We employed a gaze-based attention recognition model to identify when the participants looked at the cobot. Results: Our results indicate that in most cases (83.74%), the joint activity is preceded by a gaze towards the cobot. Furthermore, during the entire assembly cycle, the participants tended to look at the cobot mostly around the time of the joint activity. Given these results, a fully integrated system triggering joint action only when the gaze is directed towards the cobot was piloted with 10 volunteers, one of whom was characterized by high-functioning Autism Spectrum Disorder. Even though they had never interacted with the robot and did not know about the gaze-based triggering system, most of them successfully collaborated with the cobot and reported a smooth and natural interaction experience. Discussion: To the best of our knowledge, this is the first study to analyze the natural gaze behavior of participants working on a joint activity with a robot during a collaborative assembly task and to attempt the full integration of an automated gaze-based triggering system.
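A gaze-based triggering system of the kind this abstract describes can be sketched as a simple dwell/debounce rule over per-frame gaze classifications. This is an illustrative assumption on my part; the paper's actual attention recognition model and trigger logic are not specified here:

```python
from collections import deque

def make_gaze_trigger(window=10, min_hits=8):
    """Return a callable that fires once the operator has looked at the
    cobot in at least `min_hits` of the last `window` frames -- a simple
    debounce so single-frame classifier flickers don't start the robot."""
    history = deque(maxlen=window)

    def update(looking_at_cobot: bool) -> bool:
        history.append(looking_at_cobot)
        return len(history) == window and sum(history) >= min_hits
    return update

trigger = make_gaze_trigger()
# Five frames looking away, then sustained gaze at the cobot:
frames = [False] * 5 + [True] * 12
fired_at = next(i for i, f in enumerate(frames) if trigger(f))
print(fired_at)  # → 12
```

The window/threshold values are hypothetical; in a real integration they would be tuned to the gaze model's frame rate and false-positive rate.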
Affiliation(s)
- Matteo Lavit Nicora
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Lecco, Italy
- Industrial Engineering Department, University of Bologna, Bologna, Italy
- Pooja Prajod
- Human-Centered Artificial Intelligence, University of Augsburg, Augsburg, Germany
- Marta Mondellini
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Lecco, Italy
- Psychology Department, Catholic University of the Sacred Heart, Milan, Italy
- Giovanni Tauro
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Lecco, Italy
- Industrial Engineering Department, University of Bologna, Bologna, Italy
- Rocco Vertechy
- Industrial Engineering Department, University of Bologna, Bologna, Italy
- Elisabeth André
- Human-Centered Artificial Intelligence, University of Augsburg, Augsburg, Germany
- Matteo Malosio
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Lecco, Italy
7
Prinsen J, Alaerts K. In the eye of the beholder: Social traits predict motor simulation during naturalistic action perception. Neuropsychologia 2024; 199:108889. PMID: 38670526; DOI: 10.1016/j.neuropsychologia.2024.108889.
Abstract
Previous research has robustly demonstrated that eye contact between actor and observer promotes the simulation of perceived actions into the observer's own motor system, which in turn facilitates social perception and communication. The socially relevant connotation embedded in eye contact may, however, differ for individuals with differing social traits. Here, we examined how "normal" (i.e., non-clinical) variability in self-reported social responsiveness/autistic traits, social anxiety and interpersonal relationship style (secure, avoidant or anxious attachment) influences neural motor simulation during action observation under different gaze conditions. To do so, we analyzed an existing dataset involving 124 adult participants (age range: 18-35 years) who underwent transcranial magnetic stimulation (TMS) while observing an actor performing simple hand actions and simultaneously engaging in eye contact or gazing away from the observer. Motor evoked potential (MEP) amplitudes were adopted as an index of motor resonance. Regression-based analyses highlighted the role of social responsiveness and secure attachment in shaping motor resonance, indicating that socially responsive motor resonance during dyadic gaze (i.e., MEP(direct) > MEP(averted)) was only observed in participants displaying high levels of these traits. Furthermore, a clustering analysis identified two to three distinct subgroups of participants with unique social trait profiles, showing a clear differentiation in motor resonance patterns upon different gaze cues that accords with a recent neurobiological framework of attachment. Together, the results demonstrate that motor resonance within a given social interaction may serve as a sensitive tracker of socio-interactive engagement, capable of capturing subclinical inter-individual variation in relevant social traits.
Affiliation(s)
- Jellina Prinsen
- Neurorehabilitation Research Group, Department of Rehabilitation Sciences, KU Leuven, Leuven, Belgium
- Kaat Alaerts
- Neurorehabilitation Research Group, Department of Rehabilitation Sciences, KU Leuven, Leuven, Belgium
8
Charbonneau M, Curioni A, McEllin L, Strachan JWA. Flexible Cultural Learning Through Action Coordination. Perspect Psychol Sci 2024; 19:201-222. PMID: 37458767; DOI: 10.1177/17456916231182923.
Abstract
The cultural transmission of technical know-how has proven vital to the success of our species. The broad diversity of learning contexts and social configurations, as well as the various kinds of coordinated interactions they involve, speaks to our capacity to flexibly adapt to and succeed in transmitting vital knowledge in various learning contexts. Although often recognized by ethnographers, the flexibility of cultural learning has so far received little attention in terms of cognitive mechanisms. We argue that a key feature of the flexibility of cultural learning is that both the models and learners recruit cognitive mechanisms of action coordination to modulate their behavior contingently on the behavior of their partner, generating a process of mutual adaptation supporting the successful transmission of technical skills in diverse and fluctuating learning environments. We propose that the study of cultural learning would benefit from the experimental methods, results, and insights of joint-action research and, complementarily, that the field of joint-action research could expand its scope by integrating a learning and cultural dimension. Bringing these two fields of research together promises to enrich our understanding of cultural learning, its contextual flexibility, and joint action coordination.
Affiliation(s)
- Mathieu Charbonneau
- Africa Institute for Research in Economics and Social Sciences, Université Mohammed VI Polytechnique
- Luke McEllin
- Department of Cognitive Science, Central European University
9
Chen T, Helminen TM, Linnunsalo S, Hietanen JK. Autonomic and facial electromyographic responses to watching eyes. Iperception 2024; 15:20416695231226059. PMID: 38268784; PMCID: PMC10807318; DOI: 10.1177/20416695231226059.
Abstract
We measured participants' psychophysiological responses and gaze behavior while they viewed a stimulus person's direct and averted gaze in three conditions manipulating the participants' experience of being watched. The results showed that skin conductance responses and heart rate deceleration responses were greater to direct than averted gaze only in the condition in which participants had the experience of being watched by the other individual. In contrast, gaze direction had no effect on these responses when participants were led to believe that the other individual could not watch them or when the stimulus person was presented in a pre-recorded video. Importantly, the eye-tracking measures showed no differences in participants' looking behavior between these stimulus presentation conditions. The facial electromyography results suggested that direct gaze elicited greater zygomatic and periocular responses than averted gaze, independent of the presentation condition. We conclude that the autonomic responses to eye contact indexing affective arousal and attention orienting are driven by the experience of being watched. In contrast, the facial responses seem to reflect automatized affiliative responses that can be elicited even in conditions in which seeing another's direct gaze does not signal that the self is being watched.
Affiliation(s)
- Tingji Chen
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Terhi M Helminen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
10
Babinet MN, Demily C, Gobin E, Laurent C, Maillet T, Michael GA. The Time Course of Information Processing During Eye Direction Perception. Exp Psychol 2023; 70:324-335. PMID: 38602119; DOI: 10.1027/1618-3169/a000606.
Abstract
Gaze directed at the observer (direct gaze) is a highly salient social signal. Despite the existence of a preferential orientation toward direct gaze, none of the studies carried out so far seems to have explicitly examined the time course of information processing during gaze direction judgment. In an eye direction judgment task, participants were presented with a sketch of a face, with a temporal asynchrony introduced between the presentation of the eyes and that of the rest of the face: the face could be presented before the eyes, the eyes before the face, or the two simultaneously. Face direction was also manipulated. The results suggest that the time course of information processing during eye direction judgment follows a continuum in which the eyes are perceived first and the facial context is then used to judge the direction of gaze. The effect of congruency between the direction of gaze and that of the face confirms this observation. Although these results are discussed in the light of existing theories about the mechanisms underlying gaze processing, our data provide new information suggesting that, despite their power to capture attention, the eyes probably have to stand out from a more general spatial configuration (i.e., the face) in order for their direction to be adequately processed.
Affiliation(s)
- Marie-Noëlle Babinet
- Centre de Référence Maladies Rares Troubles du Comportement d'Origine Génétique (GénoPsy Lyon), Centre d'excellence Autisme iMIND, Le Vinatier Etablissement Lyonnais référent en psychiatrie et santé mentale, UMR 5229, CNRS & Université Lyon 1, Lyon, France
- Département de Sciences Cognitives, Psychologie Cognitive & Neuropsychologie, Institut de Psychologie, Unité de Recherche Étude des Mécanismes Cognitifs (EA 3082), Université Lumière Lyon 2, Lyon, France
- Caroline Demily
- Centre de Référence Maladies Rares Troubles du Comportement d'Origine Génétique (GénoPsy Lyon), Centre d'excellence Autisme iMIND, Le Vinatier Etablissement Lyonnais référent en psychiatrie et santé mentale, UMR 5229, CNRS & Université Lyon 1, Lyon, France
- Eloïse Gobin
- Département de Sciences Cognitives, Psychologie Cognitive & Neuropsychologie, Institut de Psychologie, Unité de Recherche Étude des Mécanismes Cognitifs (EA 3082), Université Lumière Lyon 2, Lyon, France
- Clémence Laurent
- Département de Sciences Cognitives, Psychologie Cognitive & Neuropsychologie, Institut de Psychologie, Unité de Recherche Étude des Mécanismes Cognitifs (EA 3082), Université Lumière Lyon 2, Lyon, France
- Thomas Maillet
- Département de Sciences Cognitives, Psychologie Cognitive & Neuropsychologie, Institut de Psychologie, Unité de Recherche Étude des Mécanismes Cognitifs (EA 3082), Université Lumière Lyon 2, Lyon, France
- George A Michael
- Département de Sciences Cognitives, Psychologie Cognitive & Neuropsychologie, Institut de Psychologie, Unité de Recherche Étude des Mécanismes Cognitifs (EA 3082), Université Lumière Lyon 2, Lyon, France
11
Kauttonen J, Paekivi S, Kauramäki J, Tikka P. Unraveling dyadic psycho-physiology of social presence between strangers during an audio drama - a signal-analysis approach. Front Psychol 2023; 14:1153968. PMID: 37928563; PMCID: PMC10622809; DOI: 10.3389/fpsyg.2023.1153968.
Abstract
A mere co-presence of an unfamiliar person may modulate an individual's attentive engagement with specific events or situations to a significant degree. To better understand how such social presence affects experiences, we recorded parallel multimodal facial and psychophysiological data from subjects (N = 36) who listened to dramatic audio scenes either alone or facing an unfamiliar person. The stimuli, a selection of 6-s affective sound clips (IADS-2) followed by a 27-min soundtrack extracted from a Finnish episode film, depicted intense social situations familiar from the everyday world. Considering the systemic complexity of both the chosen naturalistic stimuli and the expected variations in the experimental social situation, we applied a novel combination of signal analysis methods: inter-subject correlation (ISC) analysis, Representational Similarity Analysis (RSA) and Recurrence Quantification Analysis (RQA), followed by gradient boosting classification. We report our findings concerning three facial signals, gaze, eyebrow and smile, that can be linked to socially motivated facial movements. We found that the ISC values of pairs, whether calculated on true pairs or on any two individuals who had a partner, were lower than those of the group of single individuals. Thus, the audio stimuli induced more unique responses in subjects who listened in the presence of another person, while individual listeners tended to yield a more uniform response driven by the dramatized audio stimulus alone. Furthermore, our classifier models, trained on recurrence properties of the gaze, eyebrow and smile signals, demonstrated distinctive differences in the recurrence dynamics of signals from paired subjects and revealed the impact of individual differences on the latter. We showed that the presence of an unfamiliar co-listener, which modifies the social dynamics of dyadic listening tasks, can be detected reliably from visible facial modalities. By applying our analysis framework to a broader range of psycho-physiological data, together with annotations of the content and subjective reports of participants, we expect more detailed dyadic dependencies to be revealed. Our work contributes towards modeling and predicting human social behaviors in specific types of audio-visually mediated, virtual, and live social situations.
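Inter-subject correlation (ISC), the first method named above, is at its core the mean pairwise correlation of subjects' signal time courses. A minimal sketch with toy data (not the study's data or pipeline):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def isc(signals):
    """Mean pairwise Pearson correlation across subjects' time series.
    High ISC: responses driven uniformly by the shared stimulus;
    low ISC: more idiosyncratic (e.g., partner-influenced) responses."""
    pairs = [(i, j) for i in range(len(signals)) for j in range(i + 1, len(signals))]
    return sum(pearson(signals[i], signals[j]) for i, j in pairs) / len(pairs)

# Three toy "gaze" time series: two stimulus-locked, one idiosyncratic
s1 = [0, 1, 2, 3, 2, 1, 0, 1]
s2 = [0, 1, 2, 2, 2, 1, 0, 1]
s3 = [3, 0, 1, 0, 3, 0, 2, 0]
print(round(isc([s1, s2, s3]), 3))
```

The study's finding that paired listeners showed lower ISC than single listeners corresponds, in this sketch, to co-presence pushing each subject's series toward the idiosyncratic pattern.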
Affiliation(s)
- Janne Kauttonen
- Competences, RDI and Digitalization, Haaga-Helia University of Applied Sciences, Helsinki, Finland
- School of Arts, Design and Architecture, Aalto University, Espoo, Finland
- Aalto NeuroImaging, Aalto University, Espoo, Finland
- Sander Paekivi
- Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
- Jaakko Kauramäki
- School of Arts, Design and Architecture, Aalto University, Espoo, Finland
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Pia Tikka
- School of Arts, Design and Architecture, Aalto University, Espoo, Finland
- Enactive Virtuality Lab, Baltic Film, Media and Arts School (BFM), Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University, Tallinn, Estonia
12
Abubshait A, Kompatsiari K, Cardellicchio P, Vescovo E, De Tommaso D, Fadiga L, D'Ausilio A, Wykowska A. Modulatory Effects of Communicative Gaze on Attentional Orienting Are Driven by Dorsomedial Prefrontal Cortex but Not Right Temporoparietal Junction. J Cogn Neurosci 2023; 35:1670-1680. PMID: 37432740; DOI: 10.1162/jocn_a_02032.
Abstract
Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the purely social component that modulates attentional orienting in response to communicative gaze from other processes that might combine attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot that engaged either in mutual or in averted gaze before shifting its gaze. Before the task, participants received either sham stimulation (baseline), stimulation of the right TPJ (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). As expected, communicative gaze affected attentional orienting in the baseline condition. This effect was not evident after rTPJ stimulation; interestingly, rTPJ stimulation also canceled out attentional orienting altogether. dmPFC stimulation, on the other hand, eliminated the socially driven difference in attentional orienting between the two gaze conditions while maintaining the basic general attentional orienting effect. Thus, our results allowed the pure social effect of communicative gaze on attentional orienting to be separated from other processes that combine social and generic attentional components.
Affiliation(s)
- Enrico Vescovo: Istituto Italiano di Tecnologia, Ferrara, Italy; Università di Ferrara, Italy
- Luciano Fadiga: Istituto Italiano di Tecnologia, Ferrara, Italy; Università di Ferrara, Italy
13
Lombardi M, Roselli C, Kompatsiari K, Rospo F, Natale L, Wykowska A. The impact of facial expression and communicative gaze of a humanoid robot on individual Sense of Agency. Sci Rep 2023; 13:10113. [PMID: 37344497] [PMCID: PMC10284854] [DOI: 10.1038/s41598-023-36864-0]
Abstract
Sense of Agency (SoA) is the feeling of control over one's actions and their outcomes. A well-established implicit measure of SoA is the temporal interval estimation paradigm, in which participants estimate the time interval between a voluntary action and its sensory consequence. In the present study, we aimed to investigate whether the valence of action outcome modulated implicit SoA. The valence was manipulated through interaction partner's (i) positive/negative facial expression, or (ii) type of gaze (gaze contact or averted gaze). The interaction partner was the humanoid robot iCub. In Experiment 1, participants estimated the time interval between the onset of their action (head movement towards the robot), and the robot's facial expression (happy vs. sad face). Experiment 2 was identical, but the outcome of participants' action was the type of robot's gaze (gaze contact vs. averted). In Experiment 3, we assessed-in a within-subject design-the combined effect of robot's type of facial expression and type of gaze. Results showed that, while the robot's facial expression did not affect participants' SoA (Experiment 1), the type of gaze affected SoA in both Experiment 2 and Experiment 3. Overall, our findings showed that the robot's gaze is a more potent factor than facial expression in modulating participants' implicit SoA.
Affiliation(s)
- Maria Lombardi: Italian Institute of Technology, Via Morego 30, 16163, Genoa, Italy
- Cecilia Roselli: Italian Institute of Technology, Via Morego 30, 16163, Genoa, Italy
- Federico Rospo: Italian Institute of Technology, Via Morego 30, 16163, Genoa, Italy
- Lorenzo Natale: Italian Institute of Technology, Via Morego 30, 16163, Genoa, Italy
- Agnieszka Wykowska: Italian Institute of Technology, Via Morego 30, 16163, Genoa, Italy; Italian Institute of Technology, Via Enrico Melen 83, 16152, Genoa, Italy
14
Wang Y, Zhang M, Wu J, Zhang H, Yang H, Guo S, Lin Z, Lu C. Effects of the Interactive Features of Virtual Partner on Individual Exercise Level and Exercise Perception. Behav Sci (Basel) 2023; 13:bs13050434. [PMID: 37232671] [DOI: 10.3390/bs13050434]
Abstract
BACKGROUND: We designed an exercise system in which the user is accompanied by a virtual partner (VP) and tested bodyweight squat performance with different interactive VP features, to explore the comprehensive impact of these features on the individual's exercise level (EL) and exercise perception.
METHODS: The experiment used three interactive features of the VP as independent variables: body movement (BM), eye gaze (EG), and sports performance (SP). Observational indicators were EL, subjective exercise enjoyment, attitude toward the team formed with the VP, and the exerciser's degree of local muscle fatigue. We designed a 2 (with or without VP's BM) × 2 (with or without VP's EG) × 2 (with or without VP's SP) within-participants factorial experiment. A total of 40 college students were invited to complete 320 groups of experiments.
RESULTS: (1) Regarding EL, the main effects of BM and SP were significant (p < 0.001), and the pairwise interaction effects of the three independent variables on EL were all significant (p < 0.05). (2) Regarding exercise perception, the main effects of BM (p < 0.001) and EG (p < 0.001) on subjective exercise enjoyment were significant. The main effect of BM on the attitude toward the sports team formed with the VP was significant (p < 0.001), as was the interaction effect of BM and SP (p < 0.001). (3) Regarding the degree of local muscle fatigue, neither the main effects of BM, EG, and SP nor their interaction effects were significant (p > 0.05).
CONCLUSION: BM and EG from the VP elevated EL and exercise perception during squat exercises, whereas a VP with SP inhibited EL and harmed exercise perception. These conclusions can guide the interactive design of VP-accompanied exercise systems.
Affiliation(s)
- Yinghao Wang: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Mengsi Zhang: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Jianfeng Wu: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Haonan Zhang: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Hongchun Yang: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
- Songyang Guo: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Zishuo Lin: School of Design and Architecture, Zhejiang University of Technology, Hangzhou 310023, China
- Chunfu Lu: Industrial Design and Research Institute, Zhejiang University of Technology, Hangzhou 310023, China
15
When Attentional and Politeness Demands Clash: The Case of Mutual Gaze Avoidance and Chin Pointing in Quiahije Chatino. J Nonverbal Behav 2023. [DOI: 10.1007/s10919-022-00423-4]
Abstract
Pointing with the chin is a practice attested worldwide: it is an effective and highly recognizable device for re-orienting the attention of the addressee. For the chin point to be observed, the addressee must attend carefully to the movements of the sender's head. This demand comes into conflict with the politeness norms of many cultures, since these often require conversationalists to avoid meeting the gaze of their interlocutor, and can require them to look away from their interlocutor's face and head. In this paper we explore how the chin point is successfully used in just such a culture, among the Chatino indigenous group of Oaxaca, Mexico. We analyze interactions between multiple dyads of Chatino speakers, examining how senders invite visual attention to the pointing gesture, and how addressees signal that attention, while both participants avoid stretches of mutual gaze. We find that in the Chatino context, the senior (or higher-status) party to the conversation is highly consistent in training their gaze away from their interlocutor. This allows their interlocutor to give visual attention to their face without the risk of meeting the gaze of a higher-status sender, and facilitates close attention to head movements, including the chin point. Abstracts in Spanish and Quiahije Chatino are published as appendices.
16
Explicit vs. implicit spatial processing in arrow vs. eye-gaze spatial congruency effects. Psychol Res 2023; 87:242-259. [PMID: 35192045] [PMCID: PMC9873763] [DOI: 10.1007/s00426-022-01659-x]
Abstract
Arrows and gaze stimuli lead to opposite spatial congruency effects. While standard congruency effects are observed for arrows (faster responses for congruent conditions), responses are faster when eye-gaze stimuli are presented on the opposite side of the gazed-at location (incongruent trials), leading to a reversed congruency effect (RCE). Here, we explored the effects of implicit vs. explicit processing of arrows and eye-gaze direction. Participants were required to identify the direction (explicit task) or the colour (implicit task) of left or right looking/pointing gaze or arrows, presented to either the left or right of the fixation point. When participants responded to the direction of stimuli, standard congruency effects for arrows and RCE for eye-gaze stimuli were observed. However, when participants responded to the colour of stimuli, no congruency effects were observed. These results suggest that it is necessary to explicitly pay attention to the direction of eye-gaze and arrows for the congruency effect to occur. The same pattern of data was observed when participants responded either manually or verbally, demonstrating that manual motor components are not responsible for the results observed. These findings are not consistent with some hypotheses previously proposed to explain the RCE observed with eye-gaze stimuli and, therefore, call for an alternative plausible hypothesis.
17
Hadley LV, Culling JF. Timing of head turns to upcoming talkers in triadic conversation: Evidence for prediction of turn ends and interruptions. Front Psychol 2022; 13:1061582. [PMID: 36605274] [PMCID: PMC9807761] [DOI: 10.3389/fpsyg.2022.1061582]
Abstract
In conversation, people are able to listen to an utterance and respond within only a few hundred milliseconds. It takes substantially longer to prepare even a simple utterance, suggesting that interlocutors may make use of predictions about when the talker is about to end. But it is not only the upcoming talker that needs to anticipate the prior talker's ending; listeners who are simply following the conversation could also benefit from predicting the turn end in order to shift attention appropriately with the turn switch. In this paper, we examined whether people predict upcoming turn ends when watching conversational turns switch between others, by analysing natural conversations. These conversations were between triads of older adults in different levels and types of noise. The analysis focused on the observer during turn switches between the other two parties, using head orientation (i.e., saccades from one talker to the next) to identify when their focus moved from one talker to the next. For non-overlapping utterances, observers started to turn to the upcoming talker before the prior talker had finished speaking in 17% of turn switches (rising to 26% when accounting for motor-planning time). For overlapping utterances, observers started to turn towards the interrupter before they interrupted in 18% of turn switches (rising to 33% when accounting for motor-planning time). The timing of head turns was more precise at lower than at higher noise levels, and was not affected by noise type. These findings demonstrate that listeners in natural group conversation situations often exhibit head movements that anticipate the end of one conversational turn and the beginning of another. Furthermore, this work demonstrates the value of analysing head movement as a cue to social attention, which could be relevant for advancing communication technology such as hearing devices.
Affiliation(s)
- Lauren V. Hadley: Hearing Sciences – Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- John F. Culling: School of Psychology, Cardiff University, Cardiff, United Kingdom
18
Liu J, Yang J, Huang L, Zhou L, Xie J, Hu Z. Masked face is looking at me: Face mask increases the feeling of being looked at during the COVID-19 pandemic. Front Neurosci 2022; 16:1056793. [PMID: 36507359] [PMCID: PMC9730803] [DOI: 10.3389/fnins.2022.1056793]
Abstract
Background: As the COVID-19 global pandemic unfolded, governments recommended wearing face masks as a protective measure. Recent studies have found that a face mask influences perception, but how it affects social perception, especially the judgment of being looked at, is still unknown. This study investigated how wearing a mask influences the judgment of gaze direction, using a cone of direct gaze (CoDG) task.
Methods: In Experiment 1, three types of masked faces were used to investigate whether the effect of masks on the CoDG is modulated by mask type. Experiment 2 further validated the results of Experiment 1 by adding a learning phase to help participants better distinguish N95 and surgical masks. Furthermore, to investigate whether the effect of masks derives from their social significance, a face with only the eye region (a mouth-cut face) was used as the stimulus in Experiment 3.
Results: Experiment 1 found that wearing masks widens the CoDG, irrespective of mask type. Experiment 2 replicated this result. Experiment 3 found that the CoDG for N95-masked faces was wider than for mouth-cut and non-masked faces, while no significant difference existed between the CoDG for mouth-cut and non-masked faces, illustrating that the influence of wearing masks on the CoDG was due to high-level social significance rather than low-level facial feature information.
Conclusion: The results show that face masks increase the feeling of being looked at during the COVID-19 pandemic. The present findings are significant for understanding the impact of wearing masks on human social cognition in the context of COVID-19.
Affiliation(s)
- Jiakun Liu: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Jiajia Yang: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Lihui Huang: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Li Zhou: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Jinxi Xie: Jinhua Middle School, Suining, China
- Zhonghua Hu: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
19
Social, affective, and non-motoric bodily cues to the Sense of Agency: A systematic review of the experience of control. Neurosci Biobehav Rev 2022; 142:104900. [DOI: 10.1016/j.neubiorev.2022.104900]
20
Face inversion does not affect the reversed congruency effect of gaze. Psychon Bull Rev 2022. [DOI: 10.3758/s13423-022-02208-8]
21
Bylianto LO, Chan KQ. Face masks inhibit facial cues for approachability and trustworthiness: an eyetracking study. Curr Psychol 2022; 42:1-12. [PMID: 36217421] [PMCID: PMC9535231] [DOI: 10.1007/s12144-022-03705-8]
Abstract
Wearing face masks during the COVID-19 pandemic has undeniable benefits from a health perspective. However, the interpersonal costs for social interactions may have been underappreciated. Because masks obscure critical facial regions signaling approach/avoidance intent and social trust, facial inference of approachability and trustworthiness may be severely discounted. Here, in our eyetracking experiment, we show that people judged masked faces as less approachable and trustworthy. Further analyses showed that the attention directed towards the eye region relative to the mouth region mediated the effect on approachability, but not on trustworthiness. This is because for masked faces, with the mouth region obscured, visual attention is automatically diverted away from the mouth and towards the eye region, which is an undiagnostic cue for judging a target's approachability. Together, these findings support the view that mask-wearing inhibits the critical facial cues needed for social judgements. Supplementary Information: The online version contains supplementary material available at 10.1007/s12144-022-03705-8.
22
Psalti ISM, Pichlhöfer C. Virtual sociodrama. Zeitschrift für Psychodrama und Soziometrie 2022. [PMCID: PMC9415242] [DOI: 10.1007/s11620-022-00693-6]
Abstract
In this article of the Journal of Psychodrama and Sociometry we address the question of whether the online delivery modality of sociodrama, enforced by external conditions, lends itself to a discussion of Moreno's methodology in the context of liminality. The concepts of liminality, creativity, collective creativity (CC) and creative resilience (CR) are discussed in relation to the value of sociodrama in mining spontaneous and devised liminal spaces within the greater liminal experience of the COVID-19 pandemic. Key Morenean concepts (somatisation, concretisation) are presented as spontaneous enhancers of group collaboration, with references to their validation by neuroscientific research. The authors briefly introduce the concepts of the 'g' factor and the 'UC-ego', which are induced by video communication services, with references to their impact on collective creativity. The case study of the sociodrama network iSCAN, with its capacity for collective creative resilience (CCR), demonstrates how sociodrama emerges as the strategic collective and creative response to external changes.
23
Ghost on the Windshield: Employing a Virtual Human Character to Communicate Pedestrian Acknowledgement and Vehicle Intention. Information 2022. [DOI: 10.3390/info13090420]
Abstract
Pedestrians base their street-crossing decisions on vehicle-centric as well as driver-centric cues. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving related activities and will thus be unable to provide pedestrians with relevant communicative cues. External human–machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about a vehicle’s situational awareness and intention. In this paper, we present an eHMI concept that employs a virtual human character (VHC) to communicate pedestrian acknowledgement and vehicle intention (non-yielding; cruising; yielding). Pedestrian acknowledgement is communicated via gaze direction while vehicle intention is communicated via facial expression. The effectiveness of the proposed anthropomorphic eHMI concept was evaluated in the context of a monitor-based laboratory experiment where the participants performed a crossing intention task (self-paced, two-alternative forced choice) and their accuracy in making appropriate street-crossing decisions was measured. In each trial, they were first presented with a 3D animated sequence of a VHC (male; female) that either looked directly at them or clearly to their right while producing either an emotional (smile; angry expression; surprised expression), a conversational (nod; head shake), or a neutral (neutral expression; cheek puff) facial expression. Then, the participants were asked to imagine they were pedestrians intending to cross a one-way street at a random uncontrolled location when they saw an autonomous vehicle equipped with the eHMI approaching from the right and indicate via mouse click whether they would cross the street in front of the oncoming vehicle or not. 
An implementation of the proposed concept where non-yielding intention is communicated via the VHC producing either an angry expression, a surprised expression, or a head shake; cruising intention is communicated via the VHC puffing its cheeks; and yielding intention is communicated via the VHC nodding, was shown to be highly effective in ensuring the safety of a single pedestrian or even two co-located pedestrians without compromising traffic flow in either case. The implications for the development of intuitive, culture-transcending eHMIs that can support multiple pedestrians in parallel are discussed.
24
Jackson CD, Seymour KK. Holistic processing of gaze cues during interocular suppression. Sci Rep 2022; 12:7717. [PMID: 35546346] [PMCID: PMC9095640] [DOI: 10.1038/s41598-022-11927-w]
Abstract
Direct eye contact is preferentially processed over averted gaze and has been shown to gain privileged access to conscious awareness during interocular suppression. This advantage might be driven by local features associated with direct gaze, such as the amount of visible sclera. Alternatively, a holistic representation of gaze direction, which depends on the integration of head and eye information, might drive the effects. Resolving this question is interesting because it speaks to whether the processing of higher-level social information in the visual system, such as facial characteristics that rely on holistic processing, is dependent on conscious awareness. The Wollaston Illusion is a visual illusion that allows researchers to manipulate perceived gaze direction while keeping local eye features constant. Here we used this illusion to elucidate the driving factor facilitating the direct gaze advantage during interocular suppression. Using continuous flash suppression, we rendered Wollaston faces with direct and averted gaze (initially) invisible. These faces conveyed different gaze directions but contained identical eye regions. Our results showed clear evidence for a direct gaze advantage with Wollaston faces, indicating that holistic representations of gaze direction may drive the direct gaze advantage during interocular suppression.
Affiliation(s)
- Cooper D Jackson: School of Psychology, Western Sydney University, Penrith, NSW, Australia
- Kiley K Seymour: School of Psychology, Western Sydney University, Penrith, NSW, Australia; The MARCS Institute, Western Sydney University, Westmead, NSW, Australia; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
25
Direct eye gaze enhances the ventriloquism effect. Atten Percept Psychophys 2022; 84:2293-2302. [PMID: 35359228] [PMCID: PMC9481494] [DOI: 10.3758/s13414-022-02468-5]
Abstract
The “ventriloquism effect” describes an illusory phenomenon in which the perceived location of an auditory stimulus is pulled toward the location of a visual stimulus. Ventriloquists use this phenomenon to create the illusion that an inanimate puppet is speaking. They use the expression and suppression of their own and the puppet’s mouth movements, as well as the direction of their respective eye gaze, to maximize the illusion. While the puppet’s often exaggerated mouth movements have been demonstrated to enhance the ventriloquism effect, the contribution of direct eye gaze remains unknown. In Experiment 1, participants viewed an image of a person’s face while hearing a temporally synchronous recording of a voice originating from different locations on the azimuthal plane. The eyes of the facial stimuli were either looking directly at participants or were closed. Participants were more likely to misperceive a range of voice locations as coming from a central position when the eye gaze of the facial stimuli was directed toward them. Thus, direct gaze enhances the ventriloquism effect by attracting participants’ perception of the voice locations toward the location of the face. In an exploratory analysis, we furthermore found no evidence for an other-race effect between White and Asian listeners. In Experiment 2, we replicated the effect of direct eye gaze on the ventriloquism effect, also showing that faces per se attract perceived sound locations compared with audio-only sound localization. Showing a modulation of the ventriloquism effect by socially salient eye-gaze information thus adds to previous findings of top-down influences on this effect.
26
Liu J, Hu J, Li Q, Zhao X, Liu Y, Liu S. Atypical processing pattern of gaze cues in dynamic situations in autism spectrum disorders. Sci Rep 2022; 12:4120. [PMID: 35260744] [PMCID: PMC8904572] [DOI: 10.1038/s41598-022-08080-9]
Abstract
Psychological studies using static or abstract images have generally shown that individuals with Autism Spectrum Disorder (ASD) process social information atypically. Yet a recent study showed no difference in their use of social or non-social cues in dynamic interactive situations. To establish the cause of these inconsistent results, we added gaze cues in different directions to the chase detection paradigm to explore whether they would affect the performance of participants with ASD. Meanwhile, eye-tracking was used to investigate whether the processing patterns of gaze cues differed between individuals with ASD and typically developing (TD) individuals. In this study, unlike typical controls, participants with ASD showed no detection advantage when the direction of gaze was consistent with the direction of movement (oriented condition). The results suggest that individuals with ASD may utilize an atypical processing pattern, which makes it difficult for them to use the social information contained in oriented gaze cues in dynamic interactive situations.
Affiliation(s)
- Jia Liu: College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Jinsheng Hu: College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Qi Li: College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Xiaoning Zhao: College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Ying Liu: College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Shuqing Liu: College of Basic Medical Sciences, Dalian Medical University, Dalian, 116044, China
27
Schiano Lomoriello A, Sessa P, Doro M, Konvalinka I. Shared Attention Amplifies the Neural Processing of Emotional Faces. J Cogn Neurosci 2022; 34:917-932. [PMID: 35258571] [DOI: 10.1162/jocn_a_01841]
Abstract
Sharing an experience, without communicating, affects people's subjective perception of the experience, often by intensifying it. We investigated the neural mechanisms underlying shared attention in an EEG study where participants attended to and rated the intensity of emotional faces, simultaneously or independently. Participants performed the task in three experimental conditions: (a) alone; (b) simultaneously next to each other in pairs, without receiving feedback on the other's responses (shared without feedback); and (c) simultaneously while receiving that feedback (shared with feedback). We focused on two face-sensitive ERP components: the amplitude of the N170 was greater in the shared-with-feedback condition than in the alone condition, reflecting a top-down effect of shared attention on the structural encoding of faces, whereas the early posterior negativity (EPN) was greater in both shared-context conditions than in the alone condition, reflecting enhanced attention allocation to the processing of the emotional content of faces, modulated by the social context. Taken together, these results suggest that shared attention amplifies the neural processing of faces, regardless of the valence of facial expressions.
28
Direct Gaze Holds Attention, but Not in Individuals with Obsessive-Compulsive Disorder. Brain Sci 2022; 12:brainsci12020288. [PMID: 35204051] [PMCID: PMC8870087] [DOI: 10.3390/brainsci12020288]
Abstract
The attentional response to eye-gaze stimuli is still largely unexplored in individuals with obsessive-compulsive disorder (OCD). Here, we focused on an attentional phenomenon according to which a direct-gaze face can hold attention in a perceiver. Individuals with OCD and a group of matched healthy controls were asked to discriminate, through a speeded manual response, a peripheral target. Meanwhile, a task-irrelevant face displaying either direct gaze (in the eye-contact condition) or averted gaze (in the no-eye-contact condition) was also presented at the centre of the screen. Overall, the latencies were slower for faces with direct gaze than for faces with averted gaze; however, this difference was reliable in the healthy control group but not in the OCD group. This suggests the presence of an unusual attentional response to direct gaze in this clinical population.
29
Macinska S, Jellema T. Memory for facial expressions on the autism spectrum: The influence of gaze direction and type of expression. Autism Res 2022; 15:870-880. [PMID: 35150078] [DOI: 10.1002/aur.2682]
Abstract
Face memory research in autism has largely neglected memory for facial expressions, in favor of memory for identity. This study in three experiments examined the role of gaze direction and type of expression on memory for facial expressions in relation to the autism spectrum. In the learning phase, four combinations of facial expressions (joy/anger) and gaze direction (toward/away), displayed by 16 different identities, were presented. In a subsequent surprise test the same identities were presented displaying neutral expressions, and the expression of each identity had to be recalled. In Experiment 1, typically-developed (TD) individuals with low and high Autism Quotient (AQ) scores were tested with three repetitions of each emotion/gaze combination, which did not produce any modulations. In Experiment 2, another group of TD individuals with low and high AQ scores were tested with eight repetitions, resulting in a "happy advantage" and a "direct gaze advantage", but no interactions. In Experiment 3, individuals with high-functioning autism (HFA) and a matched TD group were tested using eight repetitions. The HFA group revealed no emotion or gaze effects, while the matched TD group showed both a happy and a direct gaze advantage, and again no interaction. The results suggest that in autistic individuals the memory for facial expressions is intact, but is not modulated by the person's expression type and gaze direction. We discuss whether anomalous implicit learning of facial cues could have contributed to these findings, its relevance for social intuition, and its possible contribution to social deficits in autism. LAY SUMMARY: It has often been found that memory for someone's face (facial identity) is less good in autism. However, it is not yet known whether memory for someone's facial expression is also less good in autism. 
In this study, the memory for expressions of joy and anger was investigated in typically-developed (TD) individuals who possessed either few or many autistic-like traits (Experiments 1 and 2), and in individuals with high-functioning autism (Experiment 3). The gaze direction was also varied (directed either toward or away from the observer). We found that TD individuals best remembered expressions of joy, and remembered expressions of both joy and anger better when the gaze was directed at them. These effects did not depend on the extent to which they possessed autistic-like traits. Autistic participants remembered the facial expression of a previously encountered person as well as TD participants did. However, in contrast to the TD participants, the memory of autistic participants was not influenced by the expression type and gaze direction of the previously encountered persons. We discuss whether this may lead to difficulties in the development of social intuition, which in turn could give rise to the difficulties in social interaction that are characteristic of autism.
|
30
|
Stuart N, Whitehouse A, Palermo R, Bothe E, Badcock N. Eye Gaze in Autism Spectrum Disorder: A Review of Neural Evidence for the Eye Avoidance Hypothesis. J Autism Dev Disord 2022; 53:1884-1905. [PMID: 35119604 PMCID: PMC10123036 DOI: 10.1007/s10803-022-05443-z] [Accepted: 01/10/2022] [Indexed: 12/27/2022]
Abstract
Reduced eye contact early in life may play a role in the developmental pathways that culminate in a diagnosis of autism spectrum disorder. However, there are contradictory theories regarding the neural mechanisms involved. According to the amygdala theory of autism, reduced eye contact results from a hypoactive amygdala that fails to flag eyes as salient. However, the eye avoidance hypothesis proposes the opposite-that amygdala hyperactivity causes eye avoidance. This review evaluated studies that measured the relationship between eye gaze and activity in the 'social brain' when viewing facial stimuli. Of the reviewed studies, eight of eleven supported the eye avoidance hypothesis. These results suggest eye avoidance may be used to reduce amygdala-related hyperarousal among people on the autism spectrum.
Affiliation(s)
- Nicole Stuart, University of Western Australia, 35 Stirling Highway, Crawley, WA, 6009, Australia
- Andrew Whitehouse, Telethon Kids Institute, Perth Children's Hospital, 15 Hospital Avenue, Nedlands, WA, 6009, Australia
- Romina Palermo, University of Western Australia, 35 Stirling Highway, Crawley, WA, 6009, Australia
- Ellen Bothe, University of Western Australia, 35 Stirling Highway, Crawley, WA, 6009, Australia
- Nicholas Badcock, University of Western Australia, 35 Stirling Highway, Crawley, WA, 6009, Australia
|
31
|
Breil C, Huestegge L, Böckler A. From eye to arrow: Attention capture by direct gaze requires more than just the eyes. Atten Percept Psychophys 2022; 84:64-75. [PMID: 34729707 PMCID: PMC8794969 DOI: 10.3758/s13414-021-02382-2] [Accepted: 09/20/2021] [Indexed: 11/26/2022]
Abstract
Human attention is strongly attracted by direct gaze and sudden onset motion. The sudden direct-gaze effect refers to the processing advantage for targets appearing on peripheral faces that suddenly establish eye contact. Here, we investigate the necessity of social information for attention capture by (sudden onset) ostensive cues. Six experiments involving 204 participants applied (1) naturalistic faces, (2) arrows, (3) schematic eyes, (4) naturalistic eyes, or schematic facial configurations (5) without or (6) with head turn to an attention-capture paradigm. Trials started with two stimuli oriented towards the observer and two stimuli pointing into the periphery. Simultaneous to target presentation, one direct stimulus changed to averted and one averted stimulus changed to direct, yielding a 2 × 2 factorial design with direction and motion cues being absent or present. We replicated the (sudden) direct-gaze effect for photographic faces, but found no corresponding effects in Experiments 2-6. Hence, a holistic and socially meaningful facial context seems vital for attention capture by direct gaze. STATEMENT OF SIGNIFICANCE: The present study highlights the significance of context information for social attention. Our findings demonstrate that the direct-gaze effect, that is, the prioritization of direct gaze over averted gaze, critically relies on the presentation of a meaningful holistic and naturalistic facial context. This pattern of results is evidence in favor of early effects of surrounding social information on attention capture by direct gaze.
Affiliation(s)
- Christina Breil, Department of Psychology III, Julius-Maximilians-University of Würzburg, Röntgenring 11, 97070, Würzburg, Germany
- Lynn Huestegge, Julius-Maximilians-University of Würzburg, Würzburg, Germany
- Anne Böckler, Leibniz University Hannover, Hannover, Germany; Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
32
|
Palmer CJ, Bracken SG, Otsuka Y, Clifford CWG. Is there a 'zone of eye contact' within the borders of the face? Cognition 2021; 220:104981. [PMID: 34920299 DOI: 10.1016/j.cognition.2021.104981] [Received: 09/10/2021] [Revised: 12/01/2021] [Accepted: 12/02/2021] [Indexed: 11/25/2022]
Abstract
Eye contact is a salient feature of everyday interactions, yet it is not obvious what the physical conditions are under which we feel that we have eye contact with another person. Here we measure the range of locations that gaze can fall on a person's face to elicit a sense of eye contact. Participants made judgements about eye contact while viewing rendered images of faces with finely-varying gaze direction at a close interpersonal distance (50 cm). The 'zone of eye contact' tends to peak between the two eyes and is often surprisingly narrower than the observer's actual eye region. Indeed, the zone tends to extend further across the face in height than in width. This shares an interesting parallel with the 'cyclopean eye' of visual perspective - our sense of looking out from a single point in space despite the physical separation of our two eyes. The distribution of eye-contact strength across the face can be modelled at the individual-subject level as a 2D Gaussian function. Perception of eye contact is more precise than the sense of having one's face looked at, which captures a wider range of gaze locations in both the horizontal and vertical dimensions, at least at the close viewing distance used in the present study. These features of eye-contact perception are very similar cross-culturally, tested here in Australian and Japanese university students. However, the shape and position of the zone of eye contact does vary depending on recent sensory experience: adaptation to faces with averted gaze causes a pronounced shift and widening of the zone across the face, and judgements about eye contact also show a positive serial dependence. Together, these results provide insight into the conditions under which eye contact is felt, with respect to face morphology, culture, and sensory context.
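The individual-subject 2D Gaussian model described above can be sketched in a few lines. This is a minimal illustration, not the study's fitted model: the function name and all parameter values below are made up, with a larger vertical than horizontal spread to mimic the reported taller-than-wide zone.

```python
import math

def eye_contact_strength(x, y, mu_x=0.0, mu_y=0.0,
                         sigma_x=1.5, sigma_y=2.5, amplitude=1.0):
    """Illustrative 2D Gaussian model of eye-contact strength.

    (x, y) is where gaze lands on the face, in cm relative to the
    centre of the zone (between the eyes). sigma_y > sigma_x encodes
    the finding that the zone extends further in height than in width.
    """
    exponent = (((x - mu_x) ** 2) / (2 * sigma_x ** 2)
                + ((y - mu_y) ** 2) / (2 * sigma_y ** 2))
    return amplitude * math.exp(-exponent)

# Strength peaks at the centre and, with these illustrative sigmas,
# falls off faster horizontally than vertically.
peak = eye_contact_strength(0.0, 0.0)
horizontal = eye_contact_strength(2.0, 0.0)
vertical = eye_contact_strength(0.0, 2.0)
```

Fitting such a surface per observer (e.g. by least squares over a grid of gaze directions) yields the zone's centre, width, and height as interpretable parameters.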
Affiliation(s)
- Colin J Palmer, School of Psychology, UNSW Sydney, New South Wales 2052, Australia
- Sophia G Bracken, School of Psychology, UNSW Sydney, New South Wales 2052, Australia
- Yumiko Otsuka, Department of Humanities and Social Sciences, Ehime University, Matsuyama, Ehime, Japan; Faculty of Science and Engineering, Waseda University, Japan
|
33
|
Kesner L, Adámek P, Grygarová D. How Neuroimaging Can Aid the Interpretation of Art. Front Hum Neurosci 2021; 15:702473. [PMID: 34594192 PMCID: PMC8476868 DOI: 10.3389/fnhum.2021.702473] [Received: 04/29/2021] [Accepted: 08/10/2021] [Indexed: 11/24/2022]
Abstract
Cognitive neuroscience of art continues to be criticized for failing to provide interesting results about art itself. In particular, results of brain imaging experiments have not yet been utilized in interpretation of particular works of art. Here we revisit a recent study in which we explored the neuronal and behavioral response to painted portraits with a direct versus an averted gaze. We then demonstrate how fMRI results can be related to the art historical interpretation of a specific painting. The evidentiary status of neuroimaging data is not different from any other extra-pictorial facts that art historians uncover in their research and relate to their account of the significance of a work of art. They are not explanatory in a strong sense, yet they provide supportive evidence for the art writer’s inference about the intended meaning of a given work. We thus argue that brain imaging can assume an important role in the interpretation of particular art works.
Affiliation(s)
- Ladislav Kesner, National Institute of Mental Health, Klecany, Czechia; Faculty of Arts, Masaryk University, Brno, Czechia
- Petr Adámek, National Institute of Mental Health, Klecany, Czechia; Third Faculty of Medicine, Charles University, Prague, Czechia
|
34
|
Hudson A, Durston AJ, McCrackin SD, Itier RJ. Emotion, Gender and Gaze Discrimination Tasks do not Differentially Impact the Neural Processing of Angry or Happy Facial Expressions-a Mass Univariate ERP Analysis. Brain Topogr 2021; 34:813-833. [PMID: 34596796 DOI: 10.1007/s10548-021-00873-x] [Received: 05/31/2021] [Accepted: 09/20/2021] [Indexed: 10/20/2022]
Abstract
Facial expression processing is a critical component of social cognition yet, whether it is influenced by task demands at the neural level remains controversial. Past ERP studies have found mixed results with classic statistical analyses, known to increase both Type I and Type II errors, which Mass Univariate statistics (MUS) control better. However, MUS open-access toolboxes can use different fundamental statistics, which may lead to inconsistent results. Here, we compared the output of two MUS toolboxes, LIMO and FMUT, on the same data recorded during the processing of angry and happy facial expressions investigated under three tasks in a within-subjects design. Both toolboxes revealed main effects of emotion during the N170 timing and main effects of task during later time points typically associated with the LPP component. Neither toolbox yielded an interaction between the two factors at the group level, nor at the individual level in LIMO, confirming that the neural processing of these two face expressions is largely independent from task demands. Behavioural data revealed main effects of task on reaction time and accuracy, but no influence of expression or an interaction between the two. Expression processing and task demands are discussed in the context of the consistencies and discrepancies between the two toolboxes and existing literature.
Affiliation(s)
- Anna Hudson, Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
- Amie J Durston, Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
- Sarah D McCrackin, Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
- Roxane J Itier, Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
|
35
|
Colombatto C, van Buren B, Scholl BJ. Hidden intentions: Visual awareness prioritizes perceived attention even without eyes or faces. Cognition 2021; 217:104901. [PMID: 34592478 DOI: 10.1016/j.cognition.2021.104901] [Received: 03/21/2021] [Revised: 08/29/2021] [Accepted: 08/31/2021] [Indexed: 01/01/2023]
Abstract
Eye contact is a powerful social signal, and it readily captures attention. Recent work has suggested that direct gaze is prioritized even unconsciously: faces rendered invisible via interocular suppression enter awareness faster when they look directly at (vs. away from) you. Such effects may be driven in a relatively low level way by the special visual properties of eyes, per se, but here we asked whether they might instead arise from the perception of a deeper property: being the focus of another agent's attention and/or intentions. We report five experiments which collectively explore whether visual awareness also prioritizes distinctly non-eye-like stimuli that nevertheless convey directedness. We first showed that directed (vs. averted) 'mouth' shapes also break through into awareness faster, after being rendered invisible by continuous flash suppression - a direct 'gaze' effect without any eyes. But such effects could still be specific to faces (if not eyes), so we next asked whether the prioritization of directed intentions would still occur even for stimuli that have no faces at all. In fact, even simple geometric shapes can be seen as intentional, as when numerous randomly scattered cones are all consistently pointing at you. And indeed, even such directed (vs. averted) cones entered awareness faster - a direct 'gaze' effect without any facial cues. Additional control experiments ruled out effects of both symmetry and response biases. We conclude that the perception of directed intentions is sufficient to boost objects into awareness, and that putative eye-contact effects might instead reflect more general phenomena of 'mind contact'.
|
36
|
Meermeier A, Jording M, Alayoubi Y, Vogel DHV, Vogeley K, Tepest R. Brief Report: Preferred Processing of Social Stimuli in Autism: A Perception Task. J Autism Dev Disord 2021; 52:3286-3293. [PMID: 34532839 PMCID: PMC9213359 DOI: 10.1007/s10803-021-05195-2] [Accepted: 07/08/2021] [Indexed: 11/05/2022]
Abstract
In this study we investigate whether persons with autism spectrum disorder (ASD) perceive social images differently than control participants (CON) in a graded perception task in which stimuli emerged from noise before dissipating into noise again. We presented either social stimuli (humans) or non-social stimuli (objects or animals). ASD were slower to recognize images during their emergence, but as fast as CON when indicating the dissipation of the image irrespective of its content. Social stimuli were recognized faster and remained discernable longer in both diagnostic groups. Thus, ASD participants show a largely intact preference for the processing of social images. An exploratory analysis of response subsets reveals subtle differences between groups that could be investigated in future studies.
Affiliation(s)
- A Meermeier, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany
- M Jording, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany; Forschungszentrum Jülich, INM3, NRW, Wilhelm-Johnen-Straße 1, 52428, Jülich, Germany
- Y Alayoubi, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany
- David H V Vogel, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany; Forschungszentrum Jülich, INM3, NRW, Wilhelm-Johnen-Straße 1, 52428, Jülich, Germany
- K Vogeley, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany; Forschungszentrum Jülich, INM3, NRW, Wilhelm-Johnen-Straße 1, 52428, Jülich, Germany
- R Tepest, University Hospital Cologne, NRW, Kerpener Strasse 62, Geb. 31, 50931, Cologne, Germany
|
37
|
Belkaid M, Kompatsiari K, De Tommaso D, Zablith I, Wykowska A. Mutual gaze with a robot affects human neural activity and delays decision-making processes. Sci Robot 2021; 6:eabc5044. [PMID: 34516747 DOI: 10.1126/scirobotics.abc5044] [Indexed: 11/02/2022]
Abstract
In most everyday life situations, the brain needs to engage not only in making decisions but also in anticipating and predicting the behavior of others. In such contexts, gaze can be highly informative about others’ intentions, goals, and upcoming decisions. Here, we investigated whether a humanoid robot’s gaze (mutual or averted) influences the way people strategically reason in a social decision-making context. Specifically, participants played a strategic game with the robot iCub while we measured their behavior and neural activity by means of electroencephalography (EEG). Participants were slower to respond when iCub established mutual gaze before their decision, relative to averted gaze. This was associated with a higher decision threshold in the drift diffusion model and accompanied by more synchronized EEG alpha activity. In addition, we found that participants reasoned about the robot’s actions in both conditions. However, those who mostly experienced the averted gaze were more likely to adopt a self-oriented strategy, and their neural activity showed higher sensitivity to outcomes. Together, these findings suggest that robot gaze acts as a strong social signal for humans, modulating response times, decision threshold, neural synchronization, as well as choice strategies and sensitivity to outcomes. This has strong implications for all contexts involving human-robot interaction, from robotics to clinical applications.
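The drift diffusion account above predicts that raising the decision threshold lengthens response times. A minimal random-walk simulation illustrates the mechanism; the parameter values are arbitrary placeholders, not the study's fitted estimates:

```python
import math
import random

def ddm_trial(drift, threshold, dt=0.005, noise=1.0):
    """One drift-diffusion trial: accumulate noisy evidence from 0 until
    it crosses +threshold or -threshold; return the decision time (s)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return t

def mean_rt(drift, threshold, n_trials=300, seed=42):
    """Mean simulated decision time over n_trials (seeded for repeatability)."""
    random.seed(seed)
    return sum(ddm_trial(drift, threshold) for _ in range(n_trials)) / n_trials

# A higher threshold (as reported after mutual gaze) slows simulated decisions.
rt_averted = mean_rt(drift=0.3, threshold=1.0)
rt_mutual = mean_rt(drift=0.3, threshold=1.6)
```

The same drift with a wider boundary separation needs more accumulated evidence before either bound is hit, which is how the model expresses slower responding without a change in choice quality.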
|
38
|
Lobmaier JS, Savic B, Baumgartner T, Knoch D. The Cone of Direct Gaze: A Stable Trait. Front Psychol 2021; 12:682395. [PMID: 34267708 PMCID: PMC8275972 DOI: 10.3389/fpsyg.2021.682395] [Received: 03/18/2021] [Accepted: 05/31/2021] [Indexed: 01/26/2023]
Abstract
Direct eye gaze is a potent stimulus in social interactions and is often associated with interest and approach orientation. Yet, there is remarkable variability in the range of gaze lines that people accept as being direct. A measure that is frequently used to quantify the range of gaze angles within which an observer assumes mutual gaze is the cone of direct gaze (CoDG). While individual differences in CoDG have often been examined, studies that systematically investigate the stability of an observer's CoDG over time are scarce. In two experiments, we measured the CoDG using an established paradigm and repeated the measurement after 5 min and/or after 1 week. We found high inter-individual variation, but high agreement within participants (ICCs between 0.649 and 0.855). We conclude that the CoDG can be seen as a rather stable measure, much like a personality trait.
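The ICCs reported above quantify test-retest agreement. A plain-Python sketch of one common variant, ICC(2,1) (two-way random effects, single measure, absolute agreement), shows how stability of a measure like the CoDG can be computed; the sample data at the end are made up for illustration:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random-effects, single-measure, absolute agreement.

    data[i][j] = score of subject i in session j (e.g. CoDG width in
    degrees, measured twice). Approaches 1 when subjects keep both their
    rank order and their level across sessions.
    """
    n = len(data)      # subjects
    k = len(data[0])   # sessions
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    sess_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)
    ss_cols = n * sum((m - grand) ** 2 for m in sess_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical CoDG widths (degrees) for four observers, two sessions each:
codg = [[30, 32], [40, 41], [25, 24], [35, 37]]
stability = icc_2_1(codg)
```

Whether ICC(2,1) or another Shrout-Fleiss variant matches the paper's exact computation is an assumption here; the structure of the calculation is the same across variants.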
Affiliation(s)
- Janek S Lobmaier, Department of Social Neuroscience and Social Psychology, Institute of Psychology, University of Bern, Bern, Switzerland
- Branislav Savic, Department of Social Neuroscience and Social Psychology, Institute of Psychology, University of Bern, Bern, Switzerland
- Thomas Baumgartner, Department of Social Neuroscience and Social Psychology, Institute of Psychology, University of Bern, Bern, Switzerland
- Daria Knoch, Department of Social Neuroscience and Social Psychology, Institute of Psychology, University of Bern, Bern, Switzerland
|
39
|
Stephenson LJ, Edwards SG, Bayliss AP. From Gaze Perception to Social Cognition: The Shared-Attention System. Perspect Psychol Sci 2021; 16:553-576. [PMID: 33567223 PMCID: PMC8114330 DOI: 10.1177/1745691620953773] [Indexed: 11/30/2022]
Abstract
When two people look at the same object in the environment and are aware of each other's attentional state, they find themselves in a shared-attention episode. This can occur through intentional or incidental signaling and, in either case, causes an exchange of information between the two parties about the environment and each other's mental states. In this article, we give an overview of what is known about the building blocks of shared attention (gaze perception and joint attention) and focus on bringing to bear new findings on the initiation of shared attention that complement knowledge about gaze following and incorporate new insights from research into the sense of agency. We also present a neurocognitive model, incorporating first-, second-, and third-order social cognitive processes (the shared-attention system, or SAS), building on previous models and approaches. The SAS model aims to encompass perceptual, cognitive, and affective processes that contribute to and follow on from the establishment of shared attention. These processes include fundamental components of social cognition such as reward, affective evaluation, agency, empathy, and theory of mind.
|
40
|
Burra N, Kerzel D. Meeting another's gaze shortens subjective time by capturing attention. Cognition 2021; 212:104734. [PMID: 33887652 DOI: 10.1016/j.cognition.2021.104734] [Received: 08/11/2020] [Revised: 04/09/2021] [Accepted: 04/10/2021] [Indexed: 01/01/2023]
Abstract
Gaze directed at the observer (direct gaze) is an important and highly salient social signal with multiple effects on cognitive processes and behavior. It is disputed whether the effect of direct gaze is caused by attentional capture or increased arousal. Time estimation may provide an answer because attentional capture predicts an underestimation of time whereas arousal predicts an overestimation. In a temporal bisection task, observers were required to classify the duration of a stimulus as short or long. Stimulus duration was selected randomly between 988 and 1479 ms. When gaze was directed at the observer, participants underestimated stimulus duration, suggesting that effects of direct gaze are caused by attentional capture, not increased arousal. Critically, this effect was limited to dynamic stimuli where gaze appeared to move toward the participant. The underestimation was present with stimuli showing a full face, but also with stimuli showing only the eye region, inverted faces and high-contrast eye-like stimuli. However, it was absent with static pictures of full faces and dynamic nonfigurative stimuli. Because the effect of direct gaze depended on motion, which is common in naturalistic scenes, more consideration needs to be given to the ecological validity of stimuli in the study of social attention.
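In a temporal bisection task, the analysis typically estimates the point of subjective equality (PSE): the duration classified as "long" on 50% of trials. Underestimation of duration shows up as fewer "long" responses, i.e. an upward shift of the PSE. A linear-interpolation sketch with made-up response proportions (not the study's data):

```python
def bisection_point(durations, p_long):
    """Interpolate the duration judged 'long' half the time (the PSE).

    durations: ascending stimulus durations in ms; p_long: proportion of
    'long' responses at each duration. Assumes p_long crosses 0.5 once.
    """
    points = list(zip(durations, p_long))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("0.5 is not bracketed by the response proportions")

durations = [988, 1110, 1233, 1356, 1479]   # ms, the range used above
p_averted = [0.10, 0.30, 0.50, 0.70, 0.90]  # hypothetical proportions
p_direct = [0.05, 0.20, 0.40, 0.65, 0.85]   # fewer 'long' responses

pse_averted = bisection_point(durations, p_averted)  # 1233.0 ms
pse_direct = bisection_point(durations, p_direct)    # higher PSE: duration underestimated
```

In practice a logistic psychometric function would be fitted rather than interpolated, but the direction of the PSE shift is read off the same way.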
Affiliation(s)
- Nicolas Burra, Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
- Dirk Kerzel, Faculté de Psychologie et des Sciences de l'Education, Université de Genève, Switzerland
|
41
|
Ramamoorthy N, Jamieson O, Imaan N, Plaisted-Grant K, Davis G. Enhanced detection of gaze toward an object: Sociocognitive influences on visual search. Psychon Bull Rev 2021; 28:494-502. [PMID: 33174087 PMCID: PMC8062376 DOI: 10.3758/s13423-020-01841-5] [Accepted: 10/26/2020] [Indexed: 11/17/2022]
Abstract
Another person's gaze direction is a rich source of social information, especially eyes gazing toward prominent or relevant objects. To guide attention to these important stimuli, visual search mechanisms may incorporate sophisticated coding of eye-gaze and its spatial relationship to other objects. Alternatively, any guidance might reflect the action of simple perceptual 'templates' tuned to visual features of socially relevant objects, or intrinsic salience of direct-gazing eyes for human vision. Previous findings that direct gaze (toward oneself) is prioritised over averted gaze do not distinguish between these accounts. To resolve this issue, we compared search for eyes gazing toward a prominent object versus gazing away, finding more efficient search for eyes 'gazing toward' the object. This effect was most clearly seen in target-present trials when gaze was task-relevant. Visual search mechanisms appear to specify gazer-object relations, a computational building-block of theory of mind.
Affiliation(s)
- Oliver Jamieson, Department of Psychology, University of Cambridge, Cambridge, UK
- Nahiyan Imaan, Department of Psychology, University of Cambridge, Cambridge, UK
- Greg Davis, Department of Psychology, University of Cambridge, Cambridge, UK
|
42
|
Kompatsiari K, Bossi F, Wykowska A. Eye contact during joint attention with a humanoid robot modulates oscillatory brain activity. Soc Cogn Affect Neurosci 2021; 16:383-392. [PMID: 33416877 PMCID: PMC7990063 DOI: 10.1093/scan/nsab001] [Received: 03/16/2020] [Revised: 11/27/2020] [Accepted: 01/08/2021] [Indexed: 11/14/2022]
Abstract
Eye contact established by a human partner has been shown to affect various cognitive processes of the receiver. However, little is known about humans' responses to eye contact established by a humanoid robot. Here, we aimed at examining humans' oscillatory brain response to eye contact with a humanoid robot. Eye contact (or lack thereof) was embedded in a gaze-cueing task and preceded the phase of gaze-related attentional orienting. In addition to examining the effect of eye contact on the recipient, we also tested its impact on gaze-cueing effects (GCEs). Results showed that participants rated eye contact as more engaging and responded with higher desynchronization of alpha-band activity in left fronto-central and central electrode clusters when the robot established eye contact with them, compared to the no-eye-contact condition. However, eye contact did not modulate GCEs. The results are interpreted in terms of the functional roles of central alpha rhythms (potentially also interpretable as mu rhythm), including joint attention and engagement in social interaction.
Affiliation(s)
- Kyveli Kompatsiari, Italian Institute of Technology, Social Cognition in Human-Robot Interaction (S4HRI), Genova 16152, Italy
- Agnieszka Wykowska, Italian Institute of Technology, Social Cognition in Human-Robot Interaction (S4HRI), Genova 16152, Italy
|
43
|
Casanova M, Clavreul A, Soulard G, Delion M, Aubin G, Ter Minassian A, Seguier R, Menei P. Immersive Virtual Reality and Ocular Tracking for Brain Mapping During Awake Surgery: Prospective Evaluation Study. J Med Internet Res 2021; 23:e24373. [PMID: 33759794 PMCID: PMC8074984 DOI: 10.2196/24373] [Received: 09/16/2020] [Revised: 01/26/2021] [Accepted: 02/16/2021] [Indexed: 01/14/2023]
Abstract
Background: Language mapping during awake brain surgery is currently a standard procedure. However, mapping is rarely performed for other cognitive functions that are important for social interaction, such as visuospatial cognition and nonverbal language, including facial expressions and eye gaze. The main reason for this omission is the lack of tasks that are fully compatible with the restrictive environment of an operating room and awake brain surgery procedures.
Objective: This study aims to evaluate the feasibility and safety of a virtual reality headset (VRH) equipped with an eye-tracking device that is able to promote an immersive visuospatial and social virtual reality (VR) experience for patients undergoing awake craniotomy.
Methods: We recruited 15 patients with brain tumors near language and/or motor areas. Language mapping was performed with a naming task, DO 80, presented on a computer tablet and then in 2D and 3D via the VRH. Patients were also immersed in a visuospatial and social VR experience.
Results: None of the patients experienced VR sickness, whereas 2 patients had an intraoperative focal seizure without consequence; there was no reason to attribute these seizures to VRH use. The patients were able to perform the VR tasks. Eye tracking was functional, enabling the medical team to analyze the patients' attention and exploration of the visual field of the VRH directly.
Conclusions: We found that it is possible and safe to immerse the patient in an interactive virtual environment during awake brain surgery, paving the way for new VR-based brain mapping procedures.
Trial Registration: ClinicalTrials.gov NCT03010943; https://clinicaltrials.gov/ct2/show/NCT03010943.
Affiliation(s)
- Morgane Casanova
- Équipe Facial Analysis Synthesis & Tracking, Institute of Electronics and Digital Technologies, CentraleSupélec, Rennes, France
- Anne Clavreul
- Département de Neurochirurgie, Centre hospitalier universitaire d'Angers, Angers, France; Centre de Recherche en Cancérologie et Immunologie Nantes Angers, Université d'Angers, Centre hospitalier universitaire d'Angers, Angers, France
- Gwénaëlle Soulard
- Département de Neurochirurgie, Centre hospitalier universitaire d'Angers, Angers, France; Centre de Recherche en Cancérologie et Immunologie Nantes Angers, Université d'Angers, Centre hospitalier universitaire d'Angers, Angers, France
- Matthieu Delion
- Département de Neurochirurgie, Centre hospitalier universitaire d'Angers, Angers, France
- Ghislaine Aubin
- Département de Neurochirurgie, Centre hospitalier universitaire d'Angers, Angers, France
- Aram Ter Minassian
- Département d'Anesthésie-Réanimation, Centre hospitalier universitaire d'Angers, Angers, France
- Renaud Seguier
- Équipe Facial Analysis Synthesis & Tracking, Institute of Electronics and Digital Technologies, CentraleSupélec, Rennes, France
- Philippe Menei
- Département de Neurochirurgie, Centre hospitalier universitaire d'Angers, Angers, France; Centre de Recherche en Cancérologie et Immunologie Nantes Angers, Université d'Angers, Centre hospitalier universitaire d'Angers, Angers, France
|
44
|
Feeling through another's eyes: Perceived gaze direction impacts ERP and behavioural measures of positive and negative affective empathy. Neuroimage 2020; 226:117605. [PMID: 33271267 DOI: 10.1016/j.neuroimage.2020.117605] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Revised: 11/06/2020] [Accepted: 11/25/2020] [Indexed: 12/19/2022] Open
Abstract
Looking at the eyes informs us about the thoughts and emotions of those around us, and impacts our own emotional state. However, it is unknown how perceiving direct and averted gaze impacts our ability to share the gazer's positive and negative emotions, abilities referred to as positive and negative affective empathy. We presented 44 participants with contextual sentences describing positive, negative and neutral events happening to other people (e.g. "Her newborn was saved/killed/fed yesterday afternoon."). These were designed to elicit positive, negative, or little to no empathy, and were followed by direct or averted gaze images of the individuals described. Participants rated their affective empathy for the individual and their own emotional valence on each trial. Event-related potentials time-locked to face onset and associated with empathy and emotional processing were recorded to investigate whether they were modulated by gaze direction. Relative to averted gaze, direct gaze was associated with increased positive valence in the positive and neutral conditions and with increased positive empathy ratings. A similar pattern was found at the neural level, using robust mass-univariate statistics. The N100, thought to reflect an automatic activation of emotion areas, was modulated by gaze in the affective empathy conditions, with opposite effect directions in positive and negative conditions. The P200, an ERP component sensitive to positive stimuli, was modulated by gaze direction only in the positive empathy condition. Positive and negative trials were processed similarly at the early N200 processing stage, but later diverged, with only negative trials modulating the EPN, P300 and LPP components. These results suggest that positive and negative affective empathy are associated with distinct time-courses, and that perceived gaze direction uniquely modulates positive empathy, highlighting the importance of studying empathy with face stimuli.
|
45
|
Attention neglects a stare-in-the-crowd: Unanticipated consequences of prediction-error coding. Cognition 2020; 207:104519. [PMID: 33228968 DOI: 10.1016/j.cognition.2020.104519] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Revised: 11/12/2020] [Accepted: 11/12/2020] [Indexed: 11/24/2022]
Abstract
Direct gaze - someone looking at you - is an important and subjectively salient stimulus. Its processing is thought to be enhanced by the brain's internalised predictions - priors - that effectively specify it as the most likely gaze direction. Current consensus holds that, befitting its presumed importance, direct gaze attracts attention more powerfully than other gazes. Conversely, some Predictive Coding (PC) models, in which exogenous attention is drawn to stimuli that violate predictions, may be construed as making the opposite claim - i.e., exogenous attention should be biased away from direct gaze (which conforms to internal predictions) and toward averted gaze (which does not). Here, using search displays with salient, 'odd-one-out' gazes, we observed an attentional bias (in rapid, initial saccades) toward averted gaze, as PC models would predict. However, this pattern emerged only when conditions highlighted gaze uniqueness. We speculate that, in our experiments, task requirements determined how prediction influenced perception.
|
46
|
When Agents Become Partners: A Review of the Role the Implicit Plays in the Interaction with Artificial Social Agents. MULTIMODAL TECHNOLOGIES AND INTERACTION 2020. [DOI: 10.3390/mti4040081] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The way we interact with computers has changed significantly over recent decades. However, interaction with computers still falls behind human-to-human interaction in terms of seamlessness, effortlessness, and satisfaction. We argue that simultaneously using verbal, nonverbal, explicit, implicit, intentional, and unintentional communication channels addresses these three aspects of the interaction process. To better understand what has been done in the field of Human Computer Interaction (HCI) in terms of incorporating the types of channels mentioned above, we reviewed the literature on implicit nonverbal interaction, with a specific emphasis on the interaction between humans on the one side and robots and virtual humans on the other. These Artificial Social Agents (ASA) are increasingly used as advanced tools for solving not only physical but also social tasks. In the literature review, we identify domains of interaction between humans and artificial social agents that have shown exponential growth over the years. The review highlights the value of incorporating implicit interaction capabilities in Human Agent Interaction (HAI), which we believe will lead to satisfying performance of human and artificial social agent teams. We conclude the article by presenting a case study of a system that harnesses subtle nonverbal, implicit interaction to increase the state of relaxation in users. This “Virtual Human Breathing Relaxation System” works on the principle of physiological synchronisation between a human and a virtual, computer-generated human. The active entrainment concept behind the relaxation system is generic and can be applied to other human agent interaction domains of implicit physiology-based interaction.
|
47
|
Nash A, Ridout N, Nash RA. Facing away from the interviewer: Evidence of little benefit to eyewitnesses' memory performance. APPLIED COGNITIVE PSYCHOLOGY 2020. [DOI: 10.1002/acp.3723] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- Alena Nash
- Department of Psychology, School of Life and Health Sciences, Aston University, UK
- Nathan Ridout
- Department of Psychology, School of Life and Health Sciences, Aston University, UK
- Robert A. Nash
- Department of Psychology, School of Life and Health Sciences, Aston University, UK
|
48
|
Conditional effects of gaze on automatic imitation: the role of autistic traits. Sci Rep 2020; 10:15512. [PMID: 32968117 PMCID: PMC7511335 DOI: 10.1038/s41598-020-72513-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 08/31/2020] [Indexed: 01/03/2023] Open
Abstract
Establishing direct gaze has been shown to enhance the tendency to automatically imitate the other person's actions, an effect that seems to be reduced in autism. Most previous studies, however, used experimental tasks that may have confounded the measurement of automatic imitation with spatial compatibility effects. This calls into question whether gaze cues regulate automatic imitation, or instead affect domain-general processes of response inhibition. Using a task that disentangled imitative from spatial compatibility effects, the current study re-examined the role of autistic traits in the modulation of automatic imitation by direct and averted gaze cues. While our results do not provide evidence for an overall significant influence of gaze on either automatic imitation or spatial compatibility, autistic traits were predictive of a reduced inhibition of imitative behaviour following averted gaze. Nonetheless, exploratory analyses suggested that the observed modulation by autistic traits may actually be better explained by the effects of concomitant social anxiety symptoms. In addition, the ethnicity of the imitated agent was identified as another potential modulator of the gaze effects on automatic imitation. Overall, our findings highlight the contextual nature of automatic imitation, but call for a reconsideration of the role of gaze on imitative behaviour.
|
49
|
Dosso JA, Huynh M, Kingstone A. I spy without my eye: Covert attention in human social interactions. Cognition 2020; 202:104388. [DOI: 10.1016/j.cognition.2020.104388] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Revised: 06/25/2020] [Accepted: 06/26/2020] [Indexed: 12/12/2022]
|
50
|
Snell-Rood E, Snell-Rood C. The developmental support hypothesis: adaptive plasticity in neural development in response to cues of social support. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190491. [PMID: 32475336 PMCID: PMC7293157 DOI: 10.1098/rstb.2019.0491] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/20/2020] [Indexed: 12/13/2022] Open
Abstract
Across mammals, cues of developmental support, such as touching, licking or attentiveness, stimulate neural development, behavioural exploration and even overall body growth. Why should such fitness-related traits be so sensitive to developmental conditions? Here, we review what we term the 'developmental support hypothesis', a potential adaptive explanation of this plasticity. Neural development can be a costly process, in terms of time, energy and exposure. However, environmental variability may sometimes compromise parental care during this costly developmental period. We propose this environmental variation has led to the evolution of adaptive plasticity of neural and behavioural development in response to cues of developmental support, where neural development is stimulated in conditions that support associated costs. When parental care is compromised, offspring grow less and adopt a more resilient and stress-responsive strategy, improving their chances of survival in difficult conditions, similar to existing ideas on the adaptive value of early-life programming of stress. The developmental support hypothesis suggests new research directions, such as testing the adaptive value of reduced neural growth and metabolism in stressful conditions, and expanding the range of potential cues animals may attend to as indicators of developmental support. Considering evolutionary and ecologically appropriate cues of social support also has implications for promoting healthy neural development in humans. This article is part of the theme issue 'Life history and learning: how childhood, caregiving and old age shape cognition and culture in humans and other animals'.
Affiliation(s)
- Emilie Snell-Rood
- Department of Ecology, Evolution and Behavior, University of Minnesota, 1479 Gortner Avenue, Gortner 140, St Paul, MN 55108, USA
- Claire Snell-Rood
- School of Public Health, University of California, Berkeley, Berkeley, CA, USA
|