1
Oliveira M, Brands J, Mashudi J, Liefooghe B, Hortensius R. Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots. Cogn Res Princ Implic 2024; 9:47. PMID: 39019988; PMCID: PMC11255178; DOI: 10.1186/s41235-024-00573-7.
Abstract
This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, specifically focusing on two key dimensions of human social evaluation: morality and competence. Furthermore, it investigates the impact of exposure to advanced Large Language Models on these perceptions. In three studies (combined N = 200), we tested the hypothesis that people find it less plausible that AI is capable of judging the morality conveyed by a behavior than of judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions than to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of a popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI or by the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality than the competence of human behavior, even as AI capabilities continued to advance.
Affiliation(s)
- Manuel Oliveira
- Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
- Justus Brands
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Judith Mashudi
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Baptist Liefooghe
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Ruud Hortensius
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
2
Jastrzab LE, Chaudhury B, Ashley SA, Koldewyn K, Cross ES. Beyond human-likeness: Socialness is more influential when attributing mental states to robots. iScience 2024; 27:110070. PMID: 38947497; PMCID: PMC11214418; DOI: 10.1016/j.isci.2024.110070.
Abstract
We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and to socially engage with it. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as important, if not more so, for mind attribution. Our findings suggest that top-down knowledge cues may be equally or even more influential than bottom-up stimulus cues when exploring mind attribution in non-human agents. While further work is required to test this hypothesis directly, these preliminary findings hold important implications for robot design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.
Affiliation(s)
- Laura E. Jastrzab
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Bishakha Chaudhury
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Sarah A. Ashley
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Division of Psychiatry, Institute of Mental Health, University College London, London, UK
- Kami Koldewyn
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Emily S. Cross
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Chair for Social Brain Sciences, Department of Humanities, Social and Political Sciences, ETHZ, Zürich, Switzerland
3
Bouquet CA, Belletier C, Monceau S, Chausse P, Croizet JC, Huguet P, Ferrand L. Joint action with human and robotic co-actors: Self-other integration is immune to the perceived humanness of the interacting partner. Q J Exp Psychol (Hove) 2024; 77:70-89. PMID: 36803063; DOI: 10.1177/17470218231158481.
Abstract
When performing a joint action task, we automatically represent the action and/or task constraints of the co-actor with whom we are interacting. Current models suggest that not only physical similarity but also abstract, conceptual features shared between self and the interacting partner play a key role in the emergence of joint action effects. Across two experiments, we investigated the influence of the perceived humanness of a robotic agent on the extent to which we integrate the action of that agent into our own action/task representation, as indexed by the Joint Simon Effect (JSE). The presence (vs. absence) of a prior verbal interaction was used to manipulate the robot's perceived humanness. In Experiment 1, using a within-participant design, we had participants perform the joint Go/No-go Simon task with two different robots. Before performing the joint task, one robot engaged in a verbal interaction with the participant and the other robot did not. In Experiment 2, we employed a between-participants design to contrast these two robot conditions as well as a human partner condition. In both experiments, a significant Simon effect emerged during joint action and its amplitude was not modulated by the humanness of the interacting partner. Experiment 2 further showed that the JSE obtained in the robot conditions did not differ from that measured in the human partner condition. These findings contradict current theories of joint action mechanisms according to which perceived self-other similarity is a crucial determinant of self-other integration in shared task settings.
Affiliation(s)
- Cédric A Bouquet
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
- CNRS, CeRCA, Université de Poitiers, Poitiers, France
- Clément Belletier
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
- Sophie Monceau
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
- Pierre Chausse
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
- Pascal Huguet
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
- Ludovic Ferrand
- CNRS, LAPSCO, Université Clermont Auvergne, Clermont-Ferrand, France
4
Konvalinka I, Kompatsiari K, Li Q. The fine-grained temporal dynamics of social timing: a window into sociality of embodied social agents. Comment on "The evolution of social timing" by L. Verga, S. A. Kotz, & A. Ravignani. Phys Life Rev 2023; 47:95-98. PMID: 37804719; DOI: 10.1016/j.plrev.2023.09.017.
Affiliation(s)
- Ivana Konvalinka
- Section for Cognitive Systems, DTU Compute, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Kyveli Kompatsiari
- Section for Cognitive Systems, DTU Compute, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Qianliang Li
- Section for Cognitive Systems, DTU Compute, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
5
Parenti L, Belkaid M, Wykowska A. Differences in Social Expectations About Robot Signals and Human Signals. Cogn Sci 2023; 47:e13393. PMID: 38133602; DOI: 10.1111/cogs.13393.
Abstract
In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, there is a new social context to which humans are increasingly exposed: social interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these new technological developments, it is of great interest to address the question of whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study aimed to examine whether robot non-verbal communicative behavior has an effect on human decision-making. To this end, we implemented a two-alternative-choice task where participants were to guess which of two presented cups was covering a ball. This game was an adaptation of a "Shell Game." A robot avatar acted as a game partner producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and the robot's feedback ("thumb up" or no feedback) after the decision. We found that participants were slower (compared to other conditions) when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback), and thus a violation of expectations. In sum, our findings show that incongruence in pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interactions.
Affiliation(s)
- Lorenzo Parenti
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
- Department of Psychology, University of Turin
- Marwen Belkaid
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
- ETIS UMR 8051, CY Cergy Paris Université, ENSEA, CNRS
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
6
Cracco E, Liepelt R, Brass M, Genschow O. Top-Down Modulation of Motor Priming by Belief About Animacy. Exp Psychol 2023; 70:355-365. PMID: 38602116; DOI: 10.1027/1618-3169/a000605.
Abstract
Research has shown that people automatically imitate others and that this tendency is stronger when the other person is a human compared with a nonhuman agent. However, a controversial question is whether automatic imitation is also modulated by whether people believe the other person is a human. Although early research supported this hypothesis, not all studies reached the same conclusion, and a recent meta-analysis found evidence neither for nor against an influence of animacy beliefs on automatic imitation. One of the most prominent studies supporting such an influence is that of Liepelt and Brass (2010), who found that automatic imitation was stronger when participants believed an ambiguous, gloved hand to be human, as opposed to wooden. In this registered report, we provide a high-powered replication of this study (N = 199). In contrast to Liepelt and Brass (2010), we did not find an effect of animacy beliefs on automatic imitation. However, we did find a correlation between automatic imitation and perceived self-other similarity. Together, these results suggest that the gloved hand procedure does not reliably influence automatic imitation, but interindividual differences in perceived similarity do.
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Belgium
- Roman Liepelt
- Department of General Psychology: Judgment, Decision Making, Action, Faculty of Psychology, FernUniversität in Hagen, Germany
- Marcel Brass
- Department of Psychology, Humboldt University of Berlin, Germany
- Oliver Genschow
- Department of Cognitive, Social- and Economic Psychology, Institute for Management and Organization, Leuphana University, Lüneburg, Germany
7
Miyamoto Y, Uchitomi H, Miyake Y. Effects of avatar shape and motion on mirror neuron system activity. Front Hum Neurosci 2023; 17:1173185. PMID: 37859767; PMCID: PMC10582709; DOI: 10.3389/fnhum.2023.1173185.
Abstract
Humanness is an important characteristic for facilitating interpersonal communication, particularly through avatars in the metaverse. In this study, we explored the mirror neuron system (MNS) as a potential neural basis for perceiving humanness in avatars. Although previous research suggests that the MNS may be influenced by human-like shape and motion, the results have been inconsistent due to the diversity and complexity of MNS investigations. Therefore, this study aims to investigate the effects of shape and motion humanness in avatars on MNS activity. Participants viewed videos of avatars with four different shapes (HumanShape, AngularShape, AbbreviatedShape, and ScatteredShape) and two types of motion (HumanMotion and LinearMotion), and their μ-wave attenuation in the electroencephalogram was evaluated. Results from a questionnaire indicated that HumanMotion was perceived as human-like, while AbbreviatedShape and ScatteredShape were seen as non-human-like. The humanness of AngularShape remained ambiguous. The MNS was activated as expected for avatars with human-like shapes and/or motions. However, for non-human-like motions, there were differences in activity trends depending on the avatar shape. Specifically, avatars with HumanShape and ScatteredShape in LinearMotion activated the MNS, but the MNS was indifferent to AngularShape and AbbreviatedShape. These findings suggest that when avatars make non-human-like motions, the MNS is activated not only by human-like appearance but also by scattered and exaggerated renderings of the human body in the avatar shape. These findings could help enhance inter-avatar communication by taking brain activity into account.
Affiliation(s)
- Yuki Miyamoto
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Hirotaka Uchitomi
- Department of Computer Science, School of Computing, Tokyo Institute of Technology, Yokohama, Japan
- Yoshihiro Miyake
- Department of Computer Science, School of Computing, Tokyo Institute of Technology, Yokohama, Japan
8
Schmitz L, Wahn B, Krüger M. Attention allocation in complementary joint action: How joint goals affect spatial orienting. Atten Percept Psychophys 2023. PMID: 37684501; DOI: 10.3758/s13414-023-02779-1.
Abstract
When acting jointly, individuals often attend and respond to the same object or spatial location in complementary ways (e.g., when passing a mug, one person grasps its handle with a precision grip; the other receives it with a whole-hand grip). At the same time, the spatial relation between individuals' actions affects attentional orienting: one is slower to attend and respond to locations another person previously acted upon than to alternate locations ("social inhibition of return", social IOR). Achieving joint goals (e.g., passing a mug), however, often requires complementary return responses to a co-actor's previous location. This raises the question of whether attentional orienting, and hence the social IOR, is affected by the (joint) goal our actions are directed at. The present study addresses this question. Participants responded to cued locations on a computer screen, taking turns with a virtual co-actor. They either pursued an individual goal or performed complementary actions with the co-actor in pursuit of a joint goal. Four experiments showed that the social IOR was significantly modulated when participant and co-actor pursued a joint goal. This suggests that attentional orienting is affected not only by the spatial but also by the social relation between two agents' actions. Our findings thus extend research on interpersonal perception-action effects, showing that the way another agent's perceived action shapes our own depends on whether we share a joint goal with that agent.
Affiliation(s)
- Laura Schmitz
- Institute of Sports Science, Leibniz University Hannover, Hannover, Germany
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Basil Wahn
- Institute of Educational Research, Ruhr University Bochum, Bochum, Germany
- Melanie Krüger
- Institute of Sports Science, Leibniz University Hannover, Hannover, Germany
9
Caruana N, Moffat R, Miguel-Blanco A, Cross ES. Perceptions of intelligence & sentience shape children's interactions with robot reading companions. Sci Rep 2023; 13:7341. PMID: 37147422; PMCID: PMC10162967; DOI: 10.1038/s41598-023-32104-7.
Abstract
The potential for robots to support education is being increasingly studied and rapidly realised. However, most research evaluating education robots has neglected to examine the fundamental features that make them more or less effective, given the needs and expectations of learners. This study explored how children's perceptions, expectations and experiences are shaped by aesthetic and functional features during interactions with different robot 'reading buddies'. We collected a range of quantitative and qualitative measures of subjective experience before and after children read a book with one of three different robots. An inductive thematic analysis revealed that robots have the potential to offer children an engaging and non-judgemental social context that promotes reading engagement. This was supported by children's perceptions of robots as being intelligent enough to read, listen and comprehend the story, particularly when they had the capacity to talk. A key challenge in the use of robots for this purpose was the unpredictable nature of robot behaviour, which remains difficult to perfectly control and time using either human operators or autonomous algorithms. Consequently, some children found the robots' responses distracting. We provide recommendations for future research seeking to position seemingly sentient and intelligent robots as an assistive tool within and beyond education settings.
Affiliation(s)
- Nathan Caruana
- School of Psychological Sciences, Macquarie University, Level 3, 16 University Ave, Sydney, NSW, 2109, Australia
- Ryssa Moffat
- School of Psychological Sciences, Macquarie University, Level 3, 16 University Ave, Sydney, NSW, 2109, Australia
- Aitor Miguel-Blanco
- School of Psychological Sciences, Macquarie University, Level 3, 16 University Ave, Sydney, NSW, 2109, Australia
- Emily S Cross
- School of Psychological Sciences, Macquarie University, Level 3, 16 University Ave, Sydney, NSW, 2109, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- MARCS Institute for Brain, Behaviour and Development, University of Western Sydney, Sydney, Australia
- Department of Humanities, Social & Political Sciences (D-GESS) and the Department of Health Sciences and Technology (D-HEST), ETH Zurich, Zurich, Switzerland
10
Hsieh TY, Chaudhury B, Cross ES. Human–Robot Cooperation in Economic Games: People Show Strong Reciprocity but Conditional Prosociality Toward Robots. Int J Soc Robot 2023; 15:791-805. DOI: 10.1007/s12369-023-00981-7.
Abstract
Understanding how people socially engage with robots is becoming increasingly important as these machines are deployed in social settings. We investigated 70 participants' situational cooperation tendencies towards a robot using prisoner's dilemma games, manipulating the incentives for cooperative decisions to be high or low. We predicted that people would cooperate more often with the robot in high-incentive conditions. We also administered subjective measures to explore the relationships between people's cooperative decisions and their social value orientation, attitudes towards robots, and anthropomorphism tendencies. Our results showed that incentive structure did not predict human cooperation overall, but did influence cooperation in early rounds, where participants cooperated significantly more in high-incentive conditions. Exploratory analyses further revealed that participants played a tit-for-tat strategy against the robot (whose decisions were random), and only behaved prosocially toward the robot when they had achieved high scores themselves. These findings highlight how people make social decisions when their individual profit is at odds with collective profit with a robot, and advance understanding of human–robot interactions in collaborative contexts.
11
Sahaï A, Caspar E, De Beir A, Grynszpan O, Pacherie E, Berberian B. Modulations of one's sense of agency during human-machine interactions: A behavioural study using a full humanoid robot. Q J Exp Psychol (Hove) 2023; 76:606-620. PMID: 35400221; DOI: 10.1177/17470218221095841.
Abstract
Although previous investigations reported a reduced sense of agency when individuals act with traditional machines, little is known about the mechanisms underpinning interactions with human-like automata. The aim of this study was twofold: (1) to investigate the effect of the machine's physical appearance on the individuals' sense of agency and (2) to explore the cognitive mechanisms underlying the individuals' sense of agency when they are engaged in a joint task. Twenty-eight participants performed a joint Simon task together with another human or an automated artificial system as a co-agent. The physical appearance of the automated artificial system was manipulated so that participants could cooperate either with a servomotor or a full humanoid robot during the joint task. Both participants' response times and temporal estimations of action-output delays (i.e., an implicit measure of agency) were collected. Results showed that participants' sense of agency for self- and other-generated actions sharply declined during interactions with the servomotor compared with the human-human interactions. Interestingly, participants' sense of agency for self- and other-generated actions was reinforced when participants interacted with the humanoid robot compared with the servomotor. These results are discussed further.
Affiliation(s)
- Aïsha Sahaï
- Département d'Etudes Cognitives, ENS, EHESS, CNRS, PSL Research University, Institut Jean-Nicod, Paris, France
- Département Traitement de l'Information et Systèmes, ONERA, The French Aerospace Lab, Salon-de-Provence, France
- Emilie Caspar
- Department of Experimental Psychology, Social & Moral Brain Lab, Ghent University, Ghent, Belgium
- Albert De Beir
- Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel (VUB), Bruxelles, Belgium
- Ouriel Grynszpan
- Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur, LIMSI-CNRS, Université Paris-Sud, Orsay, France
- Elisabeth Pacherie
- Département d'Etudes Cognitives, ENS, EHESS, CNRS, PSL Research University, Institut Jean-Nicod, Paris, France
- Bruno Berberian
- Département Traitement de l'Information et Systèmes, ONERA, The French Aerospace Lab, Salon-de-Provence, France
12
Mendez MF. A Functional and Neuroanatomical Model of Dehumanization. Cogn Behav Neurol 2023; 36:42-47. PMID: 36149395; PMCID: PMC9991937; DOI: 10.1097/wnn.0000000000000316.
Abstract
The dehumanization of others is a major scourge of mankind; however, despite its significance, physicians have little understanding of the neurobiological mechanisms for this behavior. We can learn much about dehumanization from its brain-behavior localization and its manifestations in people with brain disorders. Dehumanization, the act of denying human qualities to others, takes two major forms. Animalistic dehumanization (also called infrahumanization) results from increased inhibition of prepotent tendencies for emotional feelings and empathy for others. The mechanism may be increased activity in the inferior frontal gyrus. In contrast, mechanistic dehumanization results from a loss of perception of basic human nature and decreased mind-attribution. The mechanism may be hypofunction of a mentalization network centered in the ventromedial prefrontal cortex and adjacent subgenual anterior cingulate cortex. Whereas developmental factors may promote animalistic dehumanization, brain disorders, such as frontotemporal dementia, primarily promote mechanistic dehumanization. The consideration of these two processes as distinct, with different neurobiological origins, could help guide efforts to mitigate expression of this behavior.
Affiliation(s)
- Mario F. Mendez
- Department of Neurology, University of California Los Angeles, Los Angeles, California
- Psychiatry and Behavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California
- Neurology Service, Neurobehavior Unit, V.A. Greater Los Angeles Healthcare System, Los Angeles, California
13
Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Computers in Human Behavior Reports 2022. DOI: 10.1016/j.chbr.2022.100263.
14
Hogenhuis A, Hortensius R. Domain-specific and domain-general neural network engagement during human-robot interactions. Eur J Neurosci 2022; 56:5902-5916. PMID: 36111622; PMCID: PMC9828180; DOI: 10.1111/ejn.15823.
Abstract
To what extent does domain-general and domain-specific neural network engagement generalize across interactions with human and artificial agents? In this exploratory study, we analysed a publicly available functional MRI (fMRI) data set (n = 22) to probe the similarities and dissimilarities in neural architecture while participants conversed with another person or a robot. Incorporating trial-by-trial dynamics of the interactions, listening and speaking, we used whole-brain, region-of-interest and functional connectivity analyses to test response profiles within and across social or non-social, domain-specific and domain-general networks, that is, the person perception, theory-of-mind, object-specific, language and multiple-demand networks. Listening to a robot compared to a human resulted in higher activation in the language network, especially in areas associated with listening comprehension, and in the person perception network. No differences in activity of the theory-of-mind network were found. Results from the functional connectivity analysis showed no difference between interactions with a human or robot in within- and between-network connectivity. Together, these results suggest that although largely similar regions are activated when speaking to a human and to a robot, activity profiles during listening point to a dissociation at a lower, perceptual level, but not at a higher-order cognitive level.
Affiliation(s)
- Ann Hogenhuis
- Liberal Arts and Sciences, Utrecht University, Utrecht, The Netherlands
- Ruud Hortensius
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
15
Diana F, Kawahara M, Saccardi I, Hortensius R, Tanaka A, Kret ME. A Cross-Cultural Comparison on Implicit and Explicit Attitudes Towards Artificial Agents. Int J Soc Robot 2022; 15:1439-1455. PMID: 37654700; PMCID: PMC10465401; DOI: 10.1007/s12369-022-00917-7.
Abstract
Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and a Dutch sample, we investigated the effect of culture and robots' body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that cultural differences in attitudes towards robots appear only at the explicit, not the implicit, level.
Affiliation(s)
- Fabiola Diana
- Comparative Psychology and Affective Neuroscience Lab, Cognitive Psychology Unit, Leiden University, Wassenaarseweg 52, 2333 AK, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition (LIBC), Leiden University, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
- Misako Kawahara
- Department of Psychology, Tokyo Woman’s Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585 Japan
- Isabella Saccardi
- Department of Information and Computing Sciences, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
- Ruud Hortensius
- Department of Psychology, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Akihiro Tanaka
- Department of Psychology, Tokyo Woman’s Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585 Japan
- Mariska E. Kret
- Comparative Psychology and Affective Neuroscience Lab, Cognitive Psychology Unit, Leiden University, Wassenaarseweg 52, 2333 AK, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition (LIBC), Leiden University, Albinusdreef 2, 2333 ZA, Leiden, The Netherlands
16
Croijmans I, van Erp L, Bakker A, Cramer L, Heezen S, Van Mourik D, Weaver S, Hortensius R. No Evidence for an Effect of the Smell of Hexanal on Trust in Human-Robot Interaction. Int J Soc Robot 2022; 15:1-10. [PMID: 36128582] [PMCID: PMC9477175] [DOI: 10.1007/s12369-022-00918-6]
Abstract
The level of interpersonal trust among people is partially determined through the sense of smell. Hexanal, a molecule whose smell resembles freshly cut grass, can increase trust between people. Here, we ask whether smell can be leveraged to facilitate human-robot interaction, and test whether hexanal also increases the level of trust during collaboration with a social robot. In a preregistered double-blind, placebo-controlled study, we tested whether trial-by-trial and general trust during perceptual decision making in collaboration with a social robot is affected by hexanal across two samples (n = 46 and n = 44). It was hypothesized that unmasked hexanal and hexanal masked by eugenol, a molecule with a smell resembling clove, would increase the level of trust in human-robot interaction, compared to eugenol alone or a control condition consisting of only the neutral-smelling solvent propylene glycol. Contrasting previous findings in human interaction, no significant effect of unmasked or eugenol-masked hexanal on trust in robots was observed. These findings indicate that the conscious or nonconscious impact of smell on trust might not generalise to interactions with social robots. One explanation could be the category- and context-dependency of smell, leading to a mismatch between the natural smell of hexanal, a smell that also occurs in human sweat, and the mechanical physical or mental representation of the robot.
Affiliation(s)
- Ilja Croijmans
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Laura van Erp
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Annelie Bakker
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Lara Cramer
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Sophie Heezen
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Dana Van Mourik
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Sterre Weaver
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
- Ruud Hortensius
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
17
Polakow T, Laban G, Teodorescu A, Busemeyer JR, Gordon G. Social robot advisors: effects of robot judgmental fallacies and context. Intell Serv Robot 2022. [DOI: 10.1007/s11370-022-00438-2]
18
Hortensius R, Chaudhury B, Hoffmann M, Cross E. Tracking human interactions with a commercially-available robot over multiple days. Open Research Europe 2022; 2:97. [PMID: 37645308] [PMCID: PMC10445930] [DOI: 10.12688/openreseurope.14824.1]
Abstract
Background: As research examining human-robot interaction moves from the laboratory to the real world, studies seeking to examine how people interact with robots face the question of which robotic platform to employ to collect data in situ. To facilitate the study of a broad range of individuals, from children to clinical populations, across diverse environments, from homes to schools, a robust, reproducible, low-cost and easy-to-use robotic platform is needed. Methods: We describe how a commercially available off-the-shelf robot, Cozmo, can be used to study embodied human-robot interactions in a wide variety of settings, including the user's home. We describe the steps required to use this affordable and flexible platform for longitudinal human-robot interaction studies. First, we outline the technical specifications and requirements of this platform and accessories. We then show how log files containing detailed data on the human-robot interaction can be collected and extracted. Finally, we detail the types of information that can be retrieved from these data. Results: We present findings from a validation that mapped the behavioural repertoire of the Cozmo robot and introduce an accompanying interactive emotion classification tool to use with this robot. This tool combined with the data extracted from the log files can provide the necessary details to understand the psychological consequences of long-term interactions. Conclusions: This low-cost robotic platform has the potential to provide the field with a variety of valuable new possibilities to study the social cognitive processes underlying human-robot interactions within and beyond the research laboratory, which are user-driven and unconstrained in both time and place.
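The data-extraction workflow the abstract describes (collect session log files, parse them into events, then summarise the interaction) can be sketched in a few lines. The line format, event names, and helper functions below are invented for illustration; the abstract does not specify Cozmo's actual log schema.

```python
# Hypothetical post-processing of robot session logs: parse timestamped
# event lines and summarise them. The "timestamp event detail" format
# is an assumption for illustration, not Cozmo's real log layout.
import re
from collections import Counter

LINE_RE = re.compile(r"^(?P<ts>\d+\.\d+)\s+(?P<event>\w+)(?:\s+(?P<detail>.*))?$")

def parse_log(lines):
    """Parse hypothetical 'timestamp event detail' lines into event dicts."""
    events = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            events.append({"ts": float(m.group("ts")),
                           "event": m.group("event"),
                           "detail": m.group("detail") or ""})
    return events

def summarise(events):
    """Count each event type and compute total session duration."""
    counts = Counter(e["event"] for e in events)
    duration = events[-1]["ts"] - events[0]["ts"] if events else 0.0
    return counts, duration

# Toy session log, purely illustrative.
demo = [
    "0.00 session_start",
    "12.50 animation happy",
    "30.00 picked_up",
    "45.25 session_end",
]
counts, duration = summarise(parse_log(demo))
```

From summaries like these, per-session behaviour frequencies and interaction durations could be aggregated across days for longitudinal analysis.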
Affiliation(s)
- Ruud Hortensius
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Bishakha Chaudhury
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Martin Hoffmann
- Department of Computer Science, Humboldt University Berlin, Berlin, Germany
- Emily Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
19
Xiao C, Zhao L. Robotic Chef Versus Human Chef: The Effects of Anthropomorphism, Novel Cues, and Cooking Difficulty Level on Food Quality Prediction. Int J Soc Robot 2022; 14:1697-1710. [PMID: 35910296] [PMCID: PMC9309233] [DOI: 10.1007/s12369-022-00896-9]
Abstract
Robots have become increasingly common in hospitality and tourism, and have been especially favored under the threat of COVID-19. However, people generally do not think robots are appropriate for cooking food in hotels and restaurants, possibly because they hold low quality predictions for robot-cooked food. This study aimed to investigate the factors influencing people's quality predictions for robot-cooked food. In three experiments, participants viewed pictures of human and robotic chefs and dishes cooked by them, and then made food quality predictions and rated their perceptions of the chefs. The results showed that participants predicted the foods cooked by robotic chefs to be of above-average quality; however, they consistently held lower food quality predictions for robotic chefs than for human chefs, regardless of the dishes' cooking difficulty level, novel cues in chefs and food, or the anthropomorphism level of the robotic chefs. The results also showed that increasing the anthropomorphism of robotic chefs' appearance from low or medium to high, or enabling robotic chefs to cook dishes of high cooking difficulty, could promote food quality predictions. These results reveal the current acceptance of robot-cooked food and suggest possible ways to improve food quality predictions.
Affiliation(s)
- Chengli Xiao
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023 China
- Liqian Zhao
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023 China
20
De Castro Martins C, Chaminade T, Cavazza M. Causal Analysis of Activity in Social Brain Areas During Human-Agent Conversation. Frontiers in Neuroergonomics 2022; 3:843005. [PMID: 38235459] [PMCID: PMC10790851] [DOI: 10.3389/fnrgo.2022.843005]
Abstract
This article investigates the differences in cognitive and neural mechanisms between human-human and human-virtual agent interaction using a dataset recorded in an ecologically realistic environment. We use Convergent Cross Mapping (CCM) to investigate functional connectivity between pairs of regions involved in the framework of social cognitive neuroscience, namely the fusiform gyrus, superior temporal sulcus (STS), temporoparietal junction (TPJ), and the dorsolateral prefrontal cortex (DLPFC), taken as prefrontal asymmetry. Our approach is a compromise between investigating local activation in specific regions and investigating connectivity networks that may form part of larger networks. In addition to concording with previous studies, our results suggest that the right TPJ is one of the most reliable areas for assessing processes occurring during human-virtual agent interactions, both in a static and dynamic sense.
Affiliation(s)
- Thierry Chaminade
- Institut de Neurosciences de la Timone (INT, UMR7289), Aix-Marseille University-CNRS, Marseille, France
21
Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
22
Hsieh TY, Cross ES. People's dispositional cooperative tendencies towards robots are unaffected by robots' negative emotional displays in prisoner's dilemma games. Cogn Emot 2022; 36:995-1019. [PMID: 35389323] [DOI: 10.1080/02699931.2022.2054781]
Abstract
This study explores the impact of robots' emotional displays on people's tendency to cooperate with a robot opponent in prisoner's dilemma games. Participants played iterated prisoner's dilemma games with a non-expressive robot (as a measure of cooperative baseline), followed by an angry, and then a sad, robot. Based on the Emotion as Social Information model, we expected participants with higher cooperative predispositions to cooperate less when a robot displayed anger, and to cooperate more when the robot displayed sadness. Conversely, according to this model, participants with lower cooperative predispositions should cooperate more with an angry robot and less with a sad robot. Results from 60 participants failed to support these predictions. Only the participants' cooperative predispositions significantly predicted their cooperative tendencies during gameplay. Participants who cooperated more in the baseline measure also cooperated more with the robots displaying sadness and anger. In exploratory analyses, we found that participants who accurately recognised the robots' sad and angry displays tended to cooperate less with them overall. The study highlights the impact of personal factors in human-robot cooperation, and how these factors might surpass the influence of bottom-up emotional displays by the robots in the present experimental scenario.
Affiliation(s)
- Te-Yi Hsieh
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia
23
de Jong D, Hortensius R, Hsieh TY, Cross ES. Empathy and Schadenfreude in Human-Robot Teams. J Cogn 2021; 4:35. [PMID: 34430794] [PMCID: PMC8344963] [DOI: 10.5334/joc.177]
Abstract
Intergroup dynamics shape the ways in which we interact with other people. We feel more empathy towards ingroup members compared to outgroup members, and can even feel pleasure when an outgroup member experiences misfortune, known as schadenfreude. Here, we test the extent to which these intergroup biases emerge during interactions with robots. We measured trial-by-trial fluctuations in emotional reactivity to the outcome of a competitive reaction time game to assess both empathy and schadenfreude in arbitrary human-human and human-robot teams. Across four experiments (total n = 361), we observed a consistent empathy and schadenfreude bias driven by team membership. People felt more empathy towards ingroup members than outgroup members and more schadenfreude towards outgroup members. The existence of an intergroup bias did not depend on the nature of the agent: the same effects were observed for human-human and human-robot teams. People reported similar levels of empathy and schadenfreude towards a human and robot player. The human likeness of the robot did not consistently influence this intergroup bias. In other words, similar empathy and schadenfreude biases were observed for both humanoid and mechanoid robots. For all teams, this bias was influenced by the level of team identification; individuals who identified more with their team showed stronger intergroup empathy and schadenfreude bias. Together, we show that similar intergroup dynamics that shape our interactions with people can also shape interactions with robots. Our results highlight the importance of taking intergroup biases into account when examining social dynamics of human-robot interactions.
Affiliation(s)
- Dorina de Jong
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, UK
- Istituto Italiano di Tecnologia, Center for Translational Neurophysiology of Speech and Communication (CTNSC), Ferrara, Italy
- Università di Ferrara, Dipartimento di Scienze Biomediche e Chirurgico Specialistiche, Ferrara, Italy
- Ruud Hortensius
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, UK
- Department of Psychology, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Te-Yi Hsieh
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, UK
- Emily S. Cross
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, UK
- Department of Cognitive Science, Macquarie University, 16 University Ave, Sydney, NSW 2109, Australia
24
Hortensius R, Kent M, Darda KM, Jastrzab L, Koldewyn K, Ramsey R, Cross ES. Exploring the relationship between anthropomorphism and theory-of-mind in brain and behaviour. Hum Brain Mapp 2021; 42:4224-4241. [PMID: 34196439] [PMCID: PMC8356980] [DOI: 10.1002/hbm.25542]
Abstract
The process of understanding the minds of other people, such as their emotions and intentions, is mimicked when individuals try to understand an artificial mind. The assumption is that anthropomorphism, attributing human‐like characteristics to non‐human agents and objects, is an analogue to theory‐of‐mind, the ability to infer mental states of other people. Here, we test to what extent these two constructs formally overlap. Specifically, using a multi‐method approach, we test if and how anthropomorphism is related to theory‐of‐mind using brain (Experiment 1) and behavioural (Experiment 2) measures. In a first exploratory experiment, we examine the relationship between dispositional anthropomorphism and activity within the theory‐of‐mind brain network (n = 108). Results from a Bayesian regression analysis showed no consistent relationship between dispositional anthropomorphism and activity in regions of the theory‐of‐mind network. In a follow‐up, pre‐registered experiment, we explored the relationship between theory‐of‐mind and situational and dispositional anthropomorphism in more depth. Participants (n = 311) watched a short movie while simultaneously completing situational anthropomorphism and theory‐of‐mind ratings, as well as measures of dispositional anthropomorphism and general theory‐of‐mind. Only situational anthropomorphism predicted the ability to understand and predict the behaviour of the film's characters. No relationship between situational or dispositional anthropomorphism and general theory‐of‐mind was observed. Together, these results suggest that while the constructs of anthropomorphism and theory‐of‐mind might overlap in certain situations, they remain separate and possibly unrelated at the personality level. These findings point to a possible dissociation between brain and behavioural measures when considering the relationship between theory‐of‐mind and anthropomorphism.
Affiliation(s)
- Ruud Hortensius
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, Scotland, UK
- Michaela Kent
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, Scotland, UK
- Faculty of Neuroscience, Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Kohinoor M Darda
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, Scotland, UK
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Laura Jastrzab
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, Scotland, UK
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, Wales, UK
- Kami Koldewyn
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, Wales, UK
- Richard Ramsey
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
- Emily S Cross
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, Scotland, UK
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
25
Marchesi S, Bossi F, Ghiglino D, De Tommaso D, Wykowska A. I Am Looking for Your Mind: Pupil Dilation Predicts Individual Differences in Sensitivity to Hints of Human-Likeness in Robot Behavior. Front Robot AI 2021; 8:653537. [PMID: 34222350] [PMCID: PMC8249729] [DOI: 10.3389/frobt.2021.653537]
Abstract
The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency to adopt the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants' pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
Affiliation(s)
- Serena Marchesi
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genova, Italy
- Department of Computer Science, Faculty of Science and Engineering, Manchester University, Manchester, United Kingdom
- Francesco Bossi
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genova, Italy
- IMT School for Advanced Studies, Lucca, Italy
- Davide Ghiglino
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genova, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, Università di Genova, Genova, Italy
- Davide De Tommaso
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genova, Italy
26
Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med 2021; 4:93. [PMID: 34083689] [PMCID: PMC8175735] [DOI: 10.1038/s41746-021-00464-x]
Abstract
Natural language computer applications are becoming increasingly sophisticated and, with the recent release of Generative Pre-trained Transformer 3, they could be deployed in healthcare-related contexts that have historically comprised human-to-human interaction. However, for GPT-3 and similar applications to be considered for use in health-related contexts, possibilities and pitfalls need thoughtful exploration. In this article, we briefly introduce some opportunities and cautions that would accompany advanced Natural Language Processing applications deployed in eHealth.
27
Riddoch KA, Cross ES. "Hit the Robot on the Head With This Mallet" - Making a Case for Including More Open Questions in HRI Research. Front Robot AI 2021; 8:603510. [PMID: 33718438] [PMCID: PMC7947676] [DOI: 10.3389/frobt.2021.603510]
Abstract
Researchers continue to devise creative ways to explore the extent to which people perceive robots as social agents, as opposed to objects. One such approach involves asking participants to inflict 'harm' on a robot. Researchers are interested in the length of time between the experimenter issuing the instruction and the participant complying, and propose that relatively long periods of hesitation might reflect empathy for the robot, and perhaps even attribution of human-like qualities, such as agency and sentience. In a recent experiment, we adapted the so-called 'hesitance to hit' paradigm, in which participants were instructed to hit a humanoid robot on the head with a mallet. After standing up to do so (signaling intent to hit the robot), participants were stopped, and then took part in a semi-structured interview to probe their thoughts and feelings during the period of hesitation. Thematic analysis of the responses indicates that hesitation not only reflects perceived socialness, but also other factors including (but not limited to) concerns about cost, mallet disbelief, processing of the task instruction, and the influence of authority. The open-ended, free responses participants provided also offer rich insights into individual differences with regard to anthropomorphism, perceived power imbalances, and feelings of connection toward the robot. In addition to aiding understanding of this measurement technique and related topics regarding socialness attribution to robots, we argue that greater use of open questions can lead to exciting new research questions and interdisciplinary collaborations in the domain of social robotics.
Affiliation(s)
- Katie A. Riddoch
- Social Brain in Action Laboratory, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, United Kingdom
- Emily S. Cross
- Social Brain in Action Laboratory, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, United Kingdom
- Social Brain in Action Laboratory, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
28
Xiao C, Xu L, Sui Y, Zhou R. Do People Regard Robots as Human-Like Social Partners? Evidence From Perspective-Taking in Spatial Descriptions. Front Psychol 2021; 11:578244. [PMID: 33613351] [PMCID: PMC7892441] [DOI: 10.3389/fpsyg.2020.578244]
Abstract
Spatial communications are essential to the survival and social interaction of human beings. In science fiction and the near future, robots are supposed to be able to understand spatial language so that they can collaborate and cooperate with humans. However, it remains unknown whether human speakers regard robots as human-like social partners. In this study, human speakers described target locations to an imaginary human or robot addressee under various scenarios varying in the relative speaker–addressee cognitive burden. Speakers made equivalent perspective choices for human and robot addressees, and these choices consistently shifted according to the relative speaker–addressee cognitive burden. However, speakers' perspective choices were significantly correlated with their social skills only when the addressees were humans, not robots. These results suggest that people generally assume that robots and humans have equal capabilities in understanding spatial descriptions, but do not regard robots as human-like social partners.
Affiliation(s)
- Chengli Xiao
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Liufei Xu
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Yuqing Sui
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Renlai Zhou
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
29
Henschel A, Laban G, Cross ES. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. Current Robotics Reports 2021; 2:9-19. [PMID: 34977592] [PMCID: PMC7860159] [DOI: 10.1007/s43154-020-00035-0]
Abstract
Purpose of Review We provide an outlook on the definitions, laboratory research, and applications of social robots, with an aim to understand what makes a robot social—in the eyes of science and the general public. Recent Findings Social robots demonstrate their potential when deployed within contexts appropriate to their form and functions. Some examples include companions for the elderly and cognitively impaired individuals, robots within educational settings, and as tools to support cognitive and behavioural change interventions. Summary Science fiction has inspired us to conceive of a future with autonomous robots helping with every aspect of our daily lives, although the robots we are familiar with through film and literature remain a vision of the distant future. While there are still miles to go before robots become a regular feature within our social spaces, rapid progress in social robotics research, aided by the social sciences, is helping to move us closer to this reality.
Affiliation(s)
- Anna Henschel
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Guy Laban
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland; Department of Cognitive Science, Macquarie University, Sydney, Australia
30
Laban G, Ben-Zion Z, Cross ES. Social Robots for Supporting Post-traumatic Stress Disorder Diagnosis and Treatment. Front Psychiatry 2021; 12:752874. [PMID: 35185629] [PMCID: PMC8854768] [DOI: 10.3389/fpsyt.2021.752874]
Abstract
Post-Traumatic Stress Disorder (PTSD) is a severe psychiatric disorder with profound public health impact due to its high prevalence, chronic nature, accompanying functional impairment, and frequently occurring comorbidities. Early PTSD symptoms, often observed shortly after trauma exposure, abate with time in the majority of those who initially express them, yet leave a significant minority with chronic PTSD. While the past several decades of PTSD research have produced substantial knowledge regarding the mechanisms and consequences of this debilitating disorder, the diagnosis of and available treatments for PTSD still face significant challenges. Here, we discuss how novel therapeutic interventions involving social robots can potentially offer meaningful opportunities for overcoming some of the present challenges. As the application of social robotics-based interventions in the treatment of mental disorders is only in its infancy, it is vital that careful, well-controlled research is conducted to evaluate their efficacy, safety, and ethics. Nevertheless, we are hopeful that robotics-based solutions could advance the quality, availability, specificity and scalability of care for PTSD.
Affiliation(s)
- Guy Laban
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Ziv Ben-Zion
- Tel-Aviv Sourasky Medical Center, Sagol Brain Institute Tel-Aviv, Wohl Institute for Advanced Imaging, Tel-Aviv, Israel; Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel; Departments of Comparative Medicine and Psychiatry, Yale School of Medicine, Yale University, New Haven, CT, United States; The Clinical Neurosciences Division, VA Connecticut Healthcare System, United States Department of Veterans Affairs, National Center for Posttraumatic Stress Disorder, West Haven, CT, United States
- Emily S Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
31
Cross ES, Ramsey R. Mind Meets Machine: Towards a Cognitive Science of Human-Machine Interactions. Trends Cogn Sci 2020; 25:200-212. [PMID: 33384213] [DOI: 10.1016/j.tics.2020.11.009]
Abstract
As robots advance from the pages and screens of science fiction into our homes, hospitals, and schools, they are poised to take on increasingly social roles. Consequently, the need to understand the mechanisms supporting human-machine interactions is becoming increasingly pressing. We introduce a framework for studying the cognitive and brain mechanisms that support human-machine interactions, leveraging advances made in cognitive neuroscience to link different levels of description with relevant theory and methods. We highlight unique features that make this endeavour particularly challenging (and rewarding) for brain and behavioural scientists. Overall, the framework offers a way to study the cognitive science of human-machine interactions that respects the diversity of social machines, individuals' expectations and experiences, and the structure and function of multiple cognitive and brain systems.
Affiliation(s)
- Emily S Cross
- Department of Cognitive Science, Macquarie University, Sydney, Australia; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, UK.
- Richard Ramsey
- Department of Psychology, Macquarie University, Sydney, Australia
32
Tan H, Zhao Y, Li S, Wang W, Zhu M, Hong J, Yuan X. Relationship between social robot proactive behavior and the human perception of anthropomorphic attributes. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1831699]
Affiliation(s)
- Hao Tan
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Hunan, People’s Republic of China
- Ying Zhao
- School of Design, Hunan University, Hunan, People’s Republic of China
- Shiyan Li
- AI HCI Lab of Baidu, Baidu Online Network Technology Co., Ltd, Beijing, People’s Republic of China
- Wei Wang
- School of Industrial Design, Georgia Institute of Technology, Atlanta, GA, USA
- Ming Zhu
- School of Design, Hunan University, Hunan, People’s Republic of China
- Jie Hong
- School of Design, Hunan University, Hunan, People’s Republic of China
- Xiang Yuan
- School of Design, Hunan University, Hunan, People’s Republic of China
33
Social Cognition in the Age of Human–Robot Interaction. Trends Neurosci 2020; 43:373-384. [DOI: 10.1016/j.tins.2020.03.013]
34
Desideri L, Bonifacci P, Croati G, Dalena A, Gesualdo M, Molinario G, Gherardini A, Cesario L, Ottaviani C. The Mind in the Machine: Mind Perception Modulates Gaze Aversion During Child–Robot Interaction. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00656-7]
35
Hsieh TY, Chaudhury B, Cross ES. Human-Robot Cooperation in Prisoner Dilemma Games. In: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. [DOI: 10.1145/3371382.3378309]
Affiliation(s)
- Te-Yi Hsieh
- University of Glasgow, Glasgow, United Kingdom
- Emily S. Cross
- Macquarie University &amp; University of Glasgow, Glasgow, United Kingdom
36
Cross ES, Riddoch KA, Pratts J, Titone S, Chaudhury B, Hortensius R. A neurocognitive investigation of the impact of socializing with a robot on empathy for pain. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180034. [PMID: 30852995] [DOI: 10.1098/rstb.2018.0034]
Abstract
To what extent can humans form social relationships with robots? In the present study, we combined functional neuroimaging with a robot socializing intervention to probe the flexibility of empathy, a core component of social relationships, towards robots. Twenty-six individuals underwent identical fMRI sessions before and after being issued a social robot to take home and interact with over the course of a week. While undergoing fMRI, participants observed videos of a human actor or a robot experiencing pain or pleasure in response to electrical stimulation. Repetition suppression of activity in the pain network, a collection of brain regions associated with empathy and emotional responding, was measured to test whether socializing with a social robot leads to greater overlap in neural mechanisms when observing human and robotic agents experiencing pain or pleasure. In contrast to our hypothesis, functional region-of-interest analyses revealed no change in neural overlap for agents after the socializing intervention. Similarly, no increase in activation when observing a robot experiencing pain emerged post-socializing. Whole-brain analysis showed that, before the socializing intervention, superior parietal and early visual regions are sensitive to novel agents, while after socializing, medial temporal regions show agent sensitivity. A region of the inferior parietal lobule was sensitive to novel emotions, but only during the pre-socializing scan session. Together, these findings suggest that a short socialization intervention with a social robot does not lead to discernible differences in empathy towards the robot, as measured by behavioural or brain responses. We discuss the extent to which long-term socialization with robots might shape social cognitive processes and ultimately our relationships with these machines. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Emily S Cross
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow G12 8QB, UK; Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor LL57 2AS, UK
- Katie A Riddoch
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor LL57 2AS, UK
- Jaydan Pratts
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor LL57 2AS, UK
- Simon Titone
- Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor LL57 2AS, UK
- Bishakha Chaudhury
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Ruud Hortensius
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow G12 8QB, UK
37
Skewes J, Amodio DM, Seibt J. Social robotics and the modulation of social perception and bias. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180037. [PMID: 30853001] [DOI: 10.1098/rstb.2018.0037]
Abstract
The field of social robotics offers an unprecedented opportunity to probe the process of impression formation and the effects of identity-based stereotypes (e.g. about gender or race) on social judgements and interactions. We present the concept of fair proxy communication-a form of robot-mediated communication that proceeds in the absence of potentially biasing identity cues-and describe how this application of social robotics may be used to illuminate implicit bias in social cognition and inform novel interventions to reduce bias. We discuss key questions and challenges for the use of robots in research on the social cognition of bias and offer some practical recommendations. We conclude by discussing boundary conditions of this new form of interaction and by raising some ethical concerns about the inclusion of social robots in psychological research and interventions. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Joshua Skewes
- Department for Linguistics, Cognitive Science and Semiotics, and Interacting Minds Center, Aarhus University, Denmark
- David M Amodio
- Department of Psychology and Neural Science, New York University, New York, NY, USA; Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Johanna Seibt
- Research Unit for Robophilosophy, School of Culture and Society, Aarhus University, Denmark
38
Vanman EJ, Kappas A. “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Soc Personal Psychol Compass 2019. [DOI: 10.1111/spc3.12489]
39
Cross ES, Hortensius R, Wykowska A. From social brains to social robots: applying neurocognitive insights to human-robot interaction. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180024. [PMID: 30852997] [PMCID: PMC6452245] [DOI: 10.1098/rstb.2018.0024]
Abstract
Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with the questions concerning how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges and future directions in neuroscience- and psychology-inspired human-robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including social and cognitive neurosciences, psychology, artificial intelligence and robotics, the contributions showcase ways in which research from disciplines spanning biological sciences, social sciences and technology deepen our understanding of the potential and limits of robotic agents in human social life. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Emily S. Cross
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - Ruud Hortensius
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
40
Balas B, Auen A. Perceiving Animacy in Own- and Other-Species Faces. Front Psychol 2019; 10:29. [PMID: 30728795] [PMCID: PMC6351462] [DOI: 10.3389/fpsyg.2019.00029]
Abstract
Though artificial faces of various kinds are rapidly becoming more and more life-like due to advances in graphics technology (Suwajanakorn et al., 2015; Booth et al., 2017), observers can typically distinguish real faces from artificial faces. In general, face recognition is tuned to experience such that expert-level processing is most evident for faces that we encounter frequently in our visual world, but the extent to which face animacy perception is also tuned to in-group vs. out-group categories remains an open question. In the current study, we chose to examine how the perception of animacy in human faces and dog faces was affected by face inversion and the duration of face images presented to adult observers. We hypothesized that the impact of these manipulations may differ as a function of species category, indicating that face animacy perception is tuned for in-group faces. Briefly, we found evidence of such a differential impact, suggesting either that distinct mechanisms are used to evaluate the "life" in a face for in-group and out-group faces, or that the efficiency of a common mechanism varies substantially as a function of visual expertise.
Affiliation(s)
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, United States
- Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States
- Amanda Auen
- Department of Psychology, North Dakota State University, Fargo, ND, United States