1. Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. PMID: 38605842; PMCID: PMC11008604; DOI: 10.3389/fpsyg.2024.1322781.
Abstract
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because interacting with such AI activates schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part, we detail how the mechanism of schema activation allows consciousness perception to be tested as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, and thereby activating congruent mind schemas during interaction, drives behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
2. Uchida T, Minato T, Ishiguro H. Opinion attribution improves motivation to exchange subjective opinions with humanoid robots. Front Robot AI 2024; 11:1175879. PMID: 38440774; PMCID: PMC10909954; DOI: 10.3389/frobt.2024.1175879.
Abstract
In recent years, the development of robots that can engage in non-task-oriented dialogue with people, such as chat, has received increasing attention. This study aims to clarify the factors that improve users' willingness to talk with robots in non-task-oriented dialogue. A previous study reported that exchanging subjective opinions makes such dialogue enjoyable and engaging. In some cases, however, a robot's subjective opinion is not believable: the user does not believe the robot can hold that opinion and therefore cannot attribute it to the robot. For example, if a robot says that alcohol tastes good, it may be difficult to imagine the robot holding such an opinion, and the user's motivation to exchange opinions may decrease. We hypothesized that, regardless of the type of robot, opinion attribution affects users' motivation to exchange opinions with humanoid robots, and we examined this effect by preparing various opinions for two kinds of humanoid robots. The experimental results suggest that not only users' interest in the topic but also their attribution of the subjective opinions to the robots influences their motivation to exchange opinions. A further analysis revealed that the android significantly increased motivation when users were interested in the topic but did not attribute the opinions, whereas the small robot significantly increased motivation when users were not interested but did attribute the opinions. In situations involving opinions that cannot be attributed to humanoid robots, the finding that androids still motivate users who are interested in the topic indicates the usefulness of androids.
Affiliation(s)
- Takahisa Uchida
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Takashi Minato
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- RIKEN, Kyoto, Japan
- Hiroshi Ishiguro
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
3. Abubshait A, Weis PP, Momen A, Wiese E. Perceptual discrimination in the face perception of robots is attenuated compared to humans. Sci Rep 2023; 13:16708. PMID: 37794045; PMCID: PMC10550918; DOI: 10.1038/s41598-023-42510-6.
Abstract
When interacting with groups of robots, we tend to perceive them as a homogeneous group in which all members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., to individuate them), and it can undermine trust and performance in human-robot teams. One way to overcome this issue is to design robots that can be individuated, so that each team member can be assigned tasks based on its actual skills. In two experiments, we examine whether humans can effectively individuate robots. Experiment 1 (n = 225) investigates how individuation performance for robot stimuli compares to that for human stimuli belonging to either a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better than robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli are primarily due to reduced perceptual experience with robot stimuli or to motivational aspects (i.e., robots as a potential social outgroup) should be examined in future studies.
Affiliation(s)
- Abdulaziz Abubshait
- Italian Institute of Technology, Genoa, Italy
- George Mason University, Fairfax, VA, USA
- Patrick P Weis
- George Mason University, Fairfax, VA, USA
- Julius Maximilians University, Würzburg, Germany
- Ali Momen
- George Mason University, Fairfax, VA, USA
- Air Force Academy, Colorado Springs, CO, USA
- Eva Wiese
- George Mason University, Fairfax, VA, USA
- Berlin Institute of Technology, Berlin, Germany
4. Morillo-Mendez L, Stower R, Sleat A, Schreiter T, Leite I, Mozos OM, Schrooten MGS. Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution. Front Psychol 2023; 14:1215771. PMID: 37519379; PMCID: PMC10374202; DOI: 10.3389/fpsyg.2023.1215771.
Abstract
Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. At the baseline, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than targets at the non-gazed screen (i.e., gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.
Affiliation(s)
- Rebecca Stower
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
- Alex Sleat
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
- Tim Schreiter
- Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden
- Iolanda Leite
- Division of Robotics, Perception and Learning, KTH, Stockholm, Sweden
5. Esterwood C, Robert LP. The theory of mind and human-robot trust repair. Sci Rep 2023; 13:9877. PMID: 37337033; DOI: 10.1038/s41598-023-37032-0.
Abstract
Nothing is perfect, and robots can make as many mistakes as any human, which can lead to a decrease in trust in them. However, it is possible for robots to repair a human's trust after they have made mistakes through various trust repair strategies such as apologies, denials, and promises. To date, the efficacy of these trust repairs in the human-robot interaction literature has been mixed. One reason for this might be that humans differ in their perceptions of a robot's mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion, while other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human-robot interaction. To investigate this, we conducted a study involving 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform where the robot and participant worked in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and employed either a promise, denial, or apology after each mistake. Participants rated their trust in the robot before and after each mistake. Results indicated that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.
Affiliation(s)
- Connor Esterwood
- School of Information, University of Michigan, Ann Arbor, MI 48109, USA
- Lionel P Robert
- School of Information, University of Michigan, Ann Arbor, MI 48109, USA
- Robotics Department, University of Michigan, Ann Arbor, MI 48109, USA
6. Kim J, Im I. Anthropomorphic response: Understanding interactions between humans and artificial intelligence agents. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2022.107512.
7. Xu K, Chen M, You L. The Hitchhiker's Guide to a Credible and Socially Present Robot: Two Meta-Analyses of the Power of Social Cues in Human–Robot Interaction. Int J Soc Robot 2023. DOI: 10.1007/s12369-022-00961-3.
8. Miraglia L, Peretti G, Manzi F, Di Dio C, Massaro D, Marchetti A. Development and validation of the Attribution of Mental States Questionnaire (AMS-Q): A reference tool for assessing anthropomorphism. Front Psychol 2023; 14:999921. PMID: 36895742; PMCID: PMC9989770; DOI: 10.3389/fpsyg.2023.999921.
Abstract
Attributing mental states to others, such as feelings, beliefs, goals, desires, and attitudes, is an important interpersonal ability, necessary for adaptive relationships, which underlies the ability to mentalize. To evaluate the attribution of mental and sensory states, a new 23-item measure, the Attribution of Mental States Questionnaire (AMS-Q), has been developed. The present work investigated the dimensionality of the AMS-Q and its psychometric properties in two studies. Study 1 focused on the development of the questionnaire and its factorial structure in a sample of Italian adults (N = 378). Study 2 aimed to confirm the findings in a new sample (N = 271). Besides the AMS-Q, Study 2 included assessments of Theory of Mind (ToM), mentalization, and alexithymia. A Principal Components Analysis (PCA) and a Parallel Analysis (PA) of the data from Study 1 yielded three factors assessing mental states with positive or neutral valence (AMS-NP), mental states with negative valence (AMS-N), and sensory states (AMS-S). These showed satisfactory reliability indexes, and the whole scale's internal consistency was excellent. Multigroup Confirmatory Factor Analysis (CFA) further confirmed the three-factor structure. The AMS-Q subscales also showed a consistent pattern of correlation with associated constructs in the theoretically predicted ways, relating positively to ToM and mentalization and negatively to alexithymia. The questionnaire is thus easy to administer and sensitive for assessing the attribution of mental and sensory states to humans.
The AMS-Q can also be administered with stimuli of nonhuman agents (e.g., animals, inanimate things, and even God). This allows the level of mental anthropomorphization of other agents to be assessed using the human as a term of comparison, providing important hints about whether nonhuman entities are perceived as more or less mentalistic than human beings, and identifying what factors are required for the attribution of human mental traits to nonhuman agents, further helping to delineate the perception of others' minds.
Affiliation(s)
- Laura Miraglia
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Giulia Peretti
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Federico Manzi
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Research Unit on Robopsychology in the Lifespan, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cinzia Di Dio
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Research Unit on Robopsychology in the Lifespan, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Davide Massaro
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Research Unit on Robopsychology in the Lifespan, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Antonella Marchetti
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Research Unit on Robopsychology in the Lifespan, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
9. Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Computers in Human Behavior Reports 2022. DOI: 10.1016/j.chbr.2022.100263.
10. Being watched by a humanoid robot and a human: Effects on affect-related psychophysiological responses. Biol Psychol 2022; 175:108451. DOI: 10.1016/j.biopsycho.2022.108451.
11. Momen A, Hugenberg K, Wiese E. Robots engage face-processing less strongly than humans. Frontiers in Neuroergonomics 2022; 3:959578. PMID: 38235446; PMCID: PMC10790943; DOI: 10.3389/fnrgo.2022.959578.
Abstract
Robot faces often differ from human faces in terms of their facial features (e.g., lack of eyebrows) and the spatial relationships between these features (e.g., disproportionately large eyes), which can influence the degree to which social brain areas [i.e., the Fusiform Face Area (FFA) and Superior Temporal Sulcus (STS); Haxby et al., 2000] process them as social individuals that can be discriminated from other agents in terms of their perceptual features and person attributes. Of interest in this work is whether robot stimuli are processed in a less social manner than human stimuli. If true, this could undermine human-robot interactions (HRIs), because human partners could fail to perceive robots as individual agents with unique features and capabilities (a phenomenon known as outgroup homogeneity), potentially leading to miscalibration of trust and errors in the allocation of task responsibilities. In this experiment, we use the face inversion paradigm (as a proxy for neural activation in social brain areas) to examine whether face processing differs between human and robot face stimuli: if robot faces are perceived as less face-like than human faces, the difference in recognition performance for faces presented upright compared to upside down (i.e., the inversion effect) should be less pronounced for robot faces than for human faces. The results demonstrate a reduced face inversion effect for robot versus human faces, supporting the hypothesis that robot faces are processed in a less face-like manner. This suggests that roboticists should attend carefully to the design of robot faces and evaluate them based on their ability to engage face-typical processes. Specific design recommendations on how to accomplish this goal are provided in the discussion.
Affiliation(s)
- Ali Momen
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Kurt Hugenberg
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States
- Eva Wiese
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Institute for Psychology and Ergonomics, Technical University of Berlin, Berlin, Germany
12. Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107340.
13. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3526112.
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies conducted so far exhibit considerable diversity in how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
14. Dang J, Liu L. Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107300.
15. Edwards A, Edwards C. Does the Correspondence Bias Apply to Social Robots?: Dispositional and Situational Attributions of Human Versus Robot Behavior. Front Robot AI 2022; 8:788242. PMID: 35059443; PMCID: PMC8764179; DOI: 10.3389/frobt.2021.788242.
Abstract
Increasingly, people interact with embodied machine communicators and are challenged to understand their natures and behaviors. The Fundamental Attribution Error (FAE, sometimes referred to as the correspondence bias) is the tendency for individuals to over-emphasize personality-based or dispositional explanations for other people's behavior while under-emphasizing situational explanations. This effect has been thoroughly examined with humans, but do people make the same causal inferences when interpreting the actions of a robot? Compared to people, social robots are less autonomous and agentic because their behavior is wholly determined by humans in the loop, programming, and design choices. Nonetheless, people do assign robots agency, intentionality, personality, and blame. Results of an experiment showed that participants made correspondent inferences when evaluating both human and robot speakers, attributing their behavior to underlying attitudes even when it was clearly coerced. However, they committed a stronger correspondence bias in the case of the robot, an effect driven by the greater dispositional culpability assigned to robots committing unpopular behavior, and they were more confident in their attitudinal judgments of robots than of humans. Results demonstrated some differences in the global impressions of humans and robots based on behavior valence and choice. Judges formed more generous impressions of the robot agent when its unpopular behavior was coerced rather than chosen, a tendency not displayed when forming impressions of the human agent. Implications of attributing robot behavior to disposition, or of conflating robot actors with their actions, are addressed.
16. Competing with or Against Cozmo, the Robot: Influence of Interaction Context and Outcome on Mind Perception. Int J Soc Robot 2021. DOI: 10.1007/s12369-020-00668-3.
17. Ullrich D, Butz A, Diefenbach S. The Eternal Robot: Anchoring Effects in Humans' Mental Models of Robots and Their Self. Front Robot AI 2021; 7:546724. PMID: 33501314; PMCID: PMC7806034; DOI: 10.3389/frobt.2020.546724.
Abstract
Current robot designs often reflect an anthropomorphic approach, apparently aiming to convince users through an ideal system that is most similar or even on par with humans. The present paper challenges human-likeness as a design goal and questions whether simulating human appearance and performance adequately fits how humans think about robots in a conceptual sense, i.e., humans' mental models of robots and their self. Independent of technical possibilities and limitations, our paper explores robots' attributed potential to become human-like by means of a thought experiment. Four hundred eighty-one participants were confronted with fictional transitions from human to robot and from robot to human, consisting of 20 subsequent steps. In each step, one part or area of the human (e.g., brain, legs) was replaced with robotic parts providing equal functionalities, and vice versa. After each step, the participants rated the remaining humanness and remaining self of the depicted entity on a scale from 0 to 100%. The starting category (e.g., human or robot) served as an anchor for all subsequent judgments and could hardly be overcome: even after all body parts had been exchanged, a former robot was not perceived as totally human-like, and a former human was not perceived as totally robot-like. Moreover, humanness appeared to be a more sensitive and more easily denied attribute than robotness; after the objectively same transition and exchange of the same parts, the former human was attributed less remaining humanness and self than the former robot was attributed remaining robotness and self. The participants' qualitative statements about why the robot had not become human-like often concerned the (unnatural) process of production, or simply argued that no matter how many parts are exchanged, the individual keeps its original entity.
Based on these findings, we suggest that instead of designing maximally human-like robots in order to reach acceptance, it might be more promising to understand robots as their own "species" and underline their specific characteristics and benefits. Limitations of the present study and implications for future HRI research and practice are discussed.
Affiliation(s)
- Daniel Ullrich
- Department of Computer Science, Ludwig-Maximilians-University Munich, Munich, Germany
- Andreas Butz
- Department of Computer Science, Ludwig-Maximilians-University Munich, Munich, Germany
- Sarah Diefenbach
- Department of Psychology, Ludwig-Maximilians-University Munich, Munich, Germany
18. Kiilavuori H, Sariola V, Peltola MJ, Hietanen JK. Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot. Biol Psychol 2020; 158:107989. PMID: 33217486; DOI: 10.1016/j.biopsycho.2020.107989.
Abstract
Previous research has shown that eye contact in human-human interaction elicits increased affective and attention-related psychophysiological responses. In the present study, we investigated whether eye contact with a humanoid robot would elicit these responses. Participants faced a humanoid robot (NAO) or a human partner, both physically present and looking at or away from the participant. The results showed that in both the human-robot and human-human conditions, eye contact versus averted gaze elicited greater skin conductance responses indexing autonomic arousal, greater facial zygomatic muscle responses (and smaller corrugator responses) associated with positive affect, and greater heart deceleration responses indexing attention allocation. For the skin conductance and zygomatic responses, the human model's gaze direction had a greater effect on the responses than the robot's gaze direction. In conclusion, eye contact elicits automatic affective and attentional reactions both when shared with a humanoid robot and when shared with another human.
Affiliation(s)
- Helena Kiilavuori
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014 Tampere University, Finland
- Veikko Sariola
- Faculty of Medicine and Health Technology, Korkeakoulunkatu 3, FI-33720 Tampere University, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014 Tampere University, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Psychology, Faculty of Social Sciences, FI-33014 Tampere University, Finland
19. Tulk Jesso S, Kennedy WG, Wiese E. Behavioral Cues of Humanness in Complex Environments: How People Engage With Human and Artificially Intelligent Agents in a Multiplayer Videogame. Front Robot AI 2020; 7:531805. PMID: 33501306; PMCID: PMC7806100; DOI: 10.3389/frobt.2020.531805.
Abstract
The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people are no longer able to detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to conduct research into how sensitive humans are to behaviors produced by humans compared to those produced by AI. This paper presents results from a behavioral Turing Test, in which participants interacted with a human, or with a simple or "social" AI, within a complex videogame environment. Participants (66 total) played an open-world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min, after which they indicated their beliefs about the agent, including three Likert measures of how much they trusted and liked the co-player and the extent to which they perceived it as a "real person," followed by an interview about their overall perception and the cues they used to determine humanness. T-tests, analysis of variance, and Tukey's HSD were used to analyze quantitative data, and Cohen's kappa and χ2 were used to analyze interview data. Our results suggest that it was difficult for participants to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data, and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.
Affiliation(s)
- Stephanie Tulk Jesso, Social and Cognitive Interactions Lab, Department of Psychology, George Mason University, Fairfax, VA, United States
- William G. Kennedy, Center for Social Complexity, Department of Computational Data Science, College of Science, George Mason University, Fairfax, VA, United States
- Eva Wiese, Social and Cognitive Interactions Lab, Department of Psychology, George Mason University, Fairfax, VA, United States
20
Employee norm-violations in the service encounter during the corona pandemic and their impact on customer satisfaction. Journal of Retailing and Consumer Services 2020; 57:102209. [PMCID: PMC7335490 DOI: 10.1016/j.jretconser.2020.102209]
Abstract
In this study, an experiment was used to examine the effects of employee norm-violations in the service encounter with respect to what was considered appropriate behavior (e.g., social distancing) during the 2020 corona pandemic. The participants were exposed to a grocery store employee whose behavior was manipulated (norm-violating vs. norm-confirming). Norm-violating behavior resulted in lower perceived employee warmth, lower perceived employee competence, higher disgust, and more dehumanization of the employee. These responses mediated the impact of employee behavior on customer satisfaction, such that satisfaction was attenuated when norms were violated. The same mediators, however, typically also instill a hostile, avoidance-seeking mindset in those who are subject to norm-violations, which is likely to create problems when transgressors are to be persuaded to change their behavior.
21
Abubshait A, Momen A, Wiese E. Pre-exposure to Ambiguous Faces Modulates Top-Down Control of Attentional Orienting to Counterpredictive Gaze Cues. Front Psychol 2020; 11:2234. [PMID: 33013584 PMCID: PMC7509110 DOI: 10.3389/fpsyg.2020.02234]
Abstract
Understanding and reacting to others' nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interactions, as it is important for processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals' social relevance. For example, when a gazer is believed to be an entity "with a mind" (i.e., mind perception), people exert more top-down control on attentional orienting. Although increasing an agent's physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer's physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would require cognitive resources that could otherwise be used to top-down control attentional orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to the agent's categorical ambiguity (via exposure) would restore top-down control of attentional orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control on attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated by pre-exposure to the stimuli prior to the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction (HRI) but has a diminishing-returns effect on social attention when appearance is categorically ambiguous, due to drainage of cognitive resources and impairment of top-down control.
Affiliation(s)
- Ali Momen, Department of Psychology, George Mason University, Fairfax, VA, United States
- Eva Wiese, Department of Psychology, George Mason University, Fairfax, VA, United States
22
Manzi F, Peretti G, Di Dio C, Cangelosi A, Itakura S, Kanda T, Ishiguro H, Massaro D, Marchetti A. A Robot Is Not Worth Another: Exploring Children's Mental State Attribution to Different Humanoid Robots. Front Psychol 2020; 11:2011. [PMID: 33101099 PMCID: PMC7554578 DOI: 10.3389/fpsyg.2020.02011]
Abstract
Recent technological developments in robotics have driven the design and production of different humanoid robots. Several studies have highlighted that the presence of human-like physical features can lead both adults and children to anthropomorphize robots. In the present study, we aimed to compare the attribution of mental states to two humanoid robots, NAO and Robovie, which differ in their degree of anthropomorphism. Children aged 5, 7, and 9 years were asked to attribute mental states to the NAO robot, which presents more human-like characteristics, and to the Robovie robot, whose physical features look more mechanical. The results on mental state attribution as a function of children's age and robot type showed that 5-year-olds have a greater tendency to anthropomorphize robots than older children, regardless of the type of robot. Moreover, the findings revealed that, although children aged 7 and 9 years attributed a certain degree of human-like mental features to both robots, they attributed greater mental states to NAO than to Robovie, in contrast to younger children. These results generally show that children tend to anthropomorphize humanoid robots that also present some mechanical characteristics, such as Robovie. Nevertheless, the age-related differences show that robots should be endowed with physical characteristics closely resembling human ones to increase older children's perception of human likeness. These findings have important implications for the design of robots, which needs to consider the target user's age, as well as for the generalizability of research findings that are commonly associated with the use of specific types of robots.
Affiliation(s)
- Federico Manzi, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Giulia Peretti, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cinzia Di Dio, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Angelo Cangelosi, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Shoji Itakura, Centre for Baby Science, Doshisha University, Kyoto, Japan
- Takayuki Kanda, Human-Robot Interaction Laboratory, Graduate School of Informatics, Kyoto University, Kyoto, Japan; Advanced Telecommunications Research Institute International, IRC/HIL, Keihanna Science City, Kyoto, Japan
- Hiroshi Ishiguro, Advanced Telecommunications Research Institute International, IRC/HIL, Keihanna Science City, Kyoto, Japan; Department of Systems Innovation, Osaka University, Toyonaka, Japan
- Davide Massaro, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Antonella Marchetti, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
23
Wang S, Cheong YF, Dilks DD, Rochat P. The Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy Perception. Perception 2020; 49:1069-1089. [PMID: 32903162 DOI: 10.1177/0301006620952611]
Abstract
Human replicas that highly resemble people tend to elicit eerie sensations, a phenomenon known as the uncanny valley. To test whether this effect is attributable to people's ascription of mind to androids (i.e., the mind perception hypothesis) or subtraction of mind from androids (i.e., the dehumanization hypothesis), in Study 1 we examined the effect of face exposure time on the perceived animacy of human, android, and mechanical-looking robot faces. In Study 2, in addition to exposure time, we also manipulated the spatial frequency of faces, preserving either their fine (high spatial frequency) or coarse (low spatial frequency) information, to examine its effect on the faces' perceived animacy and uncanniness. We found that perceived animacy decreased as a function of exposure time only for android faces, not for human or mechanical-looking robot faces (Study 1). In addition, the manipulation of spatial frequency eliminated the decrease in android faces' perceived animacy and reduced their perceived uncanniness (Study 2). These findings link perceived uncanniness in androids to the temporal dynamics of face animacy perception. We discuss these findings in relation to the dehumanization hypothesis and alternative hypotheses of the uncanny valley phenomenon.
24
Abstract
As the field of social robotics grows and expands into areas of research and application in which robots can provide assistance and companionship for humans, this paper offers a different perspective on a role that social robots can also play: informing us about the flexibility of human mechanisms of social cognition. The paper focuses on studies in which robots have been used as a new type of "stimulus" in psychological experiments to examine whether mechanisms of social cognition similar to those elicited in interaction with another human would be activated in interaction with a robot. Analysing studies in which a direct comparison has been made between a robot and a human agent, the paper examines whether, for robot agents, the brain re-uses the same mechanisms that have been developed for interaction with other humans in terms of perception, action representation, attention, and higher-order social cognition. Based on this analysis, the paper concludes that human socio-cognitive mechanisms, in adult brains, are sufficiently flexible to be re-used for robotic agents, at least for those that bear some resemblance to humans.
25
Brandi ML, Kaifel D, Lahnakoski JM, Schilbach L. A naturalistic paradigm simulating gaze-based social interactions for the investigation of social agency. Behav Res Methods 2020; 52:1044-1055. [PMID: 31712998 PMCID: PMC7280341 DOI: 10.3758/s13428-019-01299-x]
Abstract
Sense of agency describes the experience of being the cause of one's own actions and the resulting effects. In a social interaction, one's actions may also have a perceivable effect on the actions of others. In this article, we refer to the experience of being responsible for the behavior of others as social agency, which has important implications for the success or failure of social interactions. Gaze-contingent eye-tracking paradigms provide a useful tool to analyze social agency in an experimentally controlled manner, but current methods are lacking in terms of ecological validity. We applied this technique in a novel task using video stimuli of real gaze behavior to simulate a gaze-based social interaction. This enabled us to create the impression of a live interaction with another person while being able to manipulate the gaze contingency and congruency shown by the simulated interaction partner in a continuous manner. Behavioral data demonstrated that participants believed they were interacting with a real person and that systematic changes in the responsiveness of the simulated partner modulated the experience of social agency. More specifically, gaze contingency (temporal relatedness) and gaze congruency (gaze direction relative to the participant's gaze) influenced the explicit sense of being responsible for the behavior of the other. In general, our study introduces a new naturalistic task to simulate gaze-based social interactions and demonstrates that it is suitable for studying the explicit experience of social agency.
Affiliation(s)
- Marie-Luise Brandi, Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Daniela Kaifel, Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Juha M Lahnakoski, Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
- Leonhard Schilbach, Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany
26
Di Dio C, Manzi F, Peretti G, Cangelosi A, Harris PL, Massaro D, Marchetti A. Shall I Trust You? From Child-Robot Interaction to Trusting Relationships. Front Psychol 2020; 11:469. [PMID: 32317998 PMCID: PMC7147504 DOI: 10.3389/fpsyg.2020.00469]
Abstract
Studying trust in the context of human-robot interaction is of great importance given the increasing relevance and presence of robotic agents in the social sphere, including educational and clinical settings. We investigated the acquisition, loss, and restoration of trust when preschool and school-age children played with either a human or a humanoid robot in vivo. The relationship between trust and the representation of the quality of attachment relationships, Theory of Mind, and executive function skills was also investigated. Additionally, to outline children's beliefs about the mental competencies of the robot, we further evaluated the attribution of mental states to the interactive agent. In general, no substantial differences were found in children's trust in the play partner as a function of agency (human or robot). Nevertheless, 3-year-olds showed a trend toward trusting the human more than the robot, as opposed to 7-year-olds, who displayed the reverse pattern. These findings align with results showing that, for 3- and 7-year-olds, the cognitive ability to switch was significantly associated with trust restoration in the human and the robot, respectively. Additionally, supporting previous findings, we found a dichotomy between attributions of mental states to the human and robot and children's behavior: while attributing significantly lower mental states to the robot than to the human, in the Trusting Game children behaved in a similar way when relating to the human and the robot. Altogether, the results of this study highlight that similar psychological mechanisms are at play when children establish a novel trustful relationship with a human or robot partner. Furthermore, the findings shed light on the interplay, during development, between children's quality of attachment relationships and the development of a Theory of Mind, which act differently on trust dynamics as a function of the children's age as well as the interactive partner's nature (human vs. robot).
Affiliation(s)
- Cinzia Di Dio, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Federico Manzi, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Giulia Peretti, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Angelo Cangelosi, School of Computer Science, The University of Manchester, Manchester, United Kingdom
- Paul L. Harris, Graduate School of Education, Harvard University, Cambridge, MA, United States
- Davide Massaro, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Antonella Marchetti, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
27
Zhang Y, Song W, Tan Z, Zhu H, Wang Y, Lam CM, Weng Y, Hoi SP, Lu H, Man Chan BS, Chen J, Yi L. Could social robots facilitate children with autism spectrum disorders in learning distrust and deception? Computers in Human Behavior 2019. [DOI: 10.1016/j.chb.2019.04.008]
28
Seeing minds in others: Mind perception modulates low-level social-cognitive performance and relates to ventromedial prefrontal structures. Cogn Affect Behav Neurosci 2019; 18:837-856. [PMID: 29992485 DOI: 10.3758/s13415-018-0608-2]
Abstract
In social interactions, we rely on nonverbal cues like gaze direction to understand the behavior of others. How we react to these cues is affected by whether they are believed to originate from an entity with a mind, capable of having internal states (i.e., mind perception). While prior work has established a set of neural regions linked to social-cognitive processes like mind perception, the degree to which activation within this network relates to performance in subsequent social-cognitive tasks remains unclear. In the current study, participants performed a mind perception task (i.e., judging the likelihood that faces, varying in physical human-likeness, have internal states) while event-related fMRI was collected. Afterwards, participants performed a social attention task outside the scanner, during which they were cued by the gaze of the same faces that they previously judged within the mind perception task. Parametric analyses of the fMRI data revealed that activity within ventromedial prefrontal cortex (vmPFC) was related to both mind ratings inside the scanner and gaze-cueing performance outside the scanner. In addition, other social brain regions were related to gaze-cueing performance, including frontal areas like the left insula, dorsolateral prefrontal cortex, and inferior frontal gyrus, as well as temporal areas like the left temporo-parietal junction and bilateral temporal gyri. The findings suggest that functions subserved by the vmPFC are relevant to both mind perception and social attention, implicating a role of vmPFC in the top-down modulation of low-level social-cognitive processes.
29
The mind minds minds: The effect of intentional stance on the neural encoding of joint attention. Cogn Affect Behav Neurosci 2019; 19:1479-1491. [DOI: 10.3758/s13415-019-00734-y]
30
Jaeger CB, Hymel AM, Levin DT, Biswas G, Paul N, Kinnebrew J. The interrelationship between concepts about agency and students' use of teachable-agent learning technology. Cognitive Research: Principles and Implications 2019; 4:14. [PMID: 31001708 PMCID: PMC6473007 DOI: 10.1186/s41235-019-0163-6]
Abstract
To successfully interact with software agents, people must call upon basic concepts about goals and intentionality and strategically deploy these concepts in a range of circumstances where specific entailments may or may not apply. We hypothesize that people who can effectively deploy agency concepts in new situations will be more effective in interactions with software agents. Further, we posit that interacting with a software agent can itself refine a person’s deployment of agency concepts. We investigated this reciprocal relationship in one particularly important context: the classroom. In three experiments we examined connections between middle school students’ concepts about agency and their success learning from a teachable-agent-based computer system called “Betty’s Brain”. We found that the students who made more intentional behavioral predictions about humans learned more effectively from the system. We also found that students who used the Betty’s Brain system distinguished human behavior from machine behavior more strongly than students who did not. We conclude that the ability to effectively deploy agency concepts both supports, and is refined by, interactions with software agents.
Affiliation(s)
- Christopher Brett Jaeger, Department of Psychology and Human Development, Vanderbilt University, 230 Appleton Place, Nashville, TN 37203-5701, USA
- Alicia M Hymel, Department of Psychology and Human Development, Vanderbilt University, 230 Appleton Place, Nashville, TN 37203-5701, USA
- Daniel T Levin, Department of Psychology and Human Development, Vanderbilt University, 230 Appleton Place, Nashville, TN 37203-5701, USA
- Gautam Biswas, Department of Electrical Engineering and Computer Science, Vanderbilt University, Box 1824, Station B, Nashville, TN 37325, USA
- Natalie Paul, Department of Psychology and Human Development, Vanderbilt University, 230 Appleton Place, Nashville, TN 37203-5701, USA
31
It Does Not Matter Who You Are: Fairness in Pre-schoolers Interacting with Human and Robotic Partners. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00528-9]
32
Wang X, Krumhuber EG. Mind Perception of Robots Varies With Their Economic Versus Social Function. Front Psychol 2018; 9:1230. [PMID: 30072938 PMCID: PMC6058296 DOI: 10.3389/fpsyg.2018.01230]
Abstract
While robots were traditionally built to achieve economic efficiency and financial profits, their roles are likely to change in the future with the aim of providing social support and companionship. In this research, we examined whether a robot's proposed function (social vs. economic) impacts judgments of mind and moral treatment. Studies 1a and 1b demonstrated that robots with a social function were perceived to possess greater capacity for emotional experience, but not cognition, compared to those with an economic function and those whose function was not mentioned explicitly. Study 2 replicated this finding and further showed that low economic value reduced ascriptions of cognitive capacity, whereas high social value resulted in increased emotion perception. In Study 3, robots with high social value were more likely to be afforded protection from harm, an effect related to levels of ascribed emotional experience. Together, the findings demonstrate a dissociation between function type (social vs. economic) and ascribed mind (emotion vs. cognition). In addition, the two types of functions exert asymmetric influences on the moral treatment of robots. Theoretical and practical implications for the fields of social psychology and human-computer interaction are discussed.
Affiliation(s)
- Xijing Wang, Department of Experimental Psychology, University College London, London, United Kingdom
- Eva G Krumhuber, Department of Experimental Psychology, University College London, London, United Kingdom
33

34
Hortensius R, Cross ES. From automata to animate beings: the scope and limits of attributing socialness to artificial agents. Ann N Y Acad Sci 2018; 1426:93-110. [PMID: 29749634 DOI: 10.1111/nyas.13727]
Abstract
Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporal parietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.
Affiliation(s)
- Ruud Hortensius, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom; Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, United Kingdom
- Emily S Cross, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom; Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Scotland, United Kingdom
35
Marchetti A, Manzi F, Itakura S, Massaro D. Theory of Mind and Humanoid Robots From a Lifespan Perspective. Zeitschrift für Psychologie / Journal of Psychology 2018. [DOI: 10.1027/2151-2604/a000326]
Abstract
This review focuses on some relevant issues concerning the relationship between theory of mind (ToM) and humanoid robots. Humanoid robots are employed in different everyday-life contexts, so it seems relevant to question whether the relationships between human beings and humanoids can be characterized by a mode of interaction typical of the relationships between human beings, that is, the attribution of mental states. Because ToM development continuously undergoes changes from early childhood to late adulthood, we adopted a lifespan perspective. We analyzed contributions from the literature by organizing them around the partition between “mental states and actions” and “human-like features.” Finally, we considered how studying human–robot interaction, within a ToM context, can contribute to our understanding of the intersubjective nature of this interaction.
Affiliation(s)
- Antonella Marchetti, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Federico Manzi, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Shoji Itakura, Department of Psychology, Graduate School of Letters, Kyoto University, Japan
- Davide Massaro, Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
36
Wiese E, Metta G, Wykowska A. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social. Front Psychol 2017; 8:1663. [PMID: 29046651 PMCID: PMC5632653 DOI: 10.3389/fpsyg.2017.01663]
Abstract
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Affiliation(s)
- Eva Wiese
- Department of Psychology, George Mason University, Fairfax, VA, United States
37
Abubshait A, Wiese E. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction. Front Psychol 2017; 8:1393. [PMID: 28878703 PMCID: PMC5572356 DOI: 10.3389/fpsyg.2017.01393] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Received: 05/08/2017] [Accepted: 07/31/2017] [Indexed: 11/15/2022]
Abstract
Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents; mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception in robot agents, and positively affect attitudes and performance in human–robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human–robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human–robot interaction. The results show that both appearance and behavior affect human–robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human–robot interaction are discussed.
Affiliation(s)
- Eva Wiese
- Department of Psychology, George Mason University, Fairfax, VA, United States
38
de Visser EJ, Monfort SS, Goodyear K, Lu L, O'Hara M, Lee MR, Parasuraman R, Krueger F. A Little Anthropomorphism Goes a Long Way. Hum Factors 2017; 59:116-133. [PMID: 28146673 PMCID: PMC5477060 DOI: 10.1177/0018720816687205] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Indexed: 06/01/2023]
Abstract
OBJECTIVE We investigated the effects of exogenous oxytocin on trust, compliance, and team decision making with agents varying in anthropomorphism (computer, avatar, human) and reliability (100%, 50%).
BACKGROUND Authors of recent work have explored psychological similarities in how people trust humanlike automation compared with how they trust other humans. Exogenous administration of oxytocin, a neuropeptide associated with trust among humans, offers a unique opportunity to probe the anthropomorphism continuum of automation to infer when agents are trusted like another human or merely a machine.
METHOD Eighty-four healthy male participants collaborated with automated agents varying in anthropomorphism that provided recommendations in a pattern recognition task.
RESULTS Under placebo, participants exhibited less trust and compliance with automated aids as the anthropomorphism of those aids increased. Under oxytocin, participants interacted with aids at the extremes of the anthropomorphism continuum similarly to placebo but increased their trust, compliance, and performance with the avatar, an agent at the midpoint of the anthropomorphism continuum.
CONCLUSION This study provides the first evidence that administration of exogenous oxytocin affected trust, compliance, and team decision making with automated agents. These effects provide support for the premise that oxytocin increases affinity for social stimuli in automated aids.
APPLICATION Designing automation to mimic basic human characteristics is sufficient to elicit behavioral trust outcomes that are driven by neurological processes typically observed in human-human interactions. Designers of automated systems should consider the task, the individual, and the level of anthropomorphism to achieve the desired outcome.
Affiliation(s)
- Samuel S Monfort
- George Mason University, Fairfax, Virginia
- Kimberly Goodyear
- Brown University, Providence, Rhode Island
- George Mason University, Fairfax, Virginia
- Li Lu
- George Mason University, Fairfax, Virginia
- Martin O'Hara
- Virginia Hospital Center, Fairfax Hospital, Arlington, Virginia
- George Mason University, Fairfax, Virginia
- Mary R Lee
- National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland
- George Mason University, Fairfax, Virginia
39
Martini MC, Gonzalez CA, Wiese E. Correction: Seeing Minds in Others - Can Agents with Robotic Appearance Have Human-Like Preferences? PLoS One 2016; 11:e0149766. [PMID: 26872149 PMCID: PMC4752504 DOI: 10.1371/journal.pone.0149766] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 11/18/2022]