1. Linnunsalo S, Küster D, Yrttiaho S, Peltola MJ, Hietanen JK. Psychophysiological responses to eye contact with a humanoid robot: Impact of perceived intentionality. Neuropsychologia 2023; 189:108668. [PMID: 37619935] [DOI: 10.1016/j.neuropsychologia.2023.108668]
Abstract
Eye contact with a social robot has been shown to elicit similar psychophysiological responses to eye contact with another human. However, it is becoming increasingly clear that the attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when viewing embodied faces that are capable of social interaction, whereas pictorial or pre-recorded stimuli have no such capability. It has been suggested that genuine eye contact, as indicated by the differential psychophysiological responses to direct and averted gaze, requires a feeling of being watched by another mind. Therefore, we measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses to seeing a humanoid robot's direct versus averted gaze, while manipulating the impression of the robot's intentionality. The results showed that the N170 and the facial zygomatic responses were greater to direct than to averted gaze of the robot, and independent of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.
Affiliation(s)
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland.
- Dennis Küster
- Cognitive Systems Lab, Department of Computer Science, University of Bremen, Bremen, Germany
- Santeri Yrttiaho
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland.
2. Longin L, Bahrami B, Deroy O. Intelligence brings responsibility - Even smart AI assistants are held responsible. iScience 2023; 26:107494. [PMID: 37609629] [PMCID: PMC10440553] [DOI: 10.1016/j.isci.2023.107494]
Abstract
People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible when they act or merely advise a human agent. Does this mean that as soon as AI is involved, responsibility follows? To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured the responsibility attributed to them alongside their human users. Our findings show that responsibility is shared when the warning system is powered by AI, but not when it is a purely mechanical system, even though people consider both systems mere tools. Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI receives more credit than blame, depending on what the human manages or fails to do.
Affiliation(s)
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
- Bahador Bahrami
- Crowd Cognition Group, Department of General Psychology and Education, LMU-Munich, Gabelsbergerstraße 62, 80333 Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
- Munich Centre for Neurosciences-Brain & Mind, Großhaderner Str. 2, 82152 Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, Senate House, Malet Street, London WC1E 7HU, UK
3. Ziemke T. Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies. Artificial Life 2023; 29:351-366. [PMID: 36943757] [DOI: 10.1162/artl_a_00404]
Abstract
Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on, and might therefore be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge of interpreting them as intentional agents in order to understand and anticipate their behavior, while keeping in mind that the intentionality of artificial bodies is fundamentally different from that of their natural counterparts. This requires, on one hand, a "suspension of disbelief" but, on the other hand, also a capacity for the "suspension of belief." This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela's notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.
Affiliation(s)
- Tom Ziemke
- Linköping University, Cognition & Interaction Lab, Human-Centered Systems Division, Department of Computer and Information Science.
4. The presence of automation enhances deontological considerations in moral judgments. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2022.107590]
5. Being watched by a humanoid robot and a human: Effects on affect-related psychophysiological responses. Biol Psychol 2022; 175:108451. [DOI: 10.1016/j.biopsycho.2022.108451]
6. Persiani M, Hellström T. Policy regularization for legible behavior. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07942-7]
Abstract
In this paper, we propose a method to augment a Reinforcement Learning agent with legibility. The method is inspired by the literature in Explainable Planning and regularizes the agent's policy after training, without requiring modification of its learning algorithm. This is achieved by evaluating how the agent's optimal policy may produce observations that would lead an observer model to infer a wrong policy. In our formulation, the decision boundary introduced by legibility affects the states in which the agent's policy returns an action that is non-legible because it also has high likelihood under other policies. In these cases, a trade-off is made between that action and a legible but sub-optimal action. We tested our method in a grid-world environment, highlighting how legibility impacts the agent's optimal policy, and gathered both quantitative and qualitative results. In addition, we discuss how the proposed regularization generalizes beyond methods that operate on goal-driven policies, because it is applicable to general policies, of which goal-driven policies are a special case.
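As a rough illustration of the idea described above (a minimal sketch under assumed data structures, not the authors' implementation): an action of the trained policy is penalized when it also has high likelihood under alternative policies an observer might infer, shifting the choice toward actions that disambiguate the agent's actual policy.

```python
import numpy as np

def legible_action(pi_agent, pi_others, state, lam=0.5):
    """Trade off optimality against legibility when selecting an action.

    pi_agent: array (n_states, n_actions), action probabilities of the
        trained policy.
    pi_others: list of same-shaped arrays, alternative policies that an
        observer could confuse the agent's policy with.
    lam: weight of the legibility penalty.
    """
    p = pi_agent[state]                                   # optimality term
    confusion = np.max([po[state] for po in pi_others], axis=0)
    score = p - lam * confusion                           # penalize ambiguous actions
    return int(np.argmax(score))

# Toy usage: the greedy action (index 0) is also the most likely action of a
# competing policy, so the legible choice shifts to the less ambiguous index 1.
pi_agent = np.array([[0.6, 0.3, 0.1]])
pi_other = np.array([[0.8, 0.05, 0.15]])
print(legible_action(pi_agent, [pi_other], state=0))  # -> 1
```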
7. McCrae RR. Seeking a Philosophical Basis for Trait Psychology. Psychol Rep 2022:332941221132992. [PMID: 36269570] [DOI: 10.1177/00332941221132992]
Abstract
I summarize an early effort to provide a conceptual basis for psychology. Natural science studies material objects, and its methods and assumptions may not be appropriate for the study of persons. Persons exist within the natural attitude and are characterized by such properties as temporality, responsibility, normality, and identity. Contemporary theories of mind focus on people's understanding of how minds make decisions and shape behavior, but say little about the nature of the entity that possesses a mind; ethnopsychologies are concerned with cultural variations in beliefs about accidental rather than essential aspects of human psychology. The lay philosophical view of the person sketched here is intended to be broader and deeper. It is particularly relevant to trait psychology, appears to have been implicit in much trait research, and is generally consistent with empirical findings on personality traits.
8. Pagliari M, Chambon V, Berberian B. What is new with Artificial Intelligence? Human–agent interactions through the lens of social agency. Front Psychol 2022; 13:954444. [PMID: 36248519] [PMCID: PMC9559368] [DOI: 10.3389/fpsyg.2022.954444]
Abstract
In this article, we suggest that the study of social interactions and the development of a “sense of agency” in joint action can help determine the content of relevant explanations to be implemented in artificial systems to make them “explainable.” The introduction of automated systems, and more broadly of Artificial Intelligence (AI), into many domains has profoundly changed the nature of human activity, as well as the subjective experience that agents have of their own actions and their consequences – an experience that is commonly referred to as sense of agency. We propose to examine the empirical evidence supporting this impact of automation on individuals’ sense of agency, and hence on measures as diverse as operator performance, system explicability and acceptability. Because of some of its key characteristics, AI occupies a special status in the artificial systems landscape. We suggest that this status prompts us to reconsider human–AI interactions in the light of human–human relations. We approach the study of joint actions in human social interactions to deduce what key features are necessary for the development of a reliable sense of agency in a social context and suggest that such framework can help define what constitutes a good explanation. Finally, we propose possible directions to improve human–AI interactions and, in particular, to restore the sense of agency of human operators, improve their confidence in the decisions made by artificial agents, and increase the acceptability of such agents.
Affiliation(s)
- Marine Pagliari
- Institut Jean Nicod, Département d’Études Cognitives, École Normale Supérieure, Centre National de la Recherche Scientifique, Paris Sciences et Lettres University, Paris, France
- Information Processing and Systems, Office National d’Etudes et Recherches Aérospatiales, Salon de Provence, France
- Valérian Chambon
- Institut Jean Nicod, Département d’Études Cognitives, École Normale Supérieure, Centre National de la Recherche Scientifique, Paris Sciences et Lettres University, Paris, France
- Bruno Berberian
- Information Processing and Systems, Office National d’Etudes et Recherches Aérospatiales, Salon de Provence, France
9. Exploring behaviours perceived as important for human–dog bonding and their translation to a robotic platform. PLoS One 2022; 17:e0274353. [PMID: 36170337] [PMCID: PMC9518860] [DOI: 10.1371/journal.pone.0274353]
Abstract
To facilitate long-term engagement with social robots, emerging evidence suggests that modelling robots on social animals with whom many people form enduring social bonds–specifically, pet dogs–may be useful. However, scientific understanding of the features of pet dogs that are important for establishing and maintaining social bonds remains limited to broad qualities that are liked, as opposed to specific behaviours. To better understand dog behaviours that are perceived as important for facilitating social bonds between owner and pet, we surveyed current dog owners (n = 153) with open-ended questions about their dogs’ behaviours. Thematic analysis identified 7 categories of behaviours perceived as important to human–dog bonding, including: 1) attunement, 2) communication, 3) consistency and predictability, 4) physical affection, 5) positivity and enthusiasm, 6) proximity, and 7) shared activities. We consider the feasibility of translating these behaviours into a social robotic platform, and signpost potential barriers moving forward. In addition to providing insight into important behaviours for human–dog bonding, this work provides a springboard for those hoping to implement dog behaviours into animal-like artificial agents designed for social roles.
10. Spatola N, Chaminade T. Precuneus brain response changes differently during human-robot and human-human dyadic social interaction. Sci Rep 2022; 12:14794. [PMID: 36042357] [PMCID: PMC9427745] [DOI: 10.1038/s41598-022-14207-9]
Abstract
Human–human interactions (HHI) and human–robot interactions (HRI) are compared to identify differences between cognitive processes reflecting bonding in social interactions with natural and artificial agents. We capitalize on a unique corpus of neuroimaging data (fMRI) recorded while participants conversed freely with another human or with a conversational robotic head, in order to study a crucial aspect of human social cognition, namely that social interactions are adaptive bidirectional processes that evolve over time. We used linear statistics to identify regions of the brain where activity changes differently when participants carry out twelve one-minute conversations, alternating between a human and a robotic interlocutor. Results show that activity in the posterior cingulate cortex, a key region associated with social cognition, increases over time in HHI but not in HRI. These results are interpreted as reflecting a process of strengthening social bonding during repeated exchanges when the interacting agent is a human, but not a robot.
Affiliation(s)
- Thierry Chaminade
- Institut de Neurosciences de La Timone, UMR 7289, Aix-Marseille Université-CNRS, Marseille, France.
11. Spatola N, Marchesi S, Wykowska A. Different models of anthropomorphism across cultures and ontological limits in current frameworks: The integrative framework of anthropomorphism. Front Robot AI 2022; 9:863319. [PMID: 36093211] [PMCID: PMC9452957] [DOI: 10.3389/frobt.2022.863319]
Abstract
Anthropomorphism describes the tendency to ascribe human characteristics to nonhuman agents. Due to the increased interest in social robotics, anthropomorphism has become a core concept of human-robot interaction (HRI) studies. However, the wide use of this concept has resulted in interchangeable definitions. In the present study, we propose an integrative framework of anthropomorphism (IFA) encompassing three levels: cultural, individual general tendencies, and direct attributions of human-like characteristics to robots. We also acknowledge the Western bias of the state-of-the-art view of anthropomorphism and develop a cross-cultural approach. In two studies, participants from various cultures completed tasks and questionnaires assessing their animism beliefs and their individual tendencies to endow robots with mental properties and spirit and to consider them as more or less human. We also evaluated their attributions of mental anthropomorphic characteristics to robots (i.e., cognition, emotion, intention). Our results demonstrate, in both experiments, that a three-level model (as hypothesized in the IFA) reliably explains the collected data. We found an overall influence of animism (cultural level) on the two lower levels, and an influence of the individual tendencies to mentalize, spiritualize, and humanize (individual level) on the attribution of cognition, emotion, and intention. In addition, in Experiment 2, the analyses show a more anthropocentric view of the mind among Western than East-Asian participants. As such, Western perception of robots depends more on humanization, while East-Asian perception depends more on mentalization. We further discuss these results in relation to the anthropomorphism literature and argue for the use of an integrative cross-cultural model in HRI research.
Affiliation(s)
- Nicolas Spatola
- Istituto Italiano di Tecnologia, Genova, Italy
- Artimon Perspectives, Paris, France
- Agnieszka Wykowska
- Istituto Italiano di Tecnologia, Genova, Italy
12. Roselli C, Ciardo F, De Tommaso D, Wykowska A. Human-likeness and attribution of intentionality predict vicarious sense of agency over humanoid robot actions. Sci Rep 2022; 12:13845. [PMID: 35974080] [PMCID: PMC9381554] [DOI: 10.1038/s41598-022-18151-6]
Abstract
Sense of Agency (SoA) is the feeling of being in control of one's actions and their outcomes. In a social context, people can experience a "vicarious" SoA over another human's actions; however, it is still controversial whether the same occurs in Human-Robot Interaction (HRI). The present study aimed at understanding whether humanoid robots may elicit vicarious SoA in humans, and whether the emergence of this phenomenon depends on the attribution of intentionality towards robots. We asked adult participants to perform an Intentional Binding (IB) task alone and with the humanoid iCub robot, reporting the time of occurrence of both self- and iCub-generated actions. Before the experiment, participants' degree of attribution of intentionality towards robots was assessed. Results showed that participants experienced vicarious SoA over iCub-generated actions. Moreover, intentionality attribution positively predicted the magnitude of vicarious SoA. In conclusion, our results highlight the importance of factors such as human-likeness and attribution of intentionality for the emergence of vicarious SoA towards robots.
Affiliation(s)
- Cecilia Roselli
- Social Cognition in Human Robot Interaction, Center for Human Technologies, Italian Institute of Technology, Via Enrico Melen 83, 16152, Genova, Italy
- Francesca Ciardo
- Social Cognition in Human Robot Interaction, Center for Human Technologies, Italian Institute of Technology, Via Enrico Melen 83, 16152, Genova, Italy
- Davide De Tommaso
- Social Cognition in Human Robot Interaction, Center for Human Technologies, Italian Institute of Technology, Via Enrico Melen 83, 16152, Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human Robot Interaction, Center for Human Technologies, Italian Institute of Technology, Via Enrico Melen 83, 16152, Genova, Italy.
13. Abubshait A, Parenti L, Perez-Osorio J, Wykowska A. Misleading Robot Signals in a Classification Task Induce Cognitive Load as Measured by Theta Synchronization Between Frontal and Temporo-parietal Brain Regions. Frontiers in Neuroergonomics 2022; 3:838136. [PMID: 38235447] [PMCID: PMC10790903] [DOI: 10.3389/fnrgo.2022.838136]
Abstract
As technological advances progress, we find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines, and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration, and decision aids during monitoring tasks (e.g., baggage screening at the airport). Efficient communication in these scenarios is crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing over the object to them. The robot proceeded to cue participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-frontal theta oscillations. More importantly, we found higher coherence values between mid-frontal electrode locations and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights about a new measure of cognitive load, which can also be used in predicting human interaction with autonomous machines.
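For readers who want to see the measure concretely, here is a minimal sketch (synthetic signals, assumed sampling rate and band limits; not the authors' analysis pipeline) of theta-band coherence between a mid-frontal and a posterior channel using SciPy:

```python
import numpy as np
from scipy.signal import coherence

fs = 500                                  # assumed sampling rate in Hz
rng = np.random.default_rng(0)

# stand-ins for 10 s of two EEG channels (e.g., a mid-frontal and a
# posterior occipital site); real data would come from preprocessed epochs
frontal = rng.standard_normal(10 * fs)
posterior = rng.standard_normal(10 * fs)

# magnitude-squared coherence spectrum between the two channels
f, cxy = coherence(frontal, posterior, fs=fs, nperseg=2 * fs)

# average coherence within the theta band (here taken as 4-8 Hz)
theta = (f >= 4) & (f <= 8)
print(f"theta-band coherence: {cxy[theta].mean():.3f}")
```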
Affiliation(s)
- Abdulaziz Abubshait
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Lorenzo Parenti
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Department of Psychology, University of Torino, Turin, Italy
- Jairo Perez-Osorio
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
14. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & Society 2022. [DOI: 10.1007/s00146-021-01211-2]
15. Mikalonytė ES, Kneer M. Can Artificial Intelligence Make Art? ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3530875]
Abstract
In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less willing to consider robots as artists than humans, which is partially explained by the fact that they are less disposed to attribute artistic intentions to robots.
16. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up for a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
17. Fukuchi Y, Osawa M, Yamakawa H, Takahashi T, Imai M. Conveying Intention by Motions With Awareness of Information Asymmetry. Front Robot AI 2022; 9:783863. [PMID: 35252364] [PMCID: PMC8890267] [DOI: 10.3389/frobt.2022.783863]
Abstract
Humans sometimes attempt to infer an artificial agent’s mental state based on mere observations of its behavior. From the agent’s perspective, it is important to choose actions with awareness of how its behavior will be considered by humans. Previous studies have proposed computational methods to generate such publicly self-aware motion to allow an agent to convey a certain intention by motions that can lead a human observer to infer what the agent is aiming to do. However, little consideration has been given to the effect of information asymmetry between the agent and a human, or to the gaps in their beliefs due to different observations from their respective perspectives. This paper claims that information asymmetry is a key factor for conveying intentions with motions. To validate the claim, we developed a novel method to generate intention-conveying motions while considering information asymmetry. Our method utilizes a Bayesian public self-awareness model that effectively simulates the inference of an agent’s mental states as attributed to the agent by an observer in a partially observable domain. We conducted two experiments to investigate the effects of information asymmetry when conveying intentions with motions by comparing the motions from our method with those generated without considering information asymmetry in a manner similar to previous work. The results demonstrate that by taking information asymmetry into account, an agent can effectively convey its intention to human observers.
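The observer-model idea can be made concrete with a generic Bayesian sketch (assumed priors and likelihoods for illustration; this is not the authors' public self-awareness model): the agent simulates how an observer would update a belief over its possible intentions from an observed motion, and prefers motions that concentrate that belief on its true intention.

```python
import numpy as np

def observer_posterior(prior, likelihoods, observed_motion):
    """Observer's Bayesian belief over intentions after seeing a motion.

    prior: array (n_intentions,), prior over the agent's intentions.
    likelihoods: array (n_intentions, n_motions); entry [i, m] is the
        observer-modeled probability that intention i produces motion m.
        Information asymmetry enters here: the observer's likelihoods are
        based only on what the observer can actually see.
    """
    posterior = prior * likelihoods[:, observed_motion]
    return posterior / posterior.sum()

# toy example: two intentions, three candidate motions
prior = np.array([0.5, 0.5])
likelihoods = np.array([[0.7, 0.2, 0.1],   # intention A
                        [0.1, 0.2, 0.7]])  # intention B
true_intention = 0  # the agent wants to convey intention A

# pick the motion that maximizes the observer's belief in the true intention
beliefs = [observer_posterior(prior, likelihoods, m)[true_intention]
           for m in range(likelihoods.shape[1])]
print(int(np.argmax(beliefs)))  # -> 0: motion 0 conveys intention A best
```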
Affiliation(s)
- Yosuke Fukuchi
- Faculty of Science and Technology, Keio University, Yokohama, Japan
- Hiroshi Yamakawa
- The Whole Brain Architecture Initiative, Tokyo, Japan
- School of Engineering, University of Tokyo, Tokyo, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Tatsuji Takahashi
- School of Engineering, University of Tokyo, Tokyo, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- School of Science and Engineering, Tokyo Denki University, Tokyo, Japan
- Michita Imai
- Faculty of Science and Technology, Keio University, Yokohama, Japan
18. Ciardo F, De Tommaso D, Wykowska A. Joint action with artificial agents: Human-likeness in behaviour and morphology affects sensorimotor signaling and social inclusion. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107237]
19. Spatola N, Huguet P. Cognitive Impact of Anthropomorphized Robot Gaze. ACM Transactions on Human-Robot Interaction 2021. [DOI: 10.1145/3459994]
Abstract
Attentional control does not function in a fixed manner and can be strongly impacted by the presence of other human beings or humanoid robots. In two studies, this phenomenon was investigated with an exclusive focus on robot gaze as a potential determinant of attentional control, along with the role of participants’ anthropomorphic inferences toward the robot. In study 1, we expected and found higher interference in trials including a direct robot gaze compared to an averted gaze on a task measuring attentional control (Eriksen flanker task). Participants’ anthropomorphic inferences about the social robot mediated this interference. In study 2, we found that averted gazes congruent with the correct answer (same task as study 1) facilitated performance. Again, this effect was mediated by anthropomorphic inferences. These two studies show the influence of anthropomorphized robot gaze on human cognitive processing, especially attentional control, and open new avenues of research in social robotics.
Affiliation(s)
- Nicolas Spatola
- Istituto Italiano di Tecnologia, Social Cognition in Human-Robot Interaction, 16152 Genova, Italy
- Pascal Huguet
- Université Clermont Auvergne et CNRS, LAPSCO, UMR 6024 63000 Clermont-Ferrand, France
20. Spatola N, Marchesi S, Wykowska A. The Intentional Stance Test-2: How to Measure the Tendency to Adopt Intentional Stance Towards Robots. Front Robot AI 2021; 8:666586. [PMID: 34692776] [PMCID: PMC8529049] [DOI: 10.3389/frobt.2021.666586]
Abstract
In human-robot interactions, people tend to attribute mental states such as intentions or desires to robots in order to make sense of their behaviour. This cognitive strategy is termed the "intentional stance". Adopting the intentional stance influences how one considers, engages with, and behaves towards robots. However, people differ in their likelihood of adopting the intentional stance towards robots, so it seems crucial to assess these interindividual differences. In two studies, we developed and validated the structure of the Intentional Stance Task (IST), a task aimed at evaluating to what extent people adopt the intentional stance towards robot actions. The IST probes participants' stance by requiring them to judge the plausibility of a mentalistic versus a mechanistic description of the behaviour of a robot depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure of the IST. We therefore propose the IST as a proxy for assessing the degree of adoption of the intentional stance towards robots.
Affiliation(s)
- Nicolas Spatola
- Social Cognition in Human-Robot Interaction Laboratory, Italian Institute of Technology, Genova, Italy
- Serena Marchesi
- Social Cognition in Human-Robot Interaction Laboratory, Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction Laboratory, Italian Institute of Technology, Genova, Italy
21. Kneer M. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Cogn Sci 2021; 45:e13032. [PMID: 34606119] [PMCID: PMC9285490] [DOI: 10.1111/cogs.13032]
Abstract
The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.
Affiliation(s)
- Markus Kneer
- Center for Ethics, Department of Philosophy, University of Zurich; Digital Society Initiative, University of Zurich
22. Roselli C, Ciardo F, Wykowska A. Intentions with actions: The role of intentionality attribution on the vicarious sense of agency in Human-Robot interaction. Q J Exp Psychol (Hove) 2021; 75:616-632. [PMID: 34472397] [DOI: 10.1177/17470218211042003]
Abstract
Sense of Agency (SoA) is the feeling of control over one's actions and their consequences. In social contexts, people experience a "vicarious" SoA over other humans' actions; however, the phenomenon disappears when the other agent is a computer. This study aimed to investigate the factors that determine when humans experience vicarious SoA in Human-Robot Interaction (HRI). To this end, in two experiments, we disentangled two potential contributing factors: (1) the possibility of representing the robot's actions and (2) the adoption of the Intentional Stance towards robots. Participants performed an Intentional Binding (IB) task, reporting the time of occurrence of self- or robot-generated actions or sensory outcomes. To assess the role of action representation, the robot either performed a physical keypress (Experiment 1) or "acted" by sending a command via Bluetooth (Experiment 2). Before the experiment, attribution of intentionality to the robot was assessed. Results showed that when participants judged the occurrence of the action, vicarious SoA was predicted by the degree of attributed intentionality, but only when the robot's action was physical. Conversely, digital actions elicited a reversed vicarious IB effect, suggesting that disembodied actions of robots are perceived as non-intentional. When participants judged the occurrence of the sensory outcome, vicarious SoA emerged only when the causing action was physical. Notably, intentionality attribution predicted vicarious SoA for sensory outcomes independently of the nature of the causing event, physical or digital. In conclusion, both intentionality attribution and action representation play a crucial role in vicarious SoA in HRI.
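For context on the dependent measure, a schematic sketch of how intentional binding is typically quantified (hypothetical numbers, not the study's data): binding appears as a shift of reported event times, with actions judged later (toward the outcome) and outcomes judged earlier (toward the action).

```python
import numpy as np

def binding_shift(reported_ms, actual_ms):
    """Mean judgment error in ms; positive = perceived later than actual."""
    return float(np.mean(np.asarray(reported_ms) - np.asarray(actual_ms)))

# hypothetical trials from one condition (times in ms)
action_actual = [1000, 1000, 1000]
action_reported = [1030, 1045, 1025]       # action drawn toward the outcome
outcome_actual = [1250, 1250, 1250]
outcome_reported = [1190, 1205, 1195]      # outcome drawn toward the action

print(binding_shift(action_reported, action_actual))    # ~ +33 ms
print(binding_shift(outcome_reported, outcome_actual))  # ~ -53 ms
```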
Affiliation(s)
- Cecilia Roselli
- Social Cognition in Human-Robot Interaction Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Dipartimento di Informatica, Bioingegneria, Robotica ed Ingegneria dei Sistemi (DIBRIS), Genoa, Italy
- Francesca Ciardo
- Social Cognition in Human-Robot Interaction Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction Unit, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
23. Spatola N, Wykowska A. The personality of anthropomorphism: How the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106841]