1. Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. PMID: 38605842; PMCID: PMC11008604; DOI: 10.3389/fpsyg.2024.1322781.
Abstract
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because interacting with such AI activates schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is worth considering as well, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich: Department of Psychology, Princeton University, Princeton, NJ, United States; Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano: Department of Psychology, Princeton University, Princeton, NJ, United States; Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
2. Ding Y, Guo R, Bilal M, Duffy VG. Exploring the influence of anthropomorphic appearance on usage intention on online medical service robots (OMSRs): A neurophysiological study. Heliyon 2024; 10:e26582. PMID: 38455577; PMCID: PMC10918018; DOI: 10.1016/j.heliyon.2024.e26582.
Abstract
Online medical service robots (OMSRs) are becoming increasingly important in the medical industry, and their design has become a highly focused issue. This study investigated the neuroeconomics underlying the formation of usage intention, specifically evaluating the impact of anthropomorphic appearance and age on users' intentions to use OMSRs. Event-related potentials were used to analyze electroencephalography signals recorded from participants. This study found that OMSRs with a low anthropomorphic appearance induced larger P200 and P300 amplitudes, indicating increased attentional resources, compared to OMSRs with a moderate or high anthropomorphic appearance. OMSRs with moderate anthropomorphic appearances captured more attention and elicited larger P200 and P300 amplitudes than those with high anthropomorphic appearances. Regarding age characteristics, OMSRs with senior features attracted more attention and induced larger P200 and P300 amplitudes. In terms of usage intention, users demonstrated a stronger intention to use OMSRs with low anthropomorphism than the other variants, and a stronger intention to use OMSRs with a young appearance than those with a senior one. These findings provide valuable insights for robot designers and practitioners to improve the appearance of OMSRs.
Affiliation(s)
- Yi Ding: School of Economics and Management, Anhui Polytechnic University, Wuhu, PR China
- Ran Guo: School of Economics and Management, Anhui Polytechnic University, Wuhu, PR China
- Muhammad Bilal: School of Economics and Management, Anhui Polytechnic University, Wuhu, PR China
- Vincent G. Duffy: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
3. Stein JP, Messingschlager T, Gnambs T, Hutmacher F, Appel M. Attitudes towards AI: measurement and associations with personality. Sci Rep 2024; 14:2909. PMID: 38316898; PMCID: PMC10844202; DOI: 10.1038/s41598-024-53335-2.
Abstract
Artificial intelligence (AI) has become an integral part of many contemporary technologies, such as social media platforms, smart devices, and global logistics systems. At the same time, research on the public acceptance of AI shows that many people feel quite apprehensive about the potential of such technologies, an observation that has been connected to both demographic and sociocultural user variables (e.g., age, previous media exposure). Yet, due to divergent and often ad-hoc measurements of AI-related attitudes, the current body of evidence remains inconclusive. Likewise, it is still unclear if attitudes towards AI are also affected by users' personality traits. In response to these research gaps, we offer a two-fold contribution. First, we present a novel, psychologically informed questionnaire (ATTARI-12) that captures attitudes towards AI as a single construct, independent of specific contexts or applications. Having observed good reliability and validity for our new measure across two studies (N1 = 490; N2 = 150), we examine several personality traits (the Big Five, the Dark Triad, and conspiracy mentality) as potential predictors of AI-related attitudes in a third study (N3 = 298). We find that agreeableness and younger age predict a more positive view towards artificially intelligent technology, whereas the susceptibility to conspiracy beliefs connects to a more negative attitude. Our findings are discussed considering potential limitations and future directions for research and practice.
Affiliation(s)
- Jan-Philipp Stein: Department of Media Psychology, Institute for Media Research, Chemnitz University of Technology, Thüringer Weg 11, 09126 Chemnitz, Germany
- Tanja Messingschlager: Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
- Timo Gnambs: Leibniz Institute for Educational Trajectories, Bamberg, Germany
- Fabian Hutmacher: Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
- Markus Appel: Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
4. Johnson EA, Dudding KM, Carrington JM. When to err is inhuman: An examination of the influence of artificial intelligence-driven nursing care on patient safety. Nurs Inq 2024; 31:e12583. PMID: 37459179; DOI: 10.1111/nin.12583.
Abstract
Artificial intelligence, as a nonhuman entity, is increasingly used to inform, direct, or supplant nursing care and clinical decision-making. The boundaries between human- and nonhuman-driven nursing care are blurring with the advent of sensors, wearables, camera devices, and humanoid robots at such an accelerated pace that the critical evaluation of their influence on patient safety has not been fully assessed. Since the pivotal release of To Err is Human, patient safety has been challenged by the dynamic healthcare environment like never before, with nursing at a critical juncture to steer the course of artificial intelligence integration in clinical decision-making. This paper presents an overview of artificial intelligence and its application in healthcare and highlights the implications that affect nursing as a profession, including perspectives on nursing education and training recommendations. The legal and policy challenges that emerge when artificial intelligence influences the risk of clinical errors and safety issues are also discussed.
Affiliation(s)
- Elizabeth A Johnson: Mark & Robyn Jones College of Nursing, Montana State University, Bozeman, Montana, USA
- Katherine M Dudding: Department of Family, Community, and Health Systems, UAB School of Nursing, The University of Alabama at Birmingham, Birmingham, Alabama, USA
- Jane M Carrington: Department of Family, Community and Health System Science, University of Florida College of Nursing, Gainesville, Florida, USA
5. Jacobs OL, Pazhoohi F, Kingstone A. Self-discrepancies in mind perception for actual, ideal, and ought selves and partners. PLoS One 2023; 18:e0295515. PMID: 38091324; PMCID: PMC10718412; DOI: 10.1371/journal.pone.0295515.
Abstract
Defining and measuring self-discrepancies in mind perception between how an individual sees their actual self in comparison to their ideal or ought self has a long but challenging history in psychology. Here we present a new approach for measuring and operationalizing discrepancies of mind by employing the mind perception framework that has been applied successfully to a variety of other psychological constructs. Across two studies (N = 265, N = 205), participants were recruited online to fill in a modified version of the mind perception survey with questions pertaining to three domains (actual, ideal, ought) and two agents (self versus partner). The results revealed that participants idealized and thought they ought to have greater agency (the ability to do) and diminished experience (the ability to feel) for both themselves and their partner. Sex differences were also examined across both studies, and while minor differences emerged, the effects were not robust across the collective evidence from both studies. The overall findings suggest that the mind perception approach can be used to distill a large number of qualities of mind into meaningful facets for interpretation in relation to self-discrepancy theory. This method can breathe new life into the field with future investigations directed at understanding self-discrepancies in relation to prosocial behaviour and psychological well-being.
Affiliation(s)
- Oliver L. Jacobs: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Farid Pazhoohi: School of Psychology, University of Plymouth, Plymouth, England, United Kingdom
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
6. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023; 6:1241290. PMID: 37854078; PMCID: PMC10579608; DOI: 10.3389/frai.2023.1241290.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara: Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
7. Wagner W, Viidalepp A, Idoiaga-Mondragon N, Talves K, Lillemäe E, Pekarev J, Otsus M. Lay representations of artificial intelligence and autonomous military machines. Public Underst Sci 2023; 32:926-943. PMID: 37194940; DOI: 10.1177/09636625231167071.
Abstract
This study is about how lay persons perceive and represent artificial intelligence in general as well as its use in weaponised autonomous ground vehicles in the military context. We analysed the discourse of six focus groups in Estonia, using an automatic text analysis tool and complemented the results by a qualitative thematic content analysis. The findings show that representations of artificial intelligence-driven machines are anchored in the image of man. A cluster analysis revealed five dominant themes: artificial intelligence as programmed machines, artificial intelligence and the problem of control, artificial intelligence and its relation to human life, artificial intelligence used in wars and ethical problems in developing autonomous weaponised machines. The findings are discussed with regard to people's tendency to anthropomorphise robots despite their lack of emotions, which can be seen as a last resort when confronting an autonomous machine where the usual interpersonal understanding of intentions does not apply.
8. Esterwood C, Robert LP. The theory of mind and human-robot trust repair. Sci Rep 2023; 13:9877. PMID: 37337033; DOI: 10.1038/s41598-023-37032-0.
Abstract
Nothing is perfect, and robots can make as many mistakes as any human, which can lead to a decrease in trust in them. However, it is possible for robots to repair a human's trust in them after they have made mistakes through various trust repair strategies such as apologies, denials, and promises. Presently, the evidence on the efficacy of these trust repairs in the human-robot interaction literature has been mixed. One reason for this might be that humans have different perceptions of a robot's mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion. Likewise, other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human-robot interaction. To investigate this, we conducted a study involving 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform where the robot and participant worked in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and employed either a promise, denial, or apology after each mistake. Participants then rated their trust in the robot before and after each mistake. Results of this study indicated that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.
Affiliation(s)
- Connor Esterwood: School of Information, University of Michigan, Ann Arbor, MI 48109, USA
- Lionel P Robert: School of Information, University of Michigan, Ann Arbor, MI 48109, USA; Robotics Department, University of Michigan, Ann Arbor, MI 48109, USA
9. Hoppe JA, Tuisku O, Johansson-Pajala RM, Pekkarinen S, Hennala L, Gustafsson C, Melkas H, Thommes K. When do individuals choose care robots over a human caregiver? Insights from a laboratory experiment on choices under uncertainty. Comput Hum Behav Rep 2023. DOI: 10.1016/j.chbr.2022.100258.
10. Benjamin R, Heine SJ. From Freud to Android: Constructing a Scale of Uncanny Feelings. J Pers Assess 2023; 105:121-133. PMID: 35353019; DOI: 10.1080/00223891.2022.2048842.
Abstract
The uncanny valley is a topic for engineers, animators, and psychologists, yet uncanny emotions lack a clear definition. Across three studies, we developed an 8-item measure of unnerved feelings, finding that it was discriminable from other affective experiences. In Study 1, we conducted an exploratory factor analysis that yielded two factors: an unnerved factor, which connects to emotional reactions to the uncanny, and a disoriented factor, which connects to mental state changes that follow uncanny experiences more distally. Focusing on the unnerved measure, Study 2 tested the scale's convergent and discriminant validity, concluding that participants who watched an uncanny video were more unnerved than those who watched a disgusting, fearful, or neutral video. In Study 3, we determined that our scale detects unnerved feelings created during early 2020 by the coronavirus pandemic, a distinct source of uncanniness. These studies contribute to the psychological and interdisciplinary understanding of this strange, eerie phenomenon of being confronted with what looms just beyond our understanding.
Affiliation(s)
- Rachele Benjamin: Department of Psychology, University of British Columbia, Vancouver, Canada
- Steven J Heine: Department of Psychology, University of British Columbia, Vancouver, Canada
11. Zhou Q, Li B, Han L, Jou M. Talking to a bot or a wall? How chatbots vs. human agents affect anticipated communication quality. Comput Hum Behav 2023. DOI: 10.1016/j.chb.2023.107674.
12. Pauketat JV, Anthis JR. Predicting the moral consideration of artificial intelligences. Comput Hum Behav 2022. DOI: 10.1016/j.chb.2022.107372.
13. Improving evaluations of advanced robots by depicting them in harmful situations. Comput Hum Behav 2022. DOI: 10.1016/j.chb.2022.107565.
14. Diel A, Lewis M. The deviation-from-familiarity effect: Expertise increases uncanniness of deviating exemplars. PLoS One 2022; 17:e0273861. PMID: 36048801; PMCID: PMC9436138; DOI: 10.1371/journal.pone.0273861.
Abstract
Humanlike entities deviating from the norm of human appearance are perceived as strange or uncanny. Explanations for the eeriness of deviating humanlike entities include ideas specific to human or animal stimuli, such as mate selection, avoidance of threat or disease, or dehumanization; however, deviation from highly familiar categories may provide a better explanation. Here we tested whether experts and novices in a novel (greeble) category show different patterns of abnormality, attractiveness, and uncanniness responses to distorted and averaged greebles. Greeble-trained participants assessed the abnormality, attractiveness, and uncanniness of normal, averaged, and distorted greebles, and their responses were compared to those of participants who had not previously seen greebles. The data show that distorted greebles were more uncanny than normal greebles only in the training condition, and distorted greebles were more uncanny in the training compared to the control condition. In addition, averaged greebles were not more attractive than normal greebles regardless of condition. The results suggest uncanniness is elicited by deviations from stimulus categories of expertise rather than being a purely biological human- or animal-specific response.
Affiliation(s)
- Alexander Diel: School of Psychology, Cardiff University, Cardiff, United Kingdom
- Michael Lewis: School of Psychology, Cardiff University, Cardiff, United Kingdom
15. Yorgancigil E, Yildirim F, Urgen BA, Erdogan SB. An Exploratory Analysis of the Neural Correlates of Human-Robot Interactions With Functional Near Infrared Spectroscopy. Front Hum Neurosci 2022; 16:883905. PMID: 35923750; PMCID: PMC9339604; DOI: 10.3389/fnhum.2022.883905.
Abstract
Functional near infrared spectroscopy (fNIRS) has been gaining increasing interest as a practical mobile functional brain imaging technology for understanding the neural correlates of social cognition and emotional processing in the human prefrontal cortex (PFC). Considering the cognitive complexity of human-robot interactions, the aim of this study was to explore the neural correlates of emotional processing of congruent and incongruent pairs of human and robot audio-visual stimuli in the human PFC with fNIRS methodology. Hemodynamic responses from the PFC region of 29 subjects were recorded with fNIRS during an experimental paradigm which consisted of auditory and visual presentation of human and robot stimuli. Distinct neural responses to human and robot stimuli were detected at the dorsolateral prefrontal cortex (DLPFC) and orbitofrontal cortex (OFC) regions. Presentation of robot voice elicited significantly less hemodynamic response than presentation of human voice in a left OFC channel. Meanwhile, processing of human faces elicited significantly higher hemodynamic activity when compared to processing of robot faces in two left DLPFC channels and a left OFC channel. Significant correlation between the hemodynamic and behavioral responses for the face-voice mismatch effect was found in the left OFC. Our results highlight the potential of fNIRS for unraveling the neural processing of human and robot audio-visual stimuli, which might enable optimization of social robot designs and contribute to elucidation of the neural processing of human and robot stimuli in the PFC in naturalistic conditions.
Affiliation(s)
- Emre Yorgancigil: Department of Medical Engineering, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Funda Yildirim: Cognitive Science Master's Program, Yeditepe University, Istanbul, Turkey; Department of Computer Engineering, Yeditepe University, Istanbul, Turkey
- Burcu A. Urgen: Department of Psychology, Bilkent University, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center, National Magnetic Resonance Research Center (UMRAM), Ankara, Turkey
- Sinem Burcu Erdogan: Department of Medical Engineering, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
16. Banks J, Koban K. A Kind Apart: The Limited Application of Human Race and Sex Stereotypes to a Humanoid Social Robot. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00900-2.
17. How Human-like Behavior of Service Robot Affects Social Distance: A Mediation Model and Cross-Cultural Comparison. Behav Sci (Basel) 2022; 12:205. PMID: 35877275; PMCID: PMC9311498; DOI: 10.3390/bs12070205.
Abstract
Previous studies on the human likeness of service robots have focused mainly on their human-like appearance and used psychological constructs to measure the outcomes of human likeness. Unlike previous studies, this study focused on the human-like behavior of the service robot and used a sociological construct, social distance, to measure the outcome of human likeness. We constructed a conceptual model, with perceived competence and warmth as mediators, based on social-identity theory. The hypotheses were tested through online experiments with 219 participants from China and 180 participants from the US. Similar results emerged for Chinese and American participants in that the high (vs. low) human-like behavior of the service robot caused the participants to have stronger perceptions of competence and warmth, both of which contributed to a smaller social distance between humans and service robots. Perceptions of competence and warmth completely mediated the positive effect of the human-like behavior of the service robot on social distance. Furthermore, Chinese participants showed higher anthropomorphism (perceived human-like behavior) and a stronger perception of warmth and smaller social distance. The perception of competence did not differ across cultures. This study provides suggestions for the human-likeness design of service robots to promote natural interaction between humans and service robots and increase human acceptance of service robots.
18. Moradbakhti L, Schreibelmayr S, Mara M. Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs. Front Psychol 2022; 13:855091. PMID: 35774945; PMCID: PMC9239329; DOI: 10.3389/fpsyg.2022.855091.
Abstract
Artificial Intelligence (AI) is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs (BPNs), namely (i) autonomy, (ii) competence, and (iii) relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention to Use (ITU) an AI assistant for personal banking. In a 2×2 factorial online experiment, 282 participants (154 males, 126 females, two non-binary participants) watched a video of an AI finance coach with a female or male synthetic voice that exhibited either high or low agency (i.e., capacity for self-control). In combination, these factors resulted either in AI assistants conforming to traditional gender stereotypes (e.g., low-agency female) or in non-conforming conditions (e.g., high-agency female). Although the experimental manipulations had no significant influence on participants’ relatedness and competence satisfaction, a strong effect on autonomy satisfaction was found. As further analyses revealed, this effect was attributable only to male participants, who felt their autonomy need significantly more satisfied by the low-agency female assistant, consistent with stereotypical images of women, than by the high-agency female assistant. A significant indirect effects model showed that the greater autonomy satisfaction that men, unlike women, experienced from the low-agency female assistant led to higher ITU. The findings are discussed in terms of their practical relevance and the risk of reproducing traditional gender stereotypes through technology design.
19. Stein JP, Cimander P, Appel M. Power-Posing Robots: The Influence of a Humanoid Robot’s Posture and Size on its Perceived Dominance, Competence, Eeriness, and Threat. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00878-x.
Abstract
When interacting with sophisticated digital technologies, people often fall back on the same interaction scripts they apply to the communication with other humans, especially if the technology in question provides strong anthropomorphic cues (e.g., a human-like embodiment). Accordingly, research indicates that observers tend to interpret the body language of social robots in the same way as they would with another human being. Backed by initial evidence, we assumed that a humanoid robot would be considered more dominant and competent, but also more eerie and threatening, once it strikes a so-called power pose. Moreover, we pursued the research question of whether these effects might be accentuated by the robot's body size. To this end, the current study presented 204 participants with pictures of the robot NAO in different poses (expansive vs. constrictive), while also manipulating its height (child-sized vs. adult-sized). Our results show that NAO's posture indeed exerted strong effects on perceptions of dominance and competence. Conversely, participants' threat and eeriness ratings remained statistically independent of the robot's depicted body language. Further, we found that the machine's size did not affect any of the measured interpersonal perceptions in a notable way. The study findings are discussed considering limitations and future research directions.
20. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526112.
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
21. Feier T, Gogoll J, Uhl M. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Sci Eng Ethics 2022; 28:19. PMID: 35377086; PMCID: PMC8979930; DOI: 10.1007/s11948-022-00372-7.
Abstract
The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions.
Affiliation(s)
- Till Feier: TUM School of Governance, TU Munich, Richard-Wagner-Straße 1, 80333 Munich, Germany
- Jan Gogoll: Bavarian Institute for Digital Transformation, TU Munich, Gabelsbergerstr. 4, 80333 Munich, Germany
- Matthias Uhl: Faculty of Computer Science, Technische Hochschule Ingolstadt, Esplanade 10, 85049 Ingolstadt, Germany
22. Diel A, Weigelt S, MacDorman KF. A Meta-analysis of the Uncanny Valley's Independent and Dependent Variables. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3470742.
Abstract
The uncanny valley (UV) effect is a negative affective reaction to human-looking artificial entities. It hinders comfortable, trust-based interactions with android robots and virtual characters. Despite extensive research, a consensus has not formed on its theoretical basis or methodologies. We conducted a meta-analysis to assess operationalizations of human likeness (independent variable) and the UV effect (dependent variable). Of 468 studies, 72 met the inclusion criteria. These studies employed 10 different stimulus creation techniques, 39 affect measures, and 14 indirect measures. Based on 247 effect sizes, a three-level meta-analysis model revealed the UV effect had a large effect size, Hedges' g = 1.01 [0.80, 1.22]. A mixed-effects meta-regression model with creation technique as the moderator variable revealed face distortion produced the largest effect size, g = 1.46 [0.69, 2.24], followed by distinct entities, g = 1.20 [1.02, 1.38], realism render, g = 0.99 [0.62, 1.36], and morphing, g = 0.94 [0.64, 1.24]. Affective indices producing the largest effects were threatening, likable, aesthetics, familiarity, and eeriness, and indirect measures were dislike frequency, categorization reaction time, like frequency, avoidance, and viewing duration. This meta-analysis—the first on the UV effect—provides a methodological foundation and design principles for future research.
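Background note: Hedges' g, the effect size used throughout this meta-analysis, is the bias-corrected standardized mean difference. The definition below is standard statistical convention rather than a formula quoted from the paper:
g = J \cdot \frac{\bar{X}_1 - \bar{X}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}, \qquad J \approx 1 - \frac{3}{4(n_1 + n_2 - 2) - 1}
where s_p is the pooled standard deviation and J corrects the small-sample bias of Cohen's d. By the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the pooled estimate of g = 1.01 reported above is a large effect.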
Affiliation(s)
- Alexander Diel: School of Psychology, Cardiff University, Cardiff, United Kingdom
- Sarah Weigelt: Department of Vision, Visual Impairments & Blindness, Faculty of Rehabilitation Sciences, Technical University of Dortmund, Dortmund, Germany
- Karl F. MacDorman: School of Informatics and Computing, Indiana University, Indianapolis, IN, USA
23. Lv L, Huang M, Huang R. Anthropomorphize service robots: the role of human nature traits. Serv Ind J 2022. DOI: 10.1080/02642069.2022.2048821.
Affiliation(s)
- Linxiang Lv: Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Minxue Huang: Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Ruyao Huang: Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
24. “I Have to Praise You Like I Should?” The Effects of Implicit Self-Theories and Robot-Delivered Praise on Evaluations of a Social Robot. Int J Soc Robot 2022. DOI: 10.1007/s12369-021-00848-9.
25. Mara M, Appel M, Gnambs T. Human-Like Robots and the Uncanny Valley. Z Psychol 2022. DOI: 10.1027/2151-2604/a000486.
Abstract
In the field of human-robot interaction, the well-known uncanny valley hypothesis proposes a curvilinear relationship between a robot's degree of human likeness and the observers' responses to the robot. While low to medium human likeness should be associated with increased positive responses, a shift to negative responses is expected for highly anthropomorphic robots. As empirical findings on the uncanny valley hypothesis are inconclusive, we conducted a random-effects meta-analysis of 49 studies (total N = 3,556) that reported 131 evaluations of robots based on the Godspeed scales for anthropomorphism (i.e., human likeness) and likeability. Our results confirm more positive responses for more human-like robots at low to medium anthropomorphism, with moving robots rated as more human-like but not necessarily more likable than static ones. However, because highly anthropomorphic robots were sparsely utilized in previous studies, no conclusions regarding proposed adverse effects at higher levels of human likeness can be made at this stage.
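Background note: the random-effects model named here treats each study's true effect as drawn from a distribution of effects. The generic formulation below is standard meta-analytic notation, not a formula taken from the paper:
\hat{\theta}_i = \mu + u_i + \varepsilon_i, \qquad u_i \sim \mathcal{N}(0, \tau^2), \qquad \varepsilon_i \sim \mathcal{N}(0, v_i)
where \hat{\theta}_i is the observed effect of study i, \mu the mean true effect, \tau^2 the between-study variance, and v_i the study's known sampling variance. Allowing \tau^2 > 0 is what distinguishes this model from a fixed-effect analysis and suits a pool of studies that differ in robots, samples, and procedures, as the 49 studies here do.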
Affiliation(s)
- Martina Mara: LIT Robopsychology Lab, Johannes Kepler University Linz, Austria
- Markus Appel: Psychology of Communication and New Media, University of Würzburg, Germany
- Timo Gnambs: Leibniz Institute for Educational Trajectories (LIfBi), University of Bamberg, Germany
26. Liu SX, Shen Q, Hancock J. Can a social robot be too warm or too competent? Older Chinese adults' perceptions of social robots and vulnerabilities. Comput Hum Behav 2021. DOI: 10.1016/j.chb.2021.106942.
27. Russo PA, Duradoni M, Guazzini A. How self-perceived reputation affects fairness towards humans and artificial intelligence. Comput Hum Behav 2021. DOI: 10.1016/j.chb.2021.106920.
28. More than appearance: the uncanny valley effect changes with a robot's mental capacity. Curr Psychol 2021. DOI: 10.1007/s12144-021-02298-y.
29. Lin C, Šabanović S, Dombrowski L, Miller AD, Brady E, MacDorman KF. Parental Acceptance of Children's Storytelling Robots: A Projection of the Uncanny Valley of AI. Front Robot AI 2021; 8:579993. PMID: 34095237; PMCID: PMC8172185; DOI: 10.3389/frobt.2021.579993.
Abstract
Parent-child story time is an important ritual of contemporary parenting. Recently, robots with artificial intelligence (AI) have become common. Parental acceptance of children's storytelling robots, however, has received scant attention. To address this, we conducted a qualitative study with 18 parents using the design fiction research technique. Overall, parents held mixed, though generally positive, attitudes toward children's storytelling robots. In their estimation, these robots would outperform screen-based technologies for children's story time. However, the robots' potential to adapt and to express emotion caused some parents to feel ambivalent about the robots, which might hinder their adoption. We found three predictors of parental acceptance of these robots: context of use, perceived agency, and perceived intelligence. Parents' speculation revealed an uncanny valley of AI: a nonlinear relation between the human likeness of the artificial agent's mind and affinity for the agent. Finally, we consider the implications of children's storytelling robots, including how they could enhance equity in children's access to education, and propose directions for research on their design to benefit family well-being.
Affiliation(s)
- Chaolan Lin: Department of Cognitive Science, University of California, San Diego, CA, United States
- Selma Šabanović: The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
- Lynn Dombrowski: The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
- Andrew D Miller: The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
- Erin Brady: The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
- Karl F MacDorman: The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
30. Diel A, MacDorman KF. Creepy cats and strange high houses: Support for configural processing in testing predictions of nine uncanny valley theories. J Vis 2021; 21:1. PMID: 33792617; PMCID: PMC8024776; DOI: 10.1167/jov.21.4.1.
Abstract
In 1970, Masahiro Mori proposed the uncanny valley (UV), a region in a human-likeness continuum where an entity risks eliciting a cold, eerie, repellent feeling. Recent studies have shown that this feeling can be elicited by entities modeled not only on humans but also nonhuman animals. The perceptual and cognitive mechanisms underlying the UV effect are not well understood, although many theories have been proposed to explain them. To test the predictions of nine classes of theories, a within-subjects experiment was conducted with 136 participants. The theories' predictions were compared with ratings of 10 classes of stimuli on eeriness and coldness indices. One type of theory, configural processing, predicted eight out of nine significant effects. Atypicality, in its extended form, in which the uncanny valley effect is amplified by the stimulus appearing more human, also predicted eight. Threat avoidance predicted seven; atypicality, perceptual mismatch, and mismatch+ predicted six; category+, novelty avoidance, mate selection, and psychopathy avoidance predicted five; and category uncertainty predicted three. Empathy's main prediction was not supported. Given that the number of significant effects predicted depends partly on our choice of hypotheses, a detailed consideration of each result is advised. We do, however, note the methodological value of examining many competing theories in the same experiment.
Affiliation(s)
- Alexander Diel: School of Psychology, Cardiff University, Cardiff, United Kingdom; Indiana University School of Informatics and Computing, Indianapolis, IN, USA
- Karl F MacDorman: Indiana University School of Informatics and Computing, Indianapolis, IN, USA
31. Allan DD, Vonasch AJ, Bartneck C. The Doors of Social Robot Perception: The Influence of Implicit Self-theories. Int J Soc Robot 2021. DOI: 10.1007/s12369-021-00767-9.