1. Arnestad MN, Meyers S, Gray K, Bigman YE. The existence of manual mode increases human blame for AI mistakes. Cognition 2024; 252:105931. PMID: 39208639. DOI: 10.1016/j.cognition.2024.105931.
Abstract
People are offloading many tasks to artificial intelligence (AI), including driving, investment decisions, and medical choices, but it is human nature to want to maintain ultimate control. So even when using autonomous machines, people want a "manual mode", an option that shifts control back to themselves. Unfortunately, the mere existence of manual mode leads to more human blame when AI makes mistakes. When observers know that a human agent theoretically had the option to take control, the human is assigned more responsibility, even when the agent lacked the time or ability to actually exert control, as with self-driving car crashes. Four experiments reveal that people prefer having a manual mode, even when the AI mode is more efficient and adding the manual mode is more expensive (Study 1), and that the existence of a manual mode increases human blame (Studies 2a-3c). We examine two mediators for this effect: increased perceptions of causation and counterfactual cognition (Study 4). The results suggest that the human thirst for illusory control comes with real costs. Implications for AI decision-making are discussed.
Affiliation(s)
- Mads N Arnestad
- Department of Leadership and Organization, BI Norwegian Business School, Norway
- Kurt Gray
- University of North Carolina at Chapel Hill, USA
2. Choi J, Chao MM. For Me or Against Me? Reactions to AI (vs. Human) Decisions That Are Favorable or Unfavorable to the Self and the Role of Fairness Perception. Personality and Social Psychology Bulletin 2024:1461672241288338. PMID: 39446885. DOI: 10.1177/01461672241288338.
Abstract
Public reactions to algorithmic decisions often diverge. While high-profile media coverage suggests that the use of AI in organizational decision-making is viewed as unfair and received negatively, recent survey results suggest that such use of AI is perceived as fair and received positively. Drawing on fairness heuristic theory, the current research reconciles this apparent contradiction by examining the roles of decision outcome and fairness perception in individuals' attitudinal (Studies 1-3, 5) and behavioral (Study 4) reactions to algorithmic (vs. human) decisions. Results from six experiments (N = 2,794) showed that when the decision was unfavorable, AI was perceived as fairer than a human decision-maker, leading to a less negative reaction. This heightened fairness perception toward AI is shaped by its perceived unemotionality. Furthermore, reminders about the potential biases of AI in decision-making attenuate the differential fairness perception between AI and humans. Theoretical and practical implications of the findings are discussed.
Affiliation(s)
- Jungmin Choi
- Judge Business School, University of Cambridge, Cambridge, UK
- Melody M Chao
- Department of Management, Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong SAR
3. Baines JI, Dalal RS, Ponce LP, Tsai HC. Advice from artificial intelligence: a review and practical implications. Front Psychol 2024; 15:1390182. PMID: 39439754; PMCID: PMC11493662. DOI: 10.3389/fpsyg.2024.1390182.
Abstract
Despite considerable behavioral and organizational research on advice from human advisors, and despite the increasing study of artificial intelligence (AI) in organizational research, workplace-related applications, and popular discourse, an interdisciplinary review of advice from AI (vs. human) advisors has yet to be undertaken. We argue that the increasing adoption of AI to augment human decision-making would benefit from a framework that can characterize such interactions. Thus, the current research invokes judgment and decision-making research on advice from human advisors and uses a conceptual "fit"-based model to: (1) summarize how the characteristics of the AI advisor, human decision-maker, and advice environment influence advice exchanges and outcomes (including informed speculation about the durability of such findings in light of rapid advances in AI technology), (2) delineate future research directions (along with specific predictions), and (3) provide practical implications involving the use of AI advice by human decision-makers in applied settings.
Affiliation(s)
- Julia I. Baines
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Reeshad S. Dalal
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Lida P. Ponce
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Ho-Chun Tsai
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, United States
4. Malle BF, Scheutz M, Cusimano C, Voiklis J, Komatsu T, Thapa S, Aladia S. People's judgments of humans and robots in a classic moral dilemma. Cognition 2024; 254:105958. PMID: 39362054. DOI: 10.1016/j.cognition.2024.105958.
Abstract
How do ordinary people evaluate robots that make morally significant decisions? Previous work has found evaluations that are sometimes equal to and sometimes different from those of humans, with differences in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: humans are blamed less than robots specifically for inaction decisions, that is, for refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work on people's moral judgment of robots and humans.
5. Chen J, Zhang Y, Wu Y. The impact of differential pricing subject on consumer behavior. BMC Psychol 2024; 12:431. PMID: 39123225; PMCID: PMC11311943. DOI: 10.1186/s40359-024-01928-x.
Abstract
The escalating use of artificial intelligence in marketing significantly impacts all aspects of consumer life. Grounded in attribution theory and S-O-R theory, this research employs scenario-based experimental methods to simulate two distinct purchasing contexts and investigate consumers' psychological and behavioral responses to AI-initiated pricing. Data were collected from Chinese customers, and a total of 841 valid questionnaires were analyzed with analysis of variance (ANOVA) and Bootstrap analysis in SPSS to test the mechanisms through which AI-initiated pricing influences consumer behavior, with mind perception and consumer perceived ethicality as mediating variables and perceived enterprise control as a moderating variable. The results show that: (1) Consumers exhibit higher repurchase and word-of-mouth recommendation behaviors and lower complaint and switching behaviors in response to AI-initiated pricing than to marketer-initiated pricing; (2) AI-initiated pricing leads to diminished mind perceptions and augmented ethical perceptions among consumers, with ethical perceptions serving as a complete mediator and mind perceptions playing a less significant mediating role; (3) Perceived enterprise control moderates the impact of AI-initiated pricing on consumer behavior: when consumers know that the enterprise can control the pricing agent, AI-initiated pricing leads to lower repurchase and word-of-mouth recommendation behaviors and higher complaint and switching behaviors than marketer-initiated pricing.
Affiliation(s)
- Jinsong Chen
- School of Business Administration, Guizhou University of Finance and Economics, Guiyang, Guizhou, 550025, The People's Republic of China
- Yuexin Zhang
- School of Business Administration, Guizhou University of Finance and Economics, Guiyang, Guizhou, 550025, The People's Republic of China
- School of Culture and Tourism, Chongqing City Management College, Chongqing, 401331, The People's Republic of China
- Yumin Wu
- School of Business Administration, Guizhou University of Finance and Economics, Guiyang, Guizhou, 550025, The People's Republic of China
6. He L, Basar E, Krahmer E, Wiers R, Antheunis M. Effectiveness and User Experience of a Smoking Cessation Chatbot: Mixed Methods Study Comparing Motivational Interviewing and Confrontational Counseling. J Med Internet Res 2024; 26:e53134. PMID: 39106097; PMCID: PMC11336496. DOI: 10.2196/53134.
Abstract
Background: Cigarette smoking poses a major public health risk. Chatbots may serve as an accessible and useful tool to promote cessation due to their high accessibility and potential in facilitating long-term personalized interactions. To increase effectiveness and acceptability, there remains a need to identify and evaluate counseling strategies for these chatbots, an aspect that has not been comprehensively addressed in previous research.
Objective: This study aims to identify effective counseling strategies for such chatbots to support smoking cessation. In addition, we sought to gain insights into smokers' expectations of and experiences with the chatbot.
Methods: This mixed methods study incorporated a web-based experiment and semistructured interviews. Smokers (N=229) interacted with either a motivational interviewing (MI)-style (n=112, 48.9%) or a confrontational counseling-style (n=117, 51.1%) chatbot. Both cessation-related (ie, intention to quit and self-efficacy) and user experience-related outcomes (ie, engagement, therapeutic alliance, perceived empathy, and interaction satisfaction) were assessed. Semistructured interviews were conducted with 16 participants, 8 (50%) from each condition, and data were analyzed using thematic analysis.
Results: Results from a multivariate ANOVA showed that participants had a significantly higher overall rating for the MI (vs confrontational counseling) chatbot. Follow-up discriminant analysis revealed that the better perception of the MI chatbot was mostly explained by the user experience-related outcomes, with cessation-related outcomes playing a lesser role. Exploratory analyses indicated that smokers in both conditions reported increased intention to quit and self-efficacy after the chatbot interaction. Interview findings illustrated several constructs (eg, affective attitude and engagement) explaining people's previous expectations and timely and retrospective experience with the chatbot.
Conclusions: The results confirmed that chatbots are a promising tool in motivating smoking cessation and the use of MI can improve user experience. We did not find extra support for MI to motivate cessation and have discussed possible reasons. Smokers expressed both relational and instrumental needs in the quitting process. Implications for future research and practice are discussed.
Affiliation(s)
- Linwei He
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
- Erkan Basar
- Behavioral Science Institute, Radboud University, Nijmegen, Netherlands
- Emiel Krahmer
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
- Reinout Wiers
- Addiction Development and Psychopathology (ADAPT)-lab, Department of Psychology and Centre for Urban Mental Health, University of Amsterdam, Amsterdam, Netherlands
- Marjolijn Antheunis
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
7. Hernandez I, Ritter RS, Preston JL. Minds of Monsters: Scary Imbalances Between Cognition and Emotion. Personality and Social Psychology Bulletin 2024; 50:1297-1312. PMID: 37078666. DOI: 10.1177/01461672231160035.
Abstract
Four studies investigate a fear of imbalanced minds hypothesis that threatening agents perceived to be relatively mismatched in capacities for cognition (e.g., self-control and reasoning) and emotion (e.g., sensations and emotions) will be rated as scarier and more dangerous by observers. In ratings of fictional monsters (e.g., zombies and vampires), agents seen as more imbalanced between capacities for cognition and emotion (high cognition-low emotion or low cognition-high emotion) were rated as scarier compared to those with equally matched levels of cognition and emotion (Studies 1 and 2). Similar effects were observed using ratings of scary animals (e.g., tigers, sharks; Studies 2 and 3), and infected humans (Study 4). Moreover, these effects are explained through diminished perceived control/predictability over the target agent. These findings highlight the role of balance between cognition and emotion in appraisal of threatening agents, in part because those agents are seen as more chaotic and uncontrollable.
8. Wu J, Du X, Liu Y, Tang W, Xue C. How the Degree of Anthropomorphism of Human-like Robots Affects Users' Perceptual and Emotional Processing: Evidence from an EEG Study. Sensors (Basel, Switzerland) 2024; 24:4809. PMID: 39123856; PMCID: PMC11314648. DOI: 10.3390/s24154809.
Abstract
Anthropomorphized robots are increasingly integrated into human social life, playing vital roles across various fields. This study aimed to elucidate the neural dynamics underlying users' perceptual and emotional responses to robots with varying levels of anthropomorphism. We investigated event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) elicited while participants viewed, perceived, and rated the affection of robots with low (L-AR), medium (M-AR), and high (H-AR) levels of anthropomorphism. EEG data were recorded from 42 participants. Results revealed that H-AR induced a more negative N1 and increased frontal theta power, but decreased P2 in early time windows. Conversely, M-AR and L-AR elicited larger P2 compared to H-AR. In later time windows, M-AR generated greater late positive potential (LPP) and enhanced parietal-occipital theta oscillations than H-AR and L-AR. These findings suggest distinct neural processing phases: early feature detection and selective attention allocation, followed by later affective appraisal. Early detection of facial form and animacy, with P2 reflecting higher-order visual processing, appeared to correlate with anthropomorphism levels. This research advances the understanding of emotional processing in anthropomorphic robot design and provides valuable insights for robot designers and manufacturers regarding emotional and feature design, evaluation, and promotion of anthropomorphic robots.
Affiliation(s)
- Chengqi Xue
- School of Mechanical Engineering, Southeast University, Suyuan Avenue 79, Nanjing 211189, China; (J.W.); (X.D.); (Y.L.); (W.T.)
9. Oudah M, Makovi K, Gray K, Battu B, Rahwan T. Perception of experience influences altruism and perception of agency influences trust in human-machine interactions. Sci Rep 2024; 14:12410. PMID: 38811749; PMCID: PMC11136977. DOI: 10.1038/s41598-024-63360-w.
Abstract
As robots become increasingly integrated into social and economic interactions, it becomes crucial to understand how people perceive a robot's mind. It has been argued that minds are perceived along two dimensions: experience, i.e., the ability to feel, and agency, i.e., the ability to act and take responsibility for one's actions. However, the influence of these perceived dimensions on human-machine interactions, particularly those involving altruism and trust, remains unknown. We hypothesize that the perception of experience influences altruism, while the perception of agency influences trust. To test these hypotheses, we pair participants with bot partners in a dictator game (to measure altruism) and a trust game (to measure trust) while varying the bots' perceived experience and agency, either by manipulating the degree to which the bot resembles humans, or by manipulating the description of the bots' ability to feel and exercise self-control. The results demonstrate that the money transferred in the dictator game is influenced by the perceived experience, while the money transferred in the trust game is influenced by the perceived agency, thereby confirming our hypotheses. More broadly, our findings support the specificity of the mind hypothesis: Perceptions of different dimensions of the mind lead to different kinds of social behavior.
Affiliation(s)
- Mayada Oudah
- Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Kinga Makovi
- Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Kurt Gray
- Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, USA
- Balaraju Battu
- Computer Science, Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Talal Rahwan
- Computer Science, Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
10. Yin Y, Jia N, Wakslak CJ. AI can help people feel heard, but an AI label diminishes this impact. Proc Natl Acad Sci U S A 2024; 121:e2319112121. PMID: 38551835; PMCID: PMC10998586. DOI: 10.1073/pnas.2319112121.
Abstract
People want to "feel heard": to perceive that they are understood, validated, and valued. Can AI serve the deeply human function of making others feel heard? Our research addresses two fundamental issues: Can AI generate responses that make human recipients feel heard, and how do human recipients react when they believe the response comes from AI? We conducted an experiment and a follow-up study to disentangle the effects of the actual source of a message and its presumed source. We found that AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. a human). Finally, in a follow-up study where the responses were rated by third-party raters, we found that compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal. Our research underscores the potential and limitations of AI in meeting human psychological needs. These findings suggest that while AI demonstrates enhanced capabilities to provide emotional support, the devaluation of AI responses poses a key challenge for effectively leveraging AI's capabilities.
Affiliation(s)
- Yidan Yin
- Lloyd Greif Center for Entrepreneurial Studies, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Nan Jia
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Cheryl J. Wakslak
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
11. Yam J, Gong T, Xu H. A stimulus exposure of 50 ms elicits the uncanny valley effect. Heliyon 2024; 10:e27977. PMID: 38533075; PMCID: PMC10963319. DOI: 10.1016/j.heliyon.2024.e27977.
Abstract
The uncanny valley (UV) effect captures the observation that artificial entities with near-human appearances tend to create feelings of eeriness. Researchers have proposed many hypotheses to explain the UV effect, but the visual processing mechanisms of the UV have yet to be fully understood. In the present study, we examined whether the UV effect is as accessible with brief stimulus exposures as with long stimulus exposures (Experiment 1). Forty-one participants, aged 21-31, rated each human-robot face presented for either a brief (50 ms) or long duration (3 s) in terms of attractiveness, eeriness, and humanness (UV indices) on a 7-point Likert scale. We found that brief and long exposures to stimuli generated a similar UV effect. This suggests that the UV effect is accessible at early visual processing. We then examined the effect of exposure duration on the categorisation of visual stimuli in Experiment 2. Thirty-three participants, aged 21-31, categorised faces as either human or robot in a two-alternative forced choice task. Their response accuracy and variance were recorded. We found that brief stimulus exposures generated significantly higher response variation and more errors than the long exposure condition. This indicated that participants were more uncertain in categorising faces in the brief exposure condition due to insufficient time. Further comparisons between Experiments 1 and 2 revealed that the eeriest faces were not the hardest to categorise. Overall, these findings indicate (1) that both the UV effect and categorical uncertainty can be elicited through brief stimulus exposure, but (2) that categorical uncertainty is unlikely to cause the UV effect. These findings provide insights into the perception of robotic faces and implications for the design of robots, androids, avatars, and artificial intelligence agents.
Affiliation(s)
- Jodie Yam
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore
- Tingchen Gong
- Department of Neuroscience, Physiology and Pharmacology, University College London, UK
- Hong Xu
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore
12. Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. PMID: 38605842; PMCID: PMC11008604. DOI: 10.3389/fpsyg.2024.1322781.
Abstract
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, then how people treat AI appears to carry over into how they treat other people due to activating schemas that are congruent to those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
13. Chen Y, Stephani T, Bagdasarian MT, Hilsmann A, Eisert P, Villringer A, Bosse S, Gaebler M, Nikulin VV. Realness of face images can be decoded from non-linear modulation of EEG responses. Sci Rep 2024; 14:5683. PMID: 38454099; PMCID: PMC10920746. DOI: 10.1038/s41598-024-56130-1.
Abstract
Artificially created human faces play an increasingly important role in our digital world. However, the so-called uncanny valley effect may cause people to perceive highly, yet not perfectly human-like faces as eerie, bringing challenges to the interaction with virtual agents. At the same time, the neurocognitive underpinnings of the uncanny valley effect remain elusive. Here, we utilized an electroencephalography (EEG) dataset of steady-state visual evoked potentials (SSVEP) in which participants were presented with human face images of different stylization levels ranging from simplistic cartoons to actual photographs. Assessing neuronal responses both in frequency and time domain, we found a non-linear relationship between SSVEP amplitudes and stylization level, that is, the most stylized cartoon images and the real photographs evoked stronger responses than images with medium stylization. Moreover, realness of even highly similar stylization levels could be decoded from the EEG data with task-related component analysis (TRCA). Importantly, we also account for confounding factors, such as the size of the stimulus face's eyes, which previously have not been adequately addressed. Together, this study provides a basis for future research and neuronal benchmarking of real-time detection of face realness regarding three aspects: SSVEP-based neural markers, efficient classification methods, and low-level stimulus confounders.
Affiliation(s)
- Yonghao Chen
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tilman Stephani
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Anna Hilsmann
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Peter Eisert
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic of Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Sebastian Bosse
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Michael Gaebler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Vadim V Nikulin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
14. Lee I, Hahn S. On the relationship between mind perception and social support of chatbots. Front Psychol 2024; 15:1282036. PMID: 38510306; PMCID: PMC10952123. DOI: 10.3389/fpsyg.2024.1282036.
Abstract
The social support provided by chatbots is typically designed to mimic the way humans support others. However, individuals have more conflicting attitudes toward chatbots providing emotional support (e.g., empathy and encouragement) compared to informational support (e.g., useful information and advice). This difference may be related to whether individuals associate a certain type of support with the realm of the human mind and whether they attribute human-like minds to chatbots. In the present study, we investigated whether perceiving human-like minds in chatbots affects users' acceptance of various support provided by the chatbot. In the experiment, the chatbot posed questions about participants' interpersonal stress events, prompting them to write down their stressful experiences. Depending on the experimental condition, the chatbot provided two kinds of social support: informational support or emotional support. Our results showed that when participants explicitly perceived a human-like mind in the chatbot, they considered the support to be more helpful in resolving stressful events. The relationship between implicit mind perception and perceived message effectiveness differed depending on the type of support. More specifically, if participants did not implicitly attribute a human-like mind to the chatbot, emotional support undermined the effectiveness of the message, whereas informational support did not. The present findings suggest that users' mind perception is essential for understanding the user experience of chatbot social support. Our findings imply that informational support can be trusted when building social support chatbots. In contrast, the effectiveness of emotional support depends on the users implicitly giving the chatbot a human-like mind.
Affiliation(s)
- Sowon Hahn
- Human Factors Psychology Lab, Department of Psychology, Seoul National University, Seoul, Republic of Korea
15. Grigoreva AD, Rottman J, Tasimi A. When does "no" mean no? Insights from sex robots. Cognition 2024; 244:105687. PMID: 38154450. DOI: 10.1016/j.cognition.2023.105687.
Abstract
Although sexual assault is widely accepted as morally wrong, not all instances of sexual assault are evaluated in the same way. Here, we ask whether different characteristics of victims affect people's moral evaluations of sexual assault perpetrators, and if so, how. We focus on sex robots (i.e., artificially intelligent humanoid social robots designed for sexual gratification) as victims in the present studies because they serve as a clean canvas onto which we can paint different human-like attributes to probe people's moral intuitions regarding sensitive topics. Across four pre-registered experiments conducted with American adults on Prolific (N = 2104), we asked people to evaluate the wrongness of sexual assault against AI-powered robots. People's moral judgments were influenced by the victim's mental capacities (Studies 1 & 2), the victim's interpersonal function (Study 3), the victim's ontological type (Study 4), and the transactional context of the human-robot relationship (Study 4). Overall, by investigating moral reasoning about transgressions against AI robots, we were able to gain unique insights into how people's moral judgments about sexual transgressions can be influenced by victim attributes.
Affiliation(s)
- Joshua Rottman
- Department of Psychology, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA
- Arber Tasimi
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
16. Stein JP, Messingschlager T, Gnambs T, Hutmacher F, Appel M. Attitudes towards AI: measurement and associations with personality. Sci Rep 2024; 14:2909. PMID: 38316898; PMCID: PMC10844202. DOI: 10.1038/s41598-024-53335-2.
Abstract
Artificial intelligence (AI) has become an integral part of many contemporary technologies, such as social media platforms, smart devices, and global logistics systems. At the same time, research on the public acceptance of AI shows that many people feel quite apprehensive about the potential of such technologies-an observation that has been connected to both demographic and sociocultural user variables (e.g., age, previous media exposure). Yet, due to divergent and often ad-hoc measurements of AI-related attitudes, the current body of evidence remains inconclusive. Likewise, it is still unclear if attitudes towards AI are also affected by users' personality traits. In response to these research gaps, we offer a two-fold contribution. First, we present a novel, psychologically informed questionnaire (ATTARI-12) that captures attitudes towards AI as a single construct, independent of specific contexts or applications. Having observed good reliability and validity for our new measure across two studies (N1 = 490; N2 = 150), we examine several personality traits-the Big Five, the Dark Triad, and conspiracy mentality-as potential predictors of AI-related attitudes in a third study (N3 = 298). We find that agreeableness and younger age predict a more positive view towards artificially intelligent technology, whereas the susceptibility to conspiracy beliefs connects to a more negative attitude. Our findings are discussed considering potential limitations and future directions for research and practice.
Affiliation(s)
- Jan-Philipp Stein
- Department of Media Psychology, Institute for Media Research, Chemnitz University of Technology, Thüringer Weg 11, 09126, Chemnitz, Germany
- Tanja Messingschlager
- Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
- Timo Gnambs
- Leibniz Institute for Educational Trajectories, Bamberg, Germany
- Fabian Hutmacher
- Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
- Markus Appel
- Psychology of Communication and New Media, Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
17. Zhang Y, Cao Y, Proctor RW, Liu Y. Emotional experiences of service robots' anthropomorphic appearance: a multimodal measurement method. Ergonomics 2023; 66:2039-2057. PMID: 36803343. DOI: 10.1080/00140139.2023.2182751.
Abstract
Anthropomorphic appearance is a key factor affecting users' attitudes and emotions. This research aimed to measure the emotional experience caused by robots' anthropomorphic appearance at three levels - high, moderate, and low - using multimodal measurement. Fifty participants' physiological and eye-tracker data were recorded synchronously while they observed robot images that were displayed in random order. Afterward, the participants reported subjective emotional experiences and attitudes towards those robots. The results showed that the images of the moderately anthropomorphic service robots induced higher pleasure and arousal ratings, and yielded significantly larger pupil diameter and faster saccade velocity, than did the low or high robots. Moreover, participants' facial electromyography, skin conductance, and heart-rate responses were higher when observing moderately anthropomorphic service robots. An implication of the research is that service robots' appearance should be designed to be moderately anthropomorphic; too many human-like features or machine-like features may disturb users' positive emotions and attitudes.
Practitioner summary: This research aimed to measure the emotional experience caused by three types of anthropomorphic service robots in a multimodal measurement experiment. The results showed that moderately anthropomorphic service robots evoked more positive emotion than high and low anthropomorphic robots. Too many human-like features or machine-like features may disturb users' positive emotions.
Affiliation(s)
- Yun Zhang
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
- Yaqin Cao
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
- Robert W Proctor
- Department of Psychological Sciences, Purdue University, West Lafayette, USA
- Yu Liu
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
18. Kang Y. Robot Death and Human Grief in Films: Qualitative Study. Omega - Journal of Death and Dying 2023; 88:66-94. PMID: 34452593. DOI: 10.1177/00302228211038139.
Abstract
Extant grief studies examine the way humans mourn the loss of a nonhuman, be it an animal, object, or abstract concept. Yet little is known about grief when it comes to robots. As humans are increasingly brought into contact with more human-like machines, it is relevant to consider the nature of our relationship to these technologies. Centered on a qualitative analysis of 35 films, this study seeks to determine whether humans experience grief when a robot is destroyed, and if so, under what conditions. Our observations of the relevant film scenes suggest that eight variables play a role in determining whether and to what extent a human experiences grief in response to a robot's destruction. As a result, we have devised a psychological mechanism by which different types of grief can be classified as a function of these eight variables.
Affiliation(s)
- Youngjin Kang
- Department of Psychology, Haramaya University, Dire Dawa, Ethiopia
19. Abubshait A, Weis PP, Momen A, Wiese E. Perceptual discrimination in the face perception of robots is attenuated compared to humans. Sci Rep 2023; 13:16708. PMID: 37794045; PMCID: PMC10550918. DOI: 10.1038/s41598-023-42510-6.
Abstract
When interacting with groups of robots, we tend to perceive them as a homogenous group where all group members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., individuation). This can undermine trust and performance in human-robot teams. One way to overcome this issue is by designing robots that can be individuated such that each team member can be provided tasks based on its actual skills. In two experiments, we examine if humans can effectively individuate robots: Experiment 1 (n = 225) investigates how individuation performance of robot stimuli compares to that of human stimuli that either belong to a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they seem to individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better compared to robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli is primarily due to a reduced perceptual experience with robot stimuli or due to motivational aspects (i.e., robots as potential social outgroup) should be examined in future studies.
Affiliation(s)
- Abdulaziz Abubshait
- Italian Institute of Technology, Genoa, Italy
- George Mason University, Fairfax, VA, USA
- Patrick P Weis
- George Mason University, Fairfax, VA, USA
- Julius Maximilians University, Würzburg, Germany
- Ali Momen
- George Mason University, Fairfax, VA, USA
- Air Force Academy, Colorado Springs, CO, USA
- Eva Wiese
- George Mason University, Fairfax, VA, USA
- Berlin Institute of Technology, Berlin, Germany
20. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023; 6:1241290. PMID: 37854078; PMCID: PMC10579608. DOI: 10.3389/frai.2023.1241290.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara
- Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
21. Chu Y, Liu P. Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently? Cognition 2023; 239:105575. PMID: 37517138. DOI: 10.1016/j.cognition.2023.105575.
Abstract
There is an increasing interest in understanding human-machine differences in morality. Prior research relying on Trolley-like, moral-impersonal dilemmas suggests that people might apply similar norms to humans and machines but judge their identical decisions differently. We examined people's moral norms imposed on humans and robots (Study 1) and moral judgments of their decisions (Study 2) in the Trolley and Footbridge dilemmas. Participants imposed similar, utilitarian norms on them in Trolley but different norms in Footbridge, where fewer participants thought humans versus robots should take action in the moral-personal dilemma. Unlike previous research, we observed a norm-judgment symmetry: the prospective norm aligned with the retrospective judgment. Decisions that were more strongly required were judged as more moral across agents and dilemmas. We discuss the theoretical implications for machine morality.
Affiliation(s)
- Yueying Chu
- Center for Psychological Sciences, Zhejiang University, 310063 Hangzhou, Zhejiang, China; Department of Psychology and Behavioral Sciences, Zhejiang University, 310030 Hangzhou, Zhejiang, China
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, 310063 Hangzhou, Zhejiang, China
22. McKee KR, Bai X, Fiske ST. Humans perceive warmth and competence in artificial intelligence. iScience 2023; 26:107256. PMID: 37520710; PMCID: PMC10371826. DOI: 10.1016/j.isci.2023.107256.
Abstract
Artificial intelligence (A.I.) increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This challenges both the deployment of beneficial A.I. technology and the development of deep learning systems that depend on humans for oversight, direction, and regulation. Nine studies (N = 3,300) demonstrate that social-cognitive processes guide human interactions across a diverse range of real-world A.I. systems. Across studies, perceived warmth and competence emerge prominently in participants' impressions of A.I. systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence and autonomy. In particular, participants perceive systems that optimize interests aligned with human interests as warmer and systems that operate independently from human direction as more competent. Finally, a prisoner's dilemma game shows that warmth and competence judgments predict participants' willingness to cooperate with a deep-learning system. These results underscore the generality of intent detection to perceptions of a broad array of algorithmic actors.
Affiliation(s)
- Xuechunzi Bai
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- School of Public and International Affairs, Princeton University, Princeton, NJ 08540, USA
- Susan T. Fiske
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- School of Public and International Affairs, Princeton University, Princeton, NJ 08540, USA
23. Berent I. The "Hard Problem of Consciousness" Arises from Human Psychology. Open Mind (Camb) 2023; 7:564-587. PMID: 37637301; PMCID: PMC10449398. DOI: 10.1162/opmi_a_00094.
Abstract
Consciousness presents a "hard problem" to scholars. At stake is how the physical body gives rise to subjective experience. Why consciousness is "hard", however, is uncertain. One possibility is that the challenge arises from ontology-because consciousness is a special property/substance that is irreducible to the physical. Here, I show how the "hard problem" emerges from two intuitive biases that lie deep within human psychology: Essentialism and Dualism. To determine whether a subjective experience is transformative, people judge whether the experience pertains to one's essence, and per Essentialism, one's essence lies within one's body. Psychological states that seem embodied (e.g., "color vision" ∼ eyes) can thus give rise to transformative experience. Per intuitive Dualism, however, the mind is distinct from the body, and epistemic states (knowledge and beliefs) seem particularly ethereal. It follows that conscious perception (e.g., "seeing color") ought to seem more transformative than conscious knowledge (e.g., knowledge of how color vision works). Critically, the transformation arises precisely because the conscious perceptual experience seems readily embodied (rather than distinct from the physical body, as the ontological account suggests). In line with this proposal, five experiments show that, in laypeople's view (a) experience is transformative only when it seems anchored in the human body; (b) gaining a transformative experience effects a bodily change; and (c) the magnitude of the transformation correlates with both (i) the perceived embodiment of that experience, and (ii) with Dualist intuitions, generally. These results cannot solve the ontological question of whether consciousness is distinct from the physical. But they do suggest that the roots of the "hard problem" are partly psychological.
24. Kawai Y, Miyake T, Park J, Shimaya J, Takahashi H, Asada M. Anthropomorphism-based causal and responsibility attributions to robots. Sci Rep 2023; 13:12234. PMID: 37507519; PMCID: PMC10382529. DOI: 10.1038/s41598-023-39435-5.
Abstract
People tend to expect mental capabilities in a robot based on anthropomorphism and often attribute the cause and responsibility for a failure in human-robot interactions to the robot. This study investigated the relationship between mind perception, a psychological scale of anthropomorphism, and attribution of the cause and responsibility in human-robot interactions. Participants played a repeated noncooperative game with a human, robot, or computer agent, where their monetary rewards depended on the outcome. They completed questionnaires on mind perception regarding the agent and whether the participant's own or the agent's decisions resulted in the unexpectedly small reward. We extracted two factors of Experience (capacity to sense and feel) and Agency (capacity to plan and act) from the mind perception scores. Then, correlation and structural equation modeling (SEM) approaches were used to analyze the data. The findings showed that mind perception influenced attribution processes differently for each agent type. In the human condition, decreased Agency score during the game led to greater causal attribution to the human agent, consequently also increasing the degree of responsibility attribution to the human agent. In the robot condition, the post-game Agency score decreased the degree of causal attribution to the robot, and the post-game Experience score increased the degree of responsibility to the robot. These relationships were not observed in the computer condition. The study highlights the importance of considering mind perception in designing appropriate causal and responsibility attribution in human-robot interactions and developing socially acceptable robots.
Affiliation(s)
- Yuji Kawai
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Tomohito Miyake
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Suita, Osaka, 565-0871, Japan
- Jihoon Park
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan
- Jiro Shimaya
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, 560-0043, Japan
- Hideyuki Takahashi
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, 560-0043, Japan
- Minoru Asada
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan
- International Professional University of Technology in Osaka, Kita-ku, Osaka, 530-0001, Japan
- Chubu University Academy of Emerging Sciences, Kasugai, Aichi, 487-8501, Japan
25. Esterwood C, Robert LP. The theory of mind and human-robot trust repair. Sci Rep 2023; 13:9877. PMID: 37337033; PMCID: PMC10279664. DOI: 10.1038/s41598-023-37032-0.
Abstract
Nothing is perfect, and robots can make as many mistakes as any human, which can decrease trust in them. However, robots can repair a human's trust after making mistakes through various trust repair strategies such as apologies, denials, and promises. To date, the reported efficacy of these trust repairs in the human-robot interaction literature has been mixed. One reason for this might be that humans have different perceptions of a robot's mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion. Likewise, other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human-robot interaction. To investigate this, we conducted a study involving 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform where the robot and participant worked in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and employed either a promise, denial, or apology after each mistake. Participants then rated their trust in the robot before and after each mistake. The results indicate that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.
Collapse
Affiliation(s)
- Connor Esterwood
- School of Information, University of Michigan, Ann Arbor, 48109, USA.
| | - Lionel P Robert
- School of Information, University of Michigan, Ann Arbor, 48109, USA
- Robotics Department, University of Michigan, Ann Arbor, 48109, USA
| |
Collapse
|
26
|
Vuong QH, La VP, Nguyen MH, Jin R, La MK, Le TT. How AI's Self-Prolongation Influences People's Perceptions of Its Autonomous Mind: The Case of U.S. Residents. Behav Sci (Basel) 2023; 13:470. [PMID: 37366721 DOI: 10.3390/bs13060470] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 05/25/2023] [Accepted: 06/02/2023] [Indexed: 06/28/2023] Open
Abstract
The expanding integration of artificial intelligence (AI) into various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles in trying to better understand our own minds, and now we must also find ways to make sense of the minds of AI. The question of whether AI is capable of independent thinking deserves special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as the desire for survival, to make assessments. Employing information-processing-based Bayesian Mindsponge Framework (BMF) analytics on a dataset of 266 residents of the United States, we found that the more people believe that an AI agent seeks continued functioning, the more they believe in that AI agent's capability of having a mind of its own. We also found that this association becomes stronger the more familiar a person is with personally interacting with AI. This suggests a directional pattern of value reinforcement in perceptions of AI. As the information processing of AI becomes even more sophisticated in the future, it will be much harder to set clear boundaries about what it means to have an autonomous mind.
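The reported pattern, an association between perceived self-prolongation and perceived mind that is strengthened by AI familiarity, has the shape of a moderated (interaction) effect. The sketch below shows that general shape as a Bayesian regression with an interaction term in PyMC; it is not the authors' Bayesian Mindsponge Framework pipeline, and the file and variable names are hypothetical.

```python
# Generic moderated-regression illustration, NOT the authors' BMF analysis.
# File name and variable names are hypothetical.
import pandas as pd
import pymc as pm
import arviz as az

df = pd.read_csv("us_residents_survey.csv")
x = df["perceived_self_prolongation"].to_numpy()  # belief that the AI seeks continued functioning
m = df["ai_familiarity"].to_numpy()               # familiarity with personally interacting with AI
y = df["perceived_autonomous_mind"].to_numpy()    # belief that the AI has a mind of its own

with pm.Model():
    intercept = pm.Normal("intercept", 0, 10)
    b_x = pm.Normal("b_selfprolong", 0, 5)
    b_m = pm.Normal("b_familiarity", 0, 5)
    b_xm = pm.Normal("b_interaction", 0, 5)   # > 0 would mean a stronger link for familiar users
    sigma = pm.HalfNormal("sigma", 5)
    mu = intercept + b_x * x + b_m * m + b_xm * x * m
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(az.summary(idata, var_names=["b_selfprolong", "b_familiarity", "b_interaction"]))
```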
Collapse
Affiliation(s)
- Quan-Hoang Vuong
- Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
| | - Viet-Phuong La
- Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
- A.I. for Social Data Lab (AISDL), Vuong & Associates, Hanoi 100000, Vietnam
| | - Minh-Hoang Nguyen
- Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
| | - Ruining Jin
- Civil, Commercial and Economic Law School, China University of Political Science and Law, Beijing 100088, China
| | - Minh-Khanh La
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi 100000, Vietnam
| | - Tam-Tri Le
- Centre for Interdisciplinary Social Research, Phenikaa University, Yen Nghia Ward, Ha Dong District, Hanoi 100803, Vietnam
| |
Collapse
|
27
|
Cucciniello I, Sangiovanni S, Maggi G, Rossi S. Mind Perception in HRI: Exploring Users' Attribution of Mental and Emotional States to Robots with Different Behavioural Styles. Int J Soc Robot 2023; 15:867-877. [PMID: 37251279 PMCID: PMC10040176 DOI: 10.1007/s12369-023-00989-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/03/2023] [Indexed: 03/29/2023]
Abstract
Theory of Mind is crucial for understanding and predicting others' behaviour, underpinning the ability to engage in complex social interactions. Many studies have evaluated a robot's ability to attribute thoughts, beliefs, and emotions to humans during social interactions, but few have investigated humans' attribution of such capabilities to robots. This study contributes to this direction by evaluating how the cognitive and emotional capabilities that humans attribute to a robot may be influenced by the robot's behavioural characteristics during the interaction. To this end, we used the Dimensions of Mind Perception questionnaire to measure participants' perceptions of different robot behaviour styles, namely Friendly, Neutral, and Authoritarian, which we designed and validated in our previous works. The results confirmed our hypotheses: people judged the robot's mental capabilities differently depending on the interaction style. In particular, the Friendly robot was considered more capable of experiencing positive states such as Pleasure, Desire, Consciousness, and Joy, whereas the Authoritarian robot was considered more capable than the Friendly one of experiencing negative states such as Fear, Pain, and Rage. Moreover, the results confirmed that interaction styles differently affected participants' perceptions on the Agency dimension, Communication, and Thought.
Collapse
Affiliation(s)
- Ilenia Cucciniello
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
| | - Sara Sangiovanni
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
| | - Gianpaolo Maggi
- Department of Psychology, University of Campania Luigi Vanvitelli, Viale Ellittico, 31, 81100 Caserta, Italy
| | - Silvia Rossi
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio 80, 80125 Naples, Italy
| |
Collapse
|
28
|
Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. GROUP & ORGANIZATION MANAGEMENT 2023. [DOI: 10.1177/10596011231160574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
We theorize about human-team collaboration with AI by drawing upon research in groups and teams, social psychology, information systems, engineering, and beyond. Based on our review, we focus on two main issues in the teams and AI arena. The first is whether the team generally views AI positively or negatively. The second is whether the decision to use AI is left up to the team members (voluntary use of AI) or mandated by top management or other policy-setters in the organization. These two aspects guide our creation of a team-level conceptual framework modeling how AI, introduced as a mandated addition to the team, can have asymmetric effects on collaboration depending on the team's attitudes toward AI. When the team views AI positively, mandatory use suppresses collaboration in the team; but when the team holds negative attitudes toward AI, mandatory use elevates team collaboration. Our model emphasizes the need to manage team attitudes and discretion strategies, and it points to new research directions regarding AI's implications for teamwork.
Collapse
Affiliation(s)
| | | | - Chester Spell
- Rutgers University School of Business, Camden NJ, USA
| | | | | |
Collapse
|
29
|
Moral Judgments of Human vs. AI Agents in Moral Dilemmas. Behav Sci (Basel) 2023; 13:bs13020181. [PMID: 36829410 PMCID: PMC9951994 DOI: 10.3390/bs13020181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 01/26/2023] [Accepted: 02/06/2023] [Indexed: 02/19/2023] Open
Abstract
Artificial intelligence has quickly integrated into human society, and its moral decision-making has also begun to seep slowly into our lives. The significance of research on moral judgments of artificial intelligence behavior is becoming increasingly prominent. The present research aims to examine how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments: participants rated AI agents' behavior as more immoral and deserving of more blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments: participants rated action (a utilitarian act) as less moral and permissible, and more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) yielded a pattern of results consistent with Experiments 1 and 2. This suggests that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by the fact that people engage different processing systems when making moral judgments in different types of dilemmas.
Collapse
|
30
|
Students' adoption of AI-based teacher-bots (T-bots) for learning in higher education. INFORMATION TECHNOLOGY & PEOPLE 2023. [DOI: 10.1108/itp-02-2021-0152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Purpose: The purpose of this paper is to investigate students' adoption intention (ADI) and actual usage (ATU) of artificial intelligence (AI)-based teacher bots (T-bots) for learning, using the technology acceptance model (TAM) and context-specific variables. Design/methodology/approach: A mixed-method design was used, in which quantitative and qualitative approaches explored the adoption of T-bots for learning. Overall, 45 principals/directors/deans/professors were interviewed, and NVivo 8.0 was used for the interview data analysis. In addition, 1,380 students of higher education institutes were surveyed, and the collected data were analyzed using the partial least squares structural equation modeling (PLS-SEM) technique. Findings: The antecedents of T-bot ADI were perceived ease of use, perceived usefulness, personalization, interactivity, perceived trust, anthropomorphism, and perceived intelligence. ADI influences the ATU of T-bots, and this relationship is negatively moderated by stickiness to learning from human teachers in the classroom. The study also captures the views of senior authorities at higher education institutions in India toward the adoption of T-bots. Practical implications: The research provides distinctive insights for principals, directors, and professors in higher education institutes to understand the factors affecting students' behavioral intention and use of T-bots. Developers and designers of T-bots need to ensure that T-bots are interactive, provide personalized information to students, and have anthropomorphic characteristics. Education policymakers can also draw on these adoption factors when developing policies related to T-bots and their implications for education. Originality/value: T-bots are a new disruptive technology in the education sector, and this is a first step in exploring their adoption factors. The TAM is extended with context-specific factors related to T-bot technology to give the proposed model comprehensive explanatory power. The research identifies unique antecedents of T-bot adoption.
Collapse
|
31
|
Benjamin R, Heine SJ. From Freud to Android: Constructing a Scale of Uncanny Feelings. J Pers Assess 2023; 105:121-133. [PMID: 35353019 DOI: 10.1080/00223891.2022.2048842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
The uncanny valley is a topic for engineers, animators, and psychologists, yet uncanny emotions lack a clear definition. Across three studies, we developed an 8-item measure of unnerved feelings and found that it was discriminable from other affective experiences. In Study 1, we conducted an exploratory factor analysis that yielded two factors: an unnerved factor, which relates to emotional reactions to the uncanny, and a disoriented factor, which relates to mental state changes that more distally follow uncanny experiences. Focusing on the unnerved measure, Study 2 tested the scale's convergent and discriminant validity, finding that participants who watched an uncanny video were more unnerved than those who watched a disgusting, fearful, or neutral video. In Study 3, we determined that our scale detects unnerved feelings created during early 2020 by the coronavirus pandemic, a distinct source of uncanniness. These studies contribute to the psychological and interdisciplinary understanding of this strange, eerie phenomenon of being confronted with what looms just beyond our understanding.
Collapse
Affiliation(s)
- Rachele Benjamin
- Department of Psychology, University of British Columbia, Vancouver, Canada
| | - Steven J Heine
- Department of Psychology, University of British Columbia, Vancouver, Canada
| |
Collapse
|
32
|
Automated journalism: The effects of AI authorship and evaluative information on the perception of a science journalism article. COMPUTERS IN HUMAN BEHAVIOR 2023. [DOI: 10.1016/j.chb.2022.107445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
33
|
What shapes our attitudes towards algorithms in urban governance? The role of perceived friendliness and controllability of the city, and human-algorithm cooperation. COMPUTERS IN HUMAN BEHAVIOR 2023. [DOI: 10.1016/j.chb.2023.107653] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|
34
|
Ringwald M, Theben P, Gerlinger K, Hedrich A, Klein B. How Should Your Assistive Robot Look Like? A Scoping Review on Embodiment for Assistive Robots. J INTELL ROBOT SYST 2023. [DOI: 10.1007/s10846-022-01781-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Assistive robots have the potential to support older people and people with disabilities in various tasks so that they can live more independently. One research challenge is designing the appearance of assistive robots so that they are accepted by prospective users and encourage interaction. This scoping review aims to identify studies that report preferences in order to derive indicators for the embodiment of a robot with assistance functions. A systematic literature search was conducted in three electronic databases: IEEE Xplore, ACM Digital Library, and PubMed Central (PMC). Included papers date from 2015 onward and report empirical studies on the preferred appearance of service robots. The search yielded 1,760 papers, of which 29 were included: 20 reported quantitative studies, three qualitative studies, and six mixed-methods designs. From these papers, seven categories of robot appearances and design components could be extracted. Most papers focused on humanoid or humanlike robots and components such as facial features or gender aspects. Others relied on designs that reflect the robot's function or that simulate emotions through light applications. Only eight studies focused on older adults, and none on people with disabilities. The appearance of a humanoid robot is often described as favorable, but the definition of 'humanoid' varies widely across the analyzed studies, and an explicit allocation of features is not possible. Robot designers can extract various aspects from the papers for their practical work; however, more research is necessary for generalization.
Collapse
|
35
|
Han E, Yin D, Zhang H. Bots with Feelings: Should AI Agents Express Positive Emotion in Customer Service? INFORMATION SYSTEMS RESEARCH 2022. [DOI: 10.1287/isre.2022.1179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
The rise of emotional intelligence technology and the recent debate about the possibility of a “sentient” artificial intelligence (AI) underscore the need to study the role of emotion during people’s interactions with AIs. In customer service, human employees are increasingly replaced by AI agents, such as chatbots, and these AI agents are often equipped with emotion-expressing capabilities to replicate the positive impact of human-expressed positive emotion. But is this indeed beneficial? This research explores how, when, and why an AI agent’s expression of positive emotion affects customers’ service evaluations. Through controlled experiments in which subjects interacted with a service agent (AI or human) to resolve a hypothetical service issue, we provide answers to these questions. We show that AI-expressed positive emotion can influence customers affectively (by evoking customers’ positive emotions) and cognitively (by violating customers’ expectations) in opposite directions. Thus, positive emotion expressed by an AI agent (versus a human employee) is less effective in facilitating service evaluations. We further show that, depending on customers’ expectations toward their relationship with a service agent, AI-expressed positive emotion may enhance or hurt service evaluations. Overall, our work provides useful guidance on how and when companies can best deploy emotion-expressing AI agents.
Collapse
Affiliation(s)
- Elizabeth Han
- Desautels Faculty of Management, McGill University, Montréal, Quebec H3A 1G5, Canada
| | - Dezhi Yin
- Muma College of Business, University of South Florida, Tampa, Florida 33620
| | - Han Zhang
- Scheller College of Business, Georgia Institute of Technology, Atlanta, Georgia 30308
| |
Collapse
|
36
|
Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2022. [DOI: 10.1016/j.chbr.2022.100263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
|
37
|
Heffernan KJ, Vetere F, Chang S. Socio-technical context for insertable devices. Front Psychol 2022; 13:991345. [DOI: 10.3389/fpsyg.2022.991345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 10/20/2022] [Indexed: 11/23/2022] Open
Abstract
In this article, we show that voluntarily inserting devices inside the body is contested, and we seek to understand why. The article discusses insertables as a source of contestation. To describe and understand the social acceptability of, reactions toward, and rhetoric surrounding insertable devices, we examine (i) the technical capabilities of insertable devices (the technical context), (ii) human reactions toward insertables (the social context), and (iii) the regulatory environment. The paper offers explanations for the misperceptions about insertables.
Collapse
|
38
|
Pauketat JV, Anthis JR. Predicting the moral consideration of artificial intelligences. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
39
|
Improving evaluations of advanced robots by depicting them in harmful situations. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
40
|
Gorissen S, Lillie HM, Chavez-Yenter D, Vega A, John KK, Jensen JD. Explicitness, disgust, and safe sex behavior: A message experiment with U.S. adults. Soc Sci Med 2022; 313:115414. [PMID: 36209520 DOI: 10.1016/j.socscimed.2022.115414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 09/26/2022] [Accepted: 09/28/2022] [Indexed: 01/26/2023]
Abstract
Sexual health risks are challenging to communicate, given the potential negative reactions of target audiences to explicit language. Grounded in research on pathogen avoidance, the current study examined the impact of varying levels of explicit language on message perceptions and safe sex behavioral intentions. U.S. adults (N = 498) were randomly assigned to view messages detailing pandemic safe sexual behavior that contained either low or high levels of explicit language. Highly explicit language significantly increased perceived disgust, which in turn indirectly linked highly explicit language with increased intentions to engage in safe sex behavior. Individual difference variables moderated the impact of message explicitness; dispositional hygiene disgust moderated the impact of highly explicit, hygiene-focused messages on safe sex intentions, and those with relatively low levels of dispositional disgust were more positively affected by explicit language. The results suggest the value of increased message explicitness for sexual health communication and have implications for pathogen avoidance behaviors, the behavioral immune system, and dispositional and affective forms of disgust.
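The indirect path the abstract describes (explicit language raising perceived disgust, which in turn raises intentions) is a standard mediation analysis. A minimal sketch of that step is shown below using pingouin's bootstrap mediation routine; the file and column names are hypothetical, and the published model also involves moderation, which is not shown.

```python
# Minimal mediation sketch (explicitness -> perceived disgust -> safe-sex intentions).
# File and column names are hypothetical; the published moderated model is richer.
import pandas as pd
import pingouin as pg

df = pd.read_csv("message_experiment.csv")
# high_explicit: 0 = low-explicit message, 1 = high-explicit message
# disgust: perceived disgust rating; intentions: safe-sex behavioral intentions

med = pg.mediation_analysis(
    data=df, x="high_explicit", m="disgust", y="intentions",
    n_boot=5000, seed=42,
)
print(med.round(3))  # check whether the bootstrap CI of the indirect effect excludes zero
```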
Collapse
Affiliation(s)
- Sebastiaan Gorissen
- Minot State University, Division of Art and Professional Communication, 500 University Avenue West, Minot, ND, 58707, USA.
| | | | | | | | | | | |
Collapse
|
41
|
Predicting the change trajectory of employee robot-phobia in the workplace: The role of perceived robot advantageousness and anthropomorphism. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
42
|
Trouvain J, Weiss B. Thoughts on the usage of audible smiling in speech synthesis applications. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.885657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
In this perspective paper, we explore the question of how audible smiling can be integrated into speech synthesis applications. In human-human communication, smiling can serve various functions, such as signaling politeness or acting as a marker of trustworthiness and other aspects that raise and maintain the social likeability of a speaker. In human-machine communication, however, audible smiling is nearly unexplored, although it could be an advantage in applications such as dialog systems. The rather limited knowledge of the details of audible smiling, and of how to exploit them in speech synthesis applications, is a great challenge. This is also true for modeling smiling in spoken dialogs and testing it with users. This paper therefore argues for filling the research gaps in identifying the factors that constitute and affect audible smiling so that it can be incorporated into speech synthesis applications. The major claim is that the dynamics of audible smiling should be addressed at various levels.
Collapse
|
43
|
Moral psychology of nursing robots: Exploring the role of robots in dilemmas of patient autonomy. EUROPEAN JOURNAL OF SOCIAL PSYCHOLOGY 2022. [DOI: 10.1002/ejsp.2890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
44
|
Cross-Cultural Differences in Comfort with Humanlike Robots. Int J Soc Robot 2022; 14:1865-1873. [PMID: 36120116 PMCID: PMC9466302 DOI: 10.1007/s12369-022-00920-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2022] [Indexed: 11/10/2022]
Abstract
The uncanny valley hypothesis describes how people are often less comfortable with highly humanlike robots. However, this discomfort may vary cross-culturally. This research tests how increasing robots’ physical and mental human likeness affects people’s comfort with robots in the United States and Japan, countries whose cultural and religious contexts differ in ways that are relevant to the evaluation of humanlike robots. We find that increasing physical and mental human likeness decreases comfort among Americans but not among Japanese participants. One potential explanation for these differences is that Japanese participants perceived robots as more animate, having more of a mind, a soul, and consciousness, relative to American participants.
Collapse
|
45
|
A study on the influence of service robots’ level of anthropomorphism on the willingness of users to follow their recommendations. Sci Rep 2022; 12:15266. [PMID: 36088470 PMCID: PMC9463504 DOI: 10.1038/s41598-022-19501-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Accepted: 08/30/2022] [Indexed: 11/16/2022] Open
Abstract
Service robots are increasingly deployed in various industries, including tourism. Despite extensive research on users’ experience in interacting with these robots, questions remain about the factors that influence users’ compliance. Through three online studies, we investigate the effect of robot anthropomorphism and language style on customers’ willingness to follow the robot’s recommendations, as well as the mediating role of perceived mind and persuasiveness in this relationship. Study 1 (n = 89) shows that a service robot with a higher level of anthropomorphic features positively influences users’ willingness to follow its recommendations, while language style does not affect compliance. Study 2a (n = 168) confirms this finding when participants were presented with a tablet versus a service robot with an anthropomorphic appearance; again, communication style did not affect compliance. Finally, Study 2b (n = 122) supports an indirect effect of anthropomorphism level on the willingness to follow recommendations through perceived mind and, in turn, persuasiveness. The findings provide valuable insight for enhancing human–robot interaction in service settings.
Collapse
|
46
|
Diel A, Lewis M. The deviation-from-familiarity effect: Expertise increases uncanniness of deviating exemplars. PLoS One 2022; 17:e0273861. [PMID: 36048801 PMCID: PMC9436138 DOI: 10.1371/journal.pone.0273861] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 08/16/2022] [Indexed: 11/19/2022] Open
Abstract
Humanlike entities deviating from the norm of human appearance are perceived as strange or uncanny. Explanations for the eeriness of deviating humanlike entities include ideas specific to human or animal stimuli, such as mate selection, avoidance of threat or disease, or dehumanization; however, deviation from highly familiar categories may provide a better explanation. Here we test whether experts and novices in a novel (greeble) category show different patterns of abnormality, attractiveness, and uncanniness responses to distorted and averaged greebles. Greeble-trained participants assessed the abnormality, attractiveness, and uncanniness of normal, averaged, and distorted greebles, and their responses were compared to those of participants who had not previously seen greebles. The data show that distorted greebles were more uncanny than normal greebles only in the training condition, and that distorted greebles were more uncanny in the training condition than in the control condition. In addition, averaged greebles were not more attractive than normal greebles regardless of condition. The results suggest that uncanniness is elicited by deviations from stimulus categories of expertise rather than being a purely biological human- or animal-specific response.
Collapse
Affiliation(s)
- Alexander Diel
- School of Psychology, Cardiff University, Cardiff, United Kingdom
- * E-mail:
| | - Michael Lewis
- School of Psychology, Cardiff University, Cardiff, United Kingdom
| |
Collapse
|
47
|
When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY 2022. [DOI: 10.1016/j.jesp.2022.104360] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
48
|
Shape of the Uncanny Valley and Emotional Attitudes Toward Robots Assessed by an Analysis of YouTube Comments. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00905-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
Abstract
The uncanny valley hypothesis (UVH) suggests that almost, but not fully, humanlike artificial characters elicit a feeling of eeriness or discomfort in observers. This study used natural language processing of YouTube comments to provide ecologically valid, non-laboratory results about people’s emotional reactions toward robots. It contains analyses of 224,544 comments from 1,515 videos showing robots from a wide humanlikeness spectrum; the humanlikeness scores were acquired from the Anthropomorphic roBOT (ABOT) database. The analysis showed that people use words related to eeriness to describe very humanlike robots. Humanlikeness was linearly related to both general sentiment and perceptions of eeriness: more humanlike robots elicited more negative emotions. One of the subscales of humanlikeness, Facial Features, showed a UVH-like relationship with both sentiment and eeriness. The exploratory analysis demonstrated that the most suitable words for measuring the self-reported uncanny valley effect are ‘scary’ and ‘creepy’. In contrast to theoretical expectations, the results showed that humanlikeness was not related to either pleasantness or attractiveness. Finally, it was also found that the size of robots influences sentiment toward them; according to the analysis, this is because smaller robots are perceived as more playable (as toys), although the prediction that bigger robots would be perceived as more threatening was not supported.
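As a toy illustration of the comment-level measures described above (sentiment plus eeriness-word counts, related to humanlikeness), the sketch below uses NLTK's VADER sentiment scorer and a tiny assumed eeriness word list; the file, column names, and word list are hypothetical, and the published study used its own lexica and the ABOT humanlikeness scores over roughly 224k comments.

```python
# Toy illustration only: file, column names, and the mini word list are hypothetical.
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

df = pd.read_csv("youtube_comments.csv")  # assumed columns: "comment", "humanlikeness"

eerie_words = {"creepy", "scary", "eerie", "unsettling"}  # assumed mini-lexicon

df["sentiment"] = df["comment"].apply(lambda t: sia.polarity_scores(str(t))["compound"])
df["eeriness"] = df["comment"].apply(
    lambda t: sum(w in eerie_words for w in str(t).lower().split())
)

# Simple check of the linear relationships reported in the abstract.
print(df[["humanlikeness", "sentiment", "eeriness"]].corr().round(2))
```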
Collapse
|
49
|
Diel A, Lewis M. The uncanniness of written text is explained by configural deviation and not by processing disfluency. Perception 2022; 51:3010066221114436. [PMID: 35912496 DOI: 10.1177/03010066221114436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Deviating from human norms in human-looking artificial entities can elicit uncanny sensations, described as the uncanny valley. In three tasks, this study investigates whether configural deviation in written text also increases uncanniness, and whether the uncanniness of text is better explained by perceptual disfluency, especially deviations from specialized categories, or by conceptual disfluency caused by ambiguity. In the first task, lower sentence readability predicted uncanniness, but deviating sentences were more uncanny than typical sentences despite being just as readable. Furthermore, familiarity with a language increased the effect of configural deviation on uncanniness but not the effect of non-configural deviation (blur). In the second and third tasks, semantically ambiguous words and sentences were not uncannier than typical sentences, but deviating, non-ambiguous sentences were. Deviation from categories with specialized processing mechanisms thus fits the observed results better as an explanation of the uncanny valley than ambiguity-based explanations do.
Collapse
|
50
|
Xu Y, Zhang J, Deng G. Enhancing customer satisfaction with chatbots: The influence of communication styles and consumer attachment anxiety. Front Psychol 2022; 13:902782. [PMID: 35936304 PMCID: PMC9355322 DOI: 10.3389/fpsyg.2022.902782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 06/27/2022] [Indexed: 11/24/2022] Open
Abstract
Chatbots are increasingly occupying the online retailing landscape, and the volume of consumer-chatbot service interactions is exploding. Even so, it remains unclear how chatbots should communicate with consumers to ensure positive customer service experiences and, in particular, to improve customer satisfaction. A fundamental decision in this regard is the choice of a communication style, specifically whether chatbots should use a social-oriented or a task-oriented communication style. In this paper, we investigate how using a social-oriented versus a task-oriented communication style can improve customer satisfaction. Two experimental studies reveal that using a social-oriented communication style boosts customer satisfaction; perceived warmth of the chatbot mediates this effect, while consumer attachment anxiety moderates it. Our results indicate that a social-oriented communication style can enhance service satisfaction for customers high in attachment anxiety, but not for those low in attachment anxiety. This study provides theoretical and practical implications for implementing chatbots in service encounters.
Collapse
Affiliation(s)
- Ying Xu
- School of Economics and Management, Southwest University of Science and Technology, Mianyang, China
| | - Jianyu Zhang
- School of Business Administration, Southwestern University of Finance and Economics, Chengdu, China
| | - Guangkuan Deng
- School of Economics and Management, Southwest University of Science and Technology, Mianyang, China
- *Correspondence: Guangkuan Deng,
| |
Collapse
|