1
Meadows R, Hine C. Entanglements of Technologies, Agency and Selfhood: Exploring the Complexity in Attitudes Toward Mental Health Chatbots. Cult Med Psychiatry 2024. PMID: 39153178. DOI: 10.1007/s11013-024-09876-2.
Abstract
Whilst chatbots for mental health are becoming increasingly prevalent, research on user experiences and expectations is relatively scarce and equivocal on their acceptability and utility. This paper asks how people formulate their understandings of what might be appropriate in this space. We draw on data from a group of non-users who have experienced a need for support, and so can imagine the self as a therapeutic target, enabling us to tap into their imaginative speculations of the self in relation to the chatbot other and the forms of agency they see as being at play, unconstrained by any specific actual chatbot. Analysis points towards ambiguity over some key issues: whether the apps were seen as having a role in specific episodes of mental ill health or in relation to an ongoing project of supporting wellbeing; whether the chatbot could be viewed as having therapeutic agency or was a mere tool; and how far these issues related to the user's personal qualities or the specific nature of the mental health condition. A range of traditions, norms and practices were used to construct diverse expectations about whether chatbots could offer cost-effective mental health support at scale.
Affiliation(s)
- Robert Meadows
- Department of Sociology, University of Surrey, Guildford, GU2 7XH, UK.
- Christine Hine
- Department of Sociology, University of Surrey, Guildford, GU2 7XH, UK.
2
Gampe A, Zahner-Ritter K, Müller JJ, Schmid S. How children speak with their voice assistant Sila depends on what they think about her. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107693.
3
Meyer (née Mozafari) N, Schwede M, Hammerschmidt M, Weiger WH. Users taking the blame? How service failure, recovery, and robot design affect user attributions and retention. Electronic Markets 2023;32:2491-2505. PMID: 36691423. PMCID: PMC9849113. DOI: 10.1007/s12525-022-00613-4.
Abstract
Firms use robots to deliver an ever-expanding range of services. However, as service failures are common, service recovery actions are necessary to prevent user churn. This research suggests that firms also need to know how to design service robots that avoid alienating users in case of service failures. Robust evidence across two experiments demonstrates that users attribute successful service outcomes internally, while robot-induced service failures are blamed on the firm (and not the robot), confirming the well-known self-serving bias. While this external attributional shift occurs regardless of the robot design (i.e., it is the same for warm vs. competent robots), the findings imply that service recovery minimizes the undesirable external shift and that this effect is particularly pronounced for warm robots. For practitioners, this implies prioritizing service robots with a warm design to maximize user retention across all types of service outcome (i.e., success, failure, and failure with recovery). For theory, this work demonstrates that attribution represents a meaningful mechanism to explain the proposed relationships. Supplementary information: the online version contains supplementary material available at 10.1007/s12525-022-00613-4.
Affiliation(s)
- Nika Meyer (née Mozafari)
- University of Goettingen, Smart Retail Group, Platz Der Goettinger Sieben 3, 37073 Goettingen, Germany
- Melanie Schwede
- University of Goettingen, Smart Retail Group, Platz Der Goettinger Sieben 3, 37073 Goettingen, Germany
- Maik Hammerschmidt
- University of Goettingen, Smart Retail Group, Platz Der Goettinger Sieben 3, 37073 Goettingen, Germany
4
Kautish P, Khare A. Investigating the moderating role of AI-enabled services on flow and awe experience. International Journal of Information Management 2022. DOI: 10.1016/j.ijinfomgt.2022.102519.
5
Jiang T, Guo Q, Wei Y, Cheng Q, Lu W. Investigating the relationships between dialog patterns and user satisfaction in customer service chat systems based on chat log analysis. J Inf Sci 2022. DOI: 10.1177/01655515221124066.
Abstract
While previous studies of customer service chat systems (CSCS) understood user satisfaction as individuals’ subjective perceptions and depended heavily on self-report methods for satisfaction measurement, this article presents an unobtrusive chat log analysis that followed the established approaches of search log analysis and examined the relationships between dialog patterns and user satisfaction. An 81-day chat log was obtained from a real-world CSCS involving both a chatbot and human representatives. A total of 75,918 chat sessions/147,972 sub-sessions containing 251,556 user messages and 349,416 system messages were extracted after data processing and analysed in terms of topic, length and path. The study found that users were more likely to be satisfied with low-difficulty topics, and that the dialog between the CSCS and users was shallow in general. While human representatives’ elaboration contributed to user satisfaction, the chatbot tended to damage it. The significance of this study lies not only in providing objective evidence about user satisfaction in online chat but also in generating practical implications for CSCS to improve user satisfaction.
Affiliation(s)
- Tingting Jiang
- School of Information Management, Wuhan University, China; Center for Studies of Information Resources, Wuhan University, China
- Qian Guo
- School of Information Management, Wuhan University, China
- Yuhan Wei
- School of Information Management, Wuhan University, China
- Qikai Cheng
- School of Information Management, Wuhan University, China; Center for Studies of Information Resources, Wuhan University, China
- Wei Lu
- School of Information Management, Wuhan University, China; Center for Studies of Information Resources, Wuhan University, China
6
Caldwell S, Sweetser P, O’Donnell N, Knight MJ, Aitchison M, Gedeon T, Johnson D, Brereton M, Gallagher M, Conroy D. An Agile New Research Framework for Hybrid Human-AI Teaming: Trust, Transparency, and Transferability. ACM Transactions on Interactive Intelligent Systems 2022. DOI: 10.1145/3514257.
Abstract
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transferal to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. Our framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. The framework has the potential to enhance decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway by which hybrid teaming is initially tested in a safe environment, such as a real-time strategy video game, from which lessons learned can be transferred to real-world situations.
Affiliation(s)
- Tom Gedeon
- Australian National University, Australia
7
Epstein R, Bordyug M, Chen YH, Chen Y, Ginther A, Kirkish G, Stead H. Toward the search for the perfect blade runner: a large-scale, international assessment of a test that screens for “humanness sensitivity”. AI & Society 2022. DOI: 10.1007/s00146-022-01398-y.
8
Tsai WS, Lun D, Carcioppolo N, Chuan C. Human versus chatbot: Understanding the role of emotion in health marketing communication for vaccines. Psychology & Marketing 2021;38:2377-2392. PMID: 34539051. PMCID: PMC8441681. DOI: 10.1002/mar.21556.
Abstract
Based on the theoretical framework of the agency effect, this study examined the role of affect in influencing the effects of chatbot versus human brand representatives in the context of health marketing communication about HPV vaccines. We conducted a 2 (perceived agency: chatbot vs. human) × 3 (affect elicitation: embarrassment, anger, neutral) between-subjects lab experiment with 142 participants, who were randomly assigned to interact with either a perceived chatbot or a human representative. Key findings from self-reported and behavioral data highlight the complexity of consumer-chatbot communication. Specifically, participants reported lower interaction satisfaction with the chatbot than with the human representative when anger was evoked. However, participants were more likely to disclose concerns about HPV risks and provide more elaborate answers to the perceived human representative when embarrassment was elicited. Overall, the chatbot performed comparably to the human representative in terms of perceived usefulness and influence over participants' compliance intention in all emotional contexts. The findings complement the Computers as Social Actors paradigm and offer strategic guidelines for capitalizing on the relative advantages of chatbot versus human representatives.
Affiliation(s)
- Di Lun
- Department of Communication Studies, University of Miami, Miami, Florida, USA
- Ching-Hua Chuan
- Department of Interactive Media, University of Miami, Miami, Florida, USA
10
Eagle R, Lander R, Hall PD. Questioning ‘what makes us human’: How audiences react to an artificial intelligence-driven show. Cognitive Computation and Systems 2021. DOI: 10.1049/ccs2.12018.
Affiliation(s)
- Rob Eagle
- Digital Cultures Research Centre, Faculty of Arts, Creative Industries and Education (ACE), University of the West of England Bristol, Bristol, UK
- Rik Lander
- Digital Cultures Research Centre, Faculty of Arts, Creative Industries and Education (ACE), University of the West of England Bristol, Bristol, UK
11
Xiao C, Xu L, Sui Y, Zhou R. Do People Regard Robots as Human-Like Social Partners? Evidence From Perspective-Taking in Spatial Descriptions. Front Psychol 2021;11:578244. PMID: 33613351. PMCID: PMC7892441. DOI: 10.3389/fpsyg.2020.578244.
Abstract
Spatial communication is essential to the survival and social interaction of human beings. In science fiction and the near future, robots are expected to understand spatial language in order to collaborate and cooperate with humans. However, it remains unknown whether human speakers regard robots as human-like social partners. In this study, human speakers described target locations to an imaginary human or robot addressee under various scenarios varying in relative speaker–addressee cognitive burden. Speakers made equivalent perspective choices to human and robot addressees, which consistently shifted according to the relative speaker–addressee cognitive burden. However, speakers’ perspective choice was significantly correlated with their social skills only when the addressees were humans, not robots. These results suggest that people generally assume robots and humans have equal capabilities in understanding spatial descriptions but do not regard robots as human-like social partners.
Affiliation(s)
- Chengli Xiao
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Liufei Xu
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Yuqing Sui
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
- Renlai Zhou
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China
12
Swanson EB. Available to meet: advances in professional communications. Information Technology & People 2020. DOI: 10.1108/itp-06-2019-0311.
Abstract
Purpose: This viewpoint paper calls into question the current design approach to personal artificial intelligence (AI) assistance in support of everyday professional communications, where a bot emulates a human in this role. It aims to stimulate fresh thought among designers and users of this technology. It also calls upon scholars to more widely share incidental insights that arise in their own encounters with such new AI.
Design/methodology/approach: The paper employs a case of an email exchange gone wrong to demonstrate the current failings of personal AI assistance in support of professional communications and to yield broader insights into bot design and use. The viewpoint is intended to provoke discussion.
Findings: The case indicates that industrial-strength personal AI assistance is not here yet. Designing a personal AI assistant to emulate a human is found to be deeply problematic, in particular. The case illuminates what might be called the problem of blinded agency, in performative contexts where human, robotic and organizational identities are at least partially masked and actions, inactions and intentions can too easily disappear in a thick fog of digital exchange. The problem arises where parties must act in contexts not known to each other, and where who is responsible for what in a mundane exchange is obscured (intentionally or not) by design or by actions (or inactions) of the parties. An insight is that while humans act with a sense of agency to affect outcomes that naturally invoke a corresponding sense of responsibility for what transpires, bots in social interaction simply act and feign responsibility, as they have no sense of it beyond their code and data. A personal AI assistant is probably best designed to communicate its artificiality clearly. Missing today are distinctive social conventions for identifying machine agency in everyday interactions, as well as an accepted etiquette for AI deployment in these settings.
Originality/value: As a viewpoint contribution, the paper's value is as a stimulant to discussion of alternate approaches to the design and use of personal AI assistance in professional communications and where we should be going with this. The presented case of an email exchange gone wrong is simple on the face of it but reveals in its examination a number of complexities and broader insights.
13
Araujo T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior 2018. DOI: 10.1016/j.chb.2018.03.051.
14
Zarouali B, Van den Broeck E, Walrave M, Poels K. Predicting Consumer Responses to a Chatbot on Facebook. Cyberpsychology, Behavior, and Social Networking 2018;21:491-497. PMID: 30036074. DOI: 10.1089/cyber.2017.0518.
Abstract
As chatbots have become increasingly popular in recent years, most social networking sites have recognized their far-reaching potential for commercial purposes. Their rapid and widespread adoption warrants a better understanding. This study examines the effectiveness of chatbots on Facebook for brands. The study proposes and tests a model based on the Consumer Acceptance of Technology model (CAT model) including three cognitive (i.e., perceived usefulness, perceived ease of use, and perceived helpfulness) and three affective (pleasure, arousal, and dominance; the PAD dimensions) determinants that potentially influence consumers' attitude toward brands providing a chatbot, and hence their likelihood to use and recommend the chatbot (i.e., patronage intention). Structural equation modeling analyses show that two cognitive determinants (i.e., perceived usefulness and perceived helpfulness) and all three affective predictors are positively related to consumers' attitude toward the chatbot brand. The findings further indicate that attitude toward the brand explained a significant amount of variation in consumers' patronage intention. Finally, all the significant determinants also have an indirect effect on patronage intention, mediated through attitude toward the brand. In conclusion, our findings hold valuable practical implications, as well as relevant suggestions for future research.
Affiliation(s)
- Brahim Zarouali
- Department of Communication Studies, University of Antwerp, Antwerp, Belgium
- Michel Walrave
- Department of Communication Studies, University of Antwerp, Antwerp, Belgium
- Karolien Poels
- Department of Communication Studies, University of Antwerp, Antwerp, Belgium
16
Gillespie A, Corti K. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication. Front Psychol 2016;7:1300. PMID: 27660616. PMCID: PMC5015481. DOI: 10.3389/fpsyg.2016.01300.
Abstract
This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication.
Affiliation(s)
- Alex Gillespie
- Department of Social Psychology, London School of Economics and Political Science, London, UK
- Kevin Corti
- Department of Social Psychology, London School of Economics and Political Science, London, UK