1
He L, Basar E, Krahmer E, Wiers R, Antheunis M. Effectiveness and User Experience of a Smoking Cessation Chatbot: Mixed Methods Study Comparing Motivational Interviewing and Confrontational Counseling. J Med Internet Res 2024;26:e53134. [PMID: 39106097] [PMCID: PMC11336496] [DOI: 10.2196/53134]
Abstract
BACKGROUND Cigarette smoking poses a major public health risk. Chatbots may serve as a useful tool to promote cessation because of their high accessibility and their potential to facilitate long-term personalized interactions. To increase effectiveness and acceptability, there remains a need to identify and evaluate counseling strategies for these chatbots, an aspect that has not been comprehensively addressed in previous research. OBJECTIVE This study aims to identify effective counseling strategies for such chatbots to support smoking cessation. In addition, we sought to gain insights into smokers' expectations of and experiences with the chatbot. METHODS This mixed methods study incorporated a web-based experiment and semistructured interviews. Smokers (N=229) interacted with either a motivational interviewing (MI)-style (n=112, 48.9%) or a confrontational counseling-style (n=117, 51.1%) chatbot. Both cessation-related (ie, intention to quit and self-efficacy) and user experience-related outcomes (ie, engagement, therapeutic alliance, perceived empathy, and interaction satisfaction) were assessed. Semistructured interviews were conducted with 16 participants, 8 (50%) from each condition, and data were analyzed using thematic analysis. RESULTS Results from a multivariate ANOVA showed that participants had a significantly higher overall rating for the MI (vs confrontational counseling) chatbot. Follow-up discriminant analysis revealed that the better perception of the MI chatbot was mostly explained by the user experience-related outcomes, with cessation-related outcomes playing a lesser role. Exploratory analyses indicated that smokers in both conditions reported increased intention to quit and self-efficacy after the chatbot interaction. Interview findings illustrated several constructs (eg, affective attitude and engagement) explaining people's prior expectations and their immediate and retrospective experience with the chatbot.
CONCLUSIONS The results confirm that chatbots are a promising tool for motivating smoking cessation and that the use of MI can improve the user experience. We did not find additional evidence that MI better motivates cessation, and we discuss possible reasons. Smokers expressed both relational and instrumental needs in the quitting process. Implications for future research and practice are discussed.
Affiliation(s)
- Linwei He: Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
- Erkan Basar: Behavioral Science Institute, Radboud University, Nijmegen, Netherlands
- Emiel Krahmer: Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
- Reinout Wiers: Addiction Development and Psychopathology (ADAPT)-lab, Department of Psychology and Centre for Urban Mental Health, University of Amsterdam, Amsterdam, Netherlands
- Marjolijn Antheunis: Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
2
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024;26:e56930. [PMID: 39042446] [PMCID: PMC11303905] [DOI: 10.2196/56930]
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. RESULTS The review categorized chatbot roles into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and provision of administrative assistance to health care providers. User groups spanned across patients with chronic conditions as well as patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were also classified into 2 themes: improvement of health care quality and efficiency and cost-effectiveness in health care delivery. 
The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna: Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada; Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster: Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché: Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada; Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada; Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
3
Lee J, Park J, Han HS. Using ChatGPT for Kidney Transplantation: Perceived Information Quality by Race and Education Levels. Clin Transplant 2024;38:e15378. [PMID: 38934705] [DOI: 10.1111/ctr.15378]
Abstract
BACKGROUND Kidney transplantation is a complex process requiring extensive preparation and ongoing monitoring. Artificial intelligence (AI)-powered chatbots hold potential for providing accessible health information, but our understanding of their role in offering health advice for kidney transplantation and how individuals assess such advice remains limited. This study investigates how individuals evaluate ChatGPT's responses to kidney transplantation questions in terms of information quality and empathy, focusing on potential differences across race/ethnicity and educational backgrounds. METHODS We collected Reddit posts (N = 4624) regarding kidney transplantation and selected 86 questions to represent typical clinician inquiries. These questions were used as input prompts for ChatGPT. A total of 565 participants assessed ChatGPT's responses through online surveys, rating information quality and empathy using Likert scales. RESULTS Multilevel analyses (N = 2825) show that there is a significant interaction between race/ethnicity and education levels in various measures related to perceived information quality, but not perceived empathy of ChatGPT's responses: accuracy (p < 0.05); authenticity (p < 0.01); believability (p < 0.05); informativeness (p = 0.053); usefulness (p < 0.05); recognizing users' feelings (p = 0.70) and understanding feelings and situations (p = 0.65). Among non-White individuals, higher education levels predicted higher perceived quality of ChatGPT's responses across all information quality measures. Notably, this trend was reversed for White individuals, where higher education levels led to lower perceived information quality. CONCLUSIONS Our results highlight the importance of developing AI tools sensitive to diverse communication styles and information needs.
Affiliation(s)
- Jihye Lee: Stan Richards School of Advertising and Public Relations, Moody College of Communication, The University of Texas at Austin, Austin, Texas, USA
- Jeeyun Park: Stan Richards School of Advertising and Public Relations, Moody College of Communication, The University of Texas at Austin, Austin, Texas, USA
- Hwarang Stephen Han: Division of Nephrology, Department of Internal Medicine, Dell Medical School, The University of Texas at Austin, Austin, Texas, USA
4
Liu J. ChatGPT: perspectives from human-computer interaction and psychology. Front Artif Intell 2024;7:1418869. [PMID: 38957452] [PMCID: PMC11217544] [DOI: 10.3389/frai.2024.1418869]
Abstract
The release of GPT-4 has garnered widespread attention across various fields, signaling the impending widespread adoption and application of Large Language Models (LLMs). However, previous research has predominantly focused on the technical principles of ChatGPT and its social impact, overlooking its effects on human-computer interaction and user psychology. This paper explores the multifaceted impacts of ChatGPT on human-computer interaction, psychology, and society through a literature review. The author investigates ChatGPT's technical foundation, including its Transformer architecture and RLHF (Reinforcement Learning from Human Feedback) process, enabling it to generate human-like responses. In terms of human-computer interaction, the author studies the significant improvements GPT models bring to conversational interfaces. The analysis extends to psychological impacts, weighing the potential of ChatGPT to mimic human empathy and support learning against the risks of reduced interpersonal connections. In the commercial and social domains, the paper discusses the applications of ChatGPT in customer service and social services, highlighting the improvements in efficiency and challenges such as privacy issues. Finally, the author offers predictions and recommendations for ChatGPT's future development directions and its impact on social relationships.
Affiliation(s)
- Jiaxi Liu: Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore
5
Lee I, Hahn S. On the relationship between mind perception and social support of chatbots. Front Psychol 2024;15:1282036. [PMID: 38510306] [PMCID: PMC10952123] [DOI: 10.3389/fpsyg.2024.1282036]
Abstract
The social support provided by chatbots is typically designed to mimic the way humans support others. However, individuals have more conflicting attitudes toward chatbots providing emotional support (e.g., empathy and encouragement) compared to informational support (e.g., useful information and advice). This difference may be related to whether individuals associate a certain type of support with the realm of the human mind and whether they attribute human-like minds to chatbots. In the present study, we investigated whether perceiving human-like minds in chatbots affects users' acceptance of various support provided by the chatbot. In the experiment, the chatbot posed questions about participants' interpersonal stress events, prompting them to write down their stressful experiences. Depending on the experimental condition, the chatbot provided two kinds of social support: informational support or emotional support. Our results showed that when participants explicitly perceived a human-like mind in the chatbot, they considered the support to be more helpful in resolving stressful events. The relationship between implicit mind perception and perceived message effectiveness differed depending on the type of support. More specifically, if participants did not implicitly attribute a human-like mind to the chatbot, emotional support undermined the effectiveness of the message, whereas informational support did not. The present findings suggest that users' mind perception is essential for understanding the user experience of chatbot social support. Our findings imply that informational support can be trusted when building social support chatbots. In contrast, the effectiveness of emotional support depends on the users implicitly giving the chatbot a human-like mind.
Affiliation(s)
- Sowon Hahn: Human Factors Psychology Lab, Department of Psychology, Seoul National University, Seoul, Republic of Korea
6
Xie Z, Wang Z. Longitudinal Examination of the Relationship Between Virtual Companionship and Social Anxiety: Emotional Expression as a Mediator and Mindfulness as a Moderator. Psychol Res Behav Manag 2024;17:765-782. [PMID: 38434960] [PMCID: PMC10906104] [DOI: 10.2147/prbm.s447487]
Abstract
Purpose As the interweaving of human interaction and Artificial Intelligence (AI) intensifies, understanding the psychological impact, especially regarding social anxiety, of engaging with AI-driven virtual companionship becomes crucial. While a substantial body of research on social anxiety has concentrated on interactions between individuals, both online and offline, there is a noticeable deficit in explorations concerning how human-computer interactions influence social anxiety. This study offers a comprehensive, longitudinal examination of this underinvestigated relationship, intricately dissecting the roles of emotional expression and mindfulness within the context of AI-based interactions. Methods We use social support theory and emotion regulation theory as our theoretical foundation. Data were collected from 618 undergraduate students in Eastern China over two intervals (May 15, 2023 and September 15, 2023). We utilized SPSS 26.0 to conduct descriptive statistics, while AMOS 25.0 facilitated multi-group confirmatory factor analysis (CFA) and the cross-lagged panel modeling. Results Our findings indicate that as the frequency of virtual companionship use increases, there's a decline in online social anxiety but a rise in offline social anxiety. Emotional expression emerges as a significant mediator, with heightened emotional expression leading to reduced social anxiety in both contexts. Mindfulness serves as a potent moderator, suggesting its protective role against the potential pitfalls of frequent virtual interactions. Conclusion This research not only deepens our theoretical understanding of the dynamics between virtual interactions and social anxiety but also serves as a cornerstone for future endeavors aimed at optimizing AI and devising therapeutic interventions tailored for the digital generation.
Affiliation(s)
- Zehang Xie: School of Media and Communication, Shanghai Jiao Tong University, Shanghai, People’s Republic of China
- Zeyu Wang: School of Media and Communication, Shanghai Jiao Tong University, Shanghai, People’s Republic of China
7
Choi DS, Park J, Loeser M, Seo K. Improving counseling effectiveness with virtual counselors through nonverbal compassion involving eye contact, facial mimicry, and head-nodding. Sci Rep 2024;14:506. [PMID: 38177239] [PMCID: PMC10766597] [DOI: 10.1038/s41598-023-51115-y]
Abstract
An effective way to reduce emotional distress is by sharing negative emotions with others. This is why counseling with a virtual counselor is an emerging methodology, where the sharer can consult freely anytime and anywhere without having to fear being judged. To improve counseling effectiveness, most studies so far have focused on designing verbal compassion for virtual counselors. However, recent studies showed that virtual counselors' nonverbal compassion through eye contact, facial mimicry, and head-nodding also have significant impact on the overall counseling experience. To verify this, we designed the virtual counselor's nonverbal compassion and examined its effects on counseling effectiveness (i.e., reduce the intensity of anger and improve general affect). A total of 40 participants were recruited from the university community. Participants were then randomly assigned to one of two virtual counselor conditions: a neutral virtual counselor condition without nonverbal compassion and a compassionate virtual counselor condition with nonverbal compassion (i.e., eye contact, facial mimicry, and head-nodding). Participants shared their anger-inducing episodes with the virtual counselor for an average of 16.30 min. Note that the virtual counselor was operated by the Wizard-of-Oz method without actually being technically implemented. Results showed that counseling with a compassionate virtual counselor reduced the intensity of anger significantly more than counseling with a neutral virtual counselor (F(1, 37) = 30.822, p < 0.001, ηp2 = 0.454). In addition, participants who counseled with a compassionate virtual counselor responded that they experienced higher empathy than those who counseled with a neutral virtual counselor (p < 0.001). 
These findings suggest that nonverbal compassion through eye contact, facial mimicry, and head-nodding of the virtual counselor makes the participants feel more empathy, which contributes to improving the counseling effectiveness by reducing the intensity of anger.
Affiliation(s)
- Doo Sung Choi: Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul, 01811, Korea
- Jongyoul Park: Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul, 01811, Korea
- Martin Loeser: Department of Computer Science, Electrical Engineering and Mechatronics, ZHAW Zurich University of Applied Sciences, Winterthur, Switzerland
- Kyoungwon Seo: Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul, 01811, Korea
8
Armeni P, Polat I, De Rossi LM, Diaferia L, Meregalli S, Gatti A. Exploring the potential of digital therapeutics: An assessment of progress and promise. Digit Health 2024;10:20552076241277441. [PMID: 39291152] [PMCID: PMC11406628] [DOI: 10.1177/20552076241277441]
Abstract
Digital therapeutics (DTx), a burgeoning subset of digital health solutions, has garnered considerable attention in recent times. These cutting-edge therapeutic interventions employ diverse technologies, powered by software algorithms, to treat, manage, and prevent a wide array of diseases and disorders. Although DTx shows significant promise as an integral component of medical care, its widespread integration is still in the preliminary stages. This limited adoption can be largely attributed to the scarcity of comprehensive research that delves into DTx's scope, including its technological underpinnings, potential application areas, and challenges-namely, regulatory hurdles and modest physician uptake. This review aims to bridge this knowledge gap by offering an in-depth overview of DTx products' value to both patients and clinicians. It evaluates the current state of maturity of DTx applications driven by digital technologies and investigates the obstacles that developers and regulators encounter in the market introduction phase.
Affiliation(s)
- Patrizio Armeni: LIFT Lab, CERGAS GHNP Division, SDA Bocconi School of Management, Milano, Italy
- Irem Polat: LIFT Lab, CERGAS GHNP Division, SDA Bocconi School of Management, Milano, Italy
- Leonardo Maria De Rossi: LIFT Lab, CERGAS GHNP Division, and DEVO Lab, Claudio Demattè Research Division, SDA Bocconi School of Management, Milano, Italy
- Lorenzo Diaferia: LIFT Lab, CERGAS GHNP Division, and DEVO Lab, Claudio Demattè Research Division, SDA Bocconi School of Management, Milano, Italy
- Severino Meregalli: LIFT Lab, CERGAS GHNP Division, and DEVO Lab, Claudio Demattè Research Division, SDA Bocconi School of Management, Milano, Italy
- Anna Gatti: LIFT Lab, CERGAS GHNP Division, SDA Bocconi School of Management, Milano, Italy
9
Morady Moghaddam M, Tommerdahl J. 'I Hope You Can Rise Again': Linguistic Variation in Online Condolences. J Psycholinguist Res 2023;52:2793-2809. [PMID: 37773426] [DOI: 10.1007/s10936-023-10020-1]
Abstract
In the wake of tragic events such as the 'Paris Attacks of 2015', the expression of condolences through e-messages has become a common way for individuals to offer support and sympathy to those affected. However, limited research has been conducted on the linguistic aspects of condolence e-messages and how they reflect the speech act of condolence. This study aims to fill this gap by examining how the syntactic and pragmatic elements of these messages contribute to the expression of the communicative speech act of condolence. Condolence e-messages were identified and analyzed using an adapted version of Elwood's (2004) coding scheme. The analysis focused on common themes in the condolence sentences, revealing that some linguistic functions were overtly used to express grief. Additionally, specific words such as 'pray', 'love', and 'condolence' were frequently used in conjunction with the expressions of condolence. The findings highlight the influence of sociocultural factors in shaping the norms and variations in the production of speech acts across different cultures. Understanding these linguistic variations can contribute to effective communication and cultural sensitivity in expressing condolences.
10
Kang A, Hetrick S, Cargo T, Hopkins S, Ludin N, Bodmer S, Stevenson K, Holt-Quick C, Stasiak K. Exploring Young Adults' Views About Aroha, a Chatbot for Stress Associated With the COVID-19 Pandemic: Interview Study Among Students. JMIR Form Res 2023;7:e44556. [PMID: 37527545] [PMCID: PMC10574714] [DOI: 10.2196/44556]
Abstract
BACKGROUND In March 2020, New Zealand was plunged into its first nationwide lockdown to halt the spread of COVID-19. Our team rapidly adapted our existing chatbot platform to create Aroha, a well-being chatbot intended to address the stress experienced by young people aged 13 to 24 years in the early phase of the pandemic. Aroha was made available nationally within 2 weeks of the lockdown and continued to be available throughout 2020. OBJECTIVE In this study, we aimed to evaluate the acceptability and relevance of the chatbot format and Aroha's content in young adults and to identify areas for improvement. METHODS We conducted qualitative in-depth and semistructured interviews with young adults as well as in situ demonstrations of Aroha to elicit immediate feedback. Interviews were recorded, transcribed, and analyzed using thematic analysis assisted by NVivo (version 12; QSR International). RESULTS A total of 15 young adults (age in years: median 20; mean 20.07, SD 3.17; female students: n=13, 87%; male students: n=2, 13%; all tertiary students) were interviewed in person. Participants spoke of the challenges of living during the lockdown, including social isolation, loss of motivation, and the demands of remote work or study, although some were able to find silver linings. Aroha was well liked for sounding like a "real person" and peer with its friendly local "Kiwi" communication style, rather than an authoritative adult or counselor. The chatbot was praised for including content that went beyond traditional mental health advice. Participants particularly enjoyed the modules on gratitude, being active, anger management, job seeking, and how to deal with alcohol and drugs. Aroha was described as being more accessible than traditional mental health counseling and resources. It was an appealing option for those who did not want to talk to someone in person for fear of the stigma associated with mental health. However, participants disliked the software bugs. 
They also wanted a more sophisticated conversational interface where they could express themselves and "vent" in free text. There were several suggestions for making Aroha more relevant to a diverse range of users, including developing content on navigating relationships and diverse chatbot avatars. CONCLUSIONS Chatbots are an acceptable format for scaling up the delivery of public mental health and well-being-enhancing strategies. We make the following recommendations for others interested in designing and rolling out mental health chatbots to better support young people: make the chatbot relatable to its target audience by working with them to develop an authentic and relevant communication style; consider including holistic health and lifestyle content beyond traditional "mental health" support; and focus on developing features that make users feel heard, understood, and empowered.
Affiliation(s)
- Annie Kang: Faculty of Arts, University of Auckland, Auckland, New Zealand
- Sarah Hetrick: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Tania Cargo: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Sarah Hopkins: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Nicola Ludin: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Sarah Bodmer: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Kiani Stevenson: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Karolina Stasiak: Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
11
Orden-Mejía M, Carvache-Franco M, Huertas A, Carvache-Franco O, Carvache-Franco W. Modeling users' satisfaction and visit intention using AI-based chatbots. PLoS One 2023;18:e0286427. [PMID: 37682931] [PMCID: PMC10490898] [DOI: 10.1371/journal.pone.0286427]
Abstract
AI-based chatbots are an emerging technology disrupting the tourism industry. Although chatbots have received increasing attention, there is little evidence of their impact on tourists' decisions to visit a destination. This study evaluates the key attributes of chatbots and their effects on user satisfaction and visit intention. We use structural equation modeling with covariance procedures to test the proposed model and its hypotheses. The results showed that informativeness, empathy, and interactivity are critical attributes for satisfaction, which drive tourists' intention to visit a destination.
Affiliation(s)
- Miguel Orden-Mejía
- Facultat de Turisme i Geografia, Universitat Rovira I Virgili, Vila-seca, Spain
- Assumpció Huertas
- Department of Communication, Universitat Rovira I Virgili, Tarragona, Spain
- Orly Carvache-Franco
- Facultad de Econonía y Empresa, Universidad Católica de Santiago de Guayaquil, Guayaquil, Ecuador
- Wilmer Carvache-Franco
- Facultad de Ciencias Sociales y Humanísticas, Escuela Superior Politécnica del Litoral, ESPOL, Guayaquil, Ecuador
12
Abonizio HQ, Barbon APADC, Rodrigues R, Santos M, Martínez-Vizcaíno V, Mesas AE, Barbon Junior S. How people interact with a chatbot against disinformation and fake news in COVID-19 in Brazil: The CoronaAI case. Int J Med Inform 2023; 177:105134. PMID: 37369153; PMCID: PMC10289820; DOI: 10.1016/j.ijmedinf.2023.105134. Citation(s) in RCA: 0; Impact Index Per Article: 0.
Abstract
BACKGROUND The search for valid information was one of the main challenges encountered during the COVID-19 pandemic, which resulted in the development of several online alternatives. OBJECTIVES To describe the development of a computational solution to interact with users of different levels of digital literacy on topics related to COVID-19 and to map the correlations between user behavior and events and news that occurred throughout the pandemic. METHOD CoronaAI, a chatbot based on Google's Dialogflow technology, was developed at a public university in Brazil and made available on WhatsApp. The dataset with users' interactions with the chatbot comprises approximately 7,000 hits recorded throughout eleven months of CoronaAI usage. RESULTS CoronaAI was widely accessed by users in search of valuable and updated information on COVID-19, including checking the veracity of possible fake news about the spread of cases, deaths, symptoms, tests and protocols, among others. The mapping of users' behavior revealed that as the number of cases and deaths increased and as COVID-19 drew closer, users showed a greater need for information applicable to self-care compared to following the statistical data. In addition, the results showed that the constant updating of this technology may contribute to public health by enhancing general information on the pandemic and at the individual level by clarifying specific doubts about COVID-19. CONCLUSION Our findings reinforce the potential usefulness of chatbot technology to resolve a wide spectrum of citizens' doubts about COVID-19, acting as a cost-effective tool against the parallel pandemic of misinformation and fake news.
Affiliation(s)
- Hugo Queiroz Abonizio
- Department of Computer Science, Universidade Estadual de Londrina (UEL), Londrina, Brazil.
- Renne Rodrigues
- Department of Public Health, Universidade Estadual de Londrina, Londrina, Brazil.
- Mayara Santos
- Department of Public Health, Universidade Estadual de Londrina, Londrina, Brazil.
- Vicente Martínez-Vizcaíno
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain; Facultad de Ciencias de la Salud, Universidad Autónoma de Chile, Talca, Chile.
- Arthur Eumann Mesas
- Health and Social Research Center, Universidad de Castilla-La Mancha, Cuenca, Spain.
- Sylvio Barbon Junior
- Dipartimento di Ingegneria e Architettura, Università degli studi di Trieste, Trieste, Italy.
13
Chow JCL, Wong V, Sanders L, Li K. Developing an AI-Assisted Educational Chatbot for Radiotherapy Using the IBM Watson Assistant Platform. Healthcare (Basel) 2023; 11:2417. PMID: 37685452; PMCID: PMC10487627; DOI: 10.3390/healthcare11172417. Citation(s) in RCA: 5; Impact Index Per Article: 5.0.
Abstract
Objectives: This study aims to make radiotherapy knowledge regarding healthcare accessible to the general public by developing an AI-powered chatbot. The interactive nature of the chatbot is expected to facilitate better understanding of information on radiotherapy through communication with users. Methods: Using the IBM Watson Assistant platform on IBM Cloud, the chatbot was constructed following a pre-designed flowchart that outlines the conversation flow. This approach ensured the development of the chatbot with a clear mindset and allowed for effective tracking of the conversation. The chatbot is equipped to furnish users with information and quizzes on radiotherapy to assess their understanding of the subject. Results: By adopting a question-and-answer approach, the chatbot can engage in human-like communication with users seeking information about radiotherapy. As some users may feel anxious and struggle to articulate their queries, the chatbot is designed to be user-friendly and reassuring, providing a list of questions for the user to choose from. Feedback on the chatbot's content was mostly positive, despite a few limitations. The chatbot performed well and successfully conveyed knowledge as intended. Conclusions: There is a need to enhance the chatbot's conversation approach to improve user interaction. Including translation capabilities to cater to individuals with different first languages would also be advantageous. Lastly, the newly launched ChatGPT could potentially be developed into a medical chatbot to facilitate knowledge transfer.
Affiliation(s)
- James C. L. Chow
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON M5G 1X6, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Valerie Wong
- Department of Physics, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- Leslie Sanders
- Department of Humanities, York University, Toronto, ON M3J 1P3, Canada
- Kay Li
- Department of English, University of Toronto, Toronto, ON M5R 2M8, Canada
14
Pauw LS. Support provision in a digitalized world: The consequences of social sharing across different communication channels. Curr Opin Psychol 2023; 52:101597. PMID: 37329648; DOI: 10.1016/j.copsyc.2023.101597. Citation(s) in RCA: 0; Impact Index Per Article: 0.
Abstract
People tend to share their emotional experiences with others, with sharing increasingly done online. This raises questions about the quality of computer-mediated vs. face-to-face sharing. The present review (1) outlines conditions for sharing to improve emotional and relational well-being, (2) discusses when computer-mediated sharing with other people may (not) be conducive to these conditions, and (3) reviews recent research on the effectiveness of computer-mediated sharing with humans and virtual agents. It is concluded that the emotional and relational consequences of sharing are dependent on the responsiveness of the listener, regardless of the communication channel. Differences exist, however, in the extent to which channels are conducive to various forms of responding, with implications for speakers' emotional and relational well-being.
15
Massa P, de Souza Ferraz DA, Magno L, Silva AP, Greco M, Dourado I, Grangeiro A. A Transgender Chatbot (Amanda Selfie) to Create Pre-exposure Prophylaxis Demand Among Adolescents in Brazil: Assessment of Acceptability, Functionality, Usability, and Results. J Med Internet Res 2023; 25:e41881. PMID: 37351920; PMCID: PMC10337301; DOI: 10.2196/41881. Citation(s) in RCA: 5; Impact Index Per Article: 5.0.
Abstract
BACKGROUND HIV incidence rates have increased in adolescent men who have sex with men (AMSM) and adolescent transgender women (ATGW). Thus, it is essential to promote access to HIV prevention, including pre-exposure prophylaxis (PrEP), among these groups. Moreover, artificial intelligence and online social platforms are essential tools for creating demand for, and access to, health care services among adolescents and youth. OBJECTIVE This study aims to describe the participative process of developing a chatbot using artificial intelligence to create demand for PrEP use among AMSM and ATGW in Brazil. Furthermore, it analyzes the chatbot's acceptability, functionality, and usability and its results on the demand creation for PrEP. METHODS The chatbot Amanda Selfie integrates the demand creation strategies based on social networks (DCSSNs) of the PrEP1519 study. She was conceived as a Black transgender woman and designed to function as a virtual peer educator. The development process occurred in 3 phases (conception, trial, and final version) and lasted 21 months. A mixed methodology was used for the evaluations. Qualitative approaches, such as in-depth adolescent interviews, were used to analyze acceptability and usability, while quantitative methods were used to analyze the functionality and result of the demand creation for PrEP based on interactions with Amanda and information from health care services about using PrEP. To evaluate Amanda's result on the demand creation for PrEP, we analyzed sociodemographic profiles of adolescents who interacted at least once with her and developed a cascade model containing the number of people at various stages between the first interaction and initiation of PrEP (PrEP uptake). These indicators were compared with the other DCSSNs developed in the PrEP1519 study using chi-square tests and residual analysis (P=.05).
RESULTS Amanda Selfie was well accepted as a peer educator, clearly and objectively communicating on topics such as gender identity, sexual experiences, HIV, and PrEP. The chatbot proved appropriate for answering questions in an agile and confidential manner, using the language used by AMSM and ATGW and with a greater sense of security and less judgment. The interactions with Amanda Selfie combined with a health professional were well evaluated and improved the appointment scheduling. The chatbot interacted with most people (757/1239, 61.1%) reached by the DCSSNs. However, when compared with the other DCSSNs, Amanda was not efficient in identifying AMSM/ATGW (359/482, 74.5% vs 130/757, 17.2% of total interactions, respectively) and in PrEP uptake (90/359, 25.1% vs 19/130, 14.6%). The following profiles were associated (P<.001) with Amanda Selfie's demand creation, when compared with the other DCSSNs: ATGW and adolescents with higher levels of schooling and White skin color. CONCLUSIONS Using a chatbot to create PrEP demand among AMSM and ATGW was well accepted, especially for ATGW with higher levels of schooling. A complementary dialog with a health professional increased PrEP uptake, although it remained lower than the results of the other DCSSNs.
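The uptake contrast reported in this abstract can be sanity-checked with a standard Pearson chi-square test on the 2x2 table of uptake counts. The sketch below is illustrative only: the counts (90/359 and 19/130) are taken from the abstract, and the study's own analysis may have used different software or corrections.

```python
import math

# Uptake counts from the abstract (illustrative recomputation only)
a, b = 90, 359 - 90    # other demand-creation strategies: uptake vs no uptake
c, d = 19, 130 - 19    # Amanda Selfie (chatbot): uptake vs no uptake
n = a + b + c + d

# Pearson chi-square statistic for a 2x2 table (no continuity correction)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# For 1 degree of freedom, the p-value (survival function of the chi-square
# distribution) reduces to erfc(sqrt(chi2 / 2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

With these counts the statistic is about 6.0 (p ≈ 0.014), consistent with the significantly lower uptake reported for the chatbot relative to the other strategies.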
Affiliation(s)
- Paula Massa
- Faculdade de Medicina Preventiva, Universidade de São Paulo, São Paulo, Brazil
- Dulce Aurélia de Souza Ferraz
- Unité Mixte de Recherche 1296 Radiations: défense, santé et environnements, Lyon 2 University, Lyon, France
- Escola de Governo em Saúde, Gerencia Regional Brasília, Fundação Oswaldo Cruz, Brasília, Brazil
- Laio Magno
- Instituto de Saúde Coletiva, Universidade Federal da Bahia, Salvador, Brazil
- Departamento de Ciências da Vida, Universidade do Estado da Bahia, Salvador, Brazil
- Ana Paula Silva
- Faculdade de Medicina, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Marília Greco
- Faculdade de Medicina, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Inês Dourado
- Instituto de Saúde Coletiva, Universidade Federal da Bahia, Salvador, Brazil
- Alexandre Grangeiro
- Faculdade de Medicina Preventiva, Universidade de São Paulo, São Paulo, Brazil
16
Choi TR, Choi JH. You Are Not Alone: A Serial Mediation of Social Attraction, Privacy Concerns, and Satisfaction in Voice AI Use. Behav Sci (Basel) 2023; 13:bs13050431. PMID: 37232668; DOI: 10.3390/bs13050431. Citation(s) in RCA: 1; Impact Index Per Article: 1.0.
Abstract
The popularity of voice-activated artificial intelligence (voice AI) has grown rapidly as people continue to use smart speakers such as Amazon Alexa and Google Home to support everyday tasks. However, little is known about how loneliness relates to voice AI use, or the potential mediators in this association. This study investigates the mediating roles of users' perceptions (i.e., social attraction, privacy concerns, and satisfaction) in the relationship between users' social loneliness and intentions to continue using voice AI. A serial mediation model based on survey data from current voice AI users showed that users' perceptions were positively associated with behavioral intentions. Several full serial mediations were observed: people who felt lonely (1) perceived voice AI as a more socially attractive agent and (2) had fewer privacy concerns. These aspects were each tied to satisfaction and subsequent usage intention. Theoretical and practical implications are discussed.
Affiliation(s)
- Tae Rang Choi
- Department of Strategic Communication, Texas Christian University, Fort Worth, TX 76109, USA
- Jung Hwa Choi
- Department of Communication, University of South Alabama, Mobile, AL 36688, USA
17
Understanding AI-based customer service resistance: A perspective of defective AI features and tri-dimensional distrusting beliefs. Inf Process Manag 2023. DOI: 10.1016/j.ipm.2022.103257. Citation(s) in RCA: 0; Impact Index Per Article: 0.
18
Can chatbots satisfy me? A mixed-method comparative study of satisfaction with task-oriented chatbots in mainland China and Hong Kong. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107716. Citation(s) in RCA: 0; Impact Index Per Article: 0.
19
Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. PMID: 36733854; PMCID: PMC9887144; DOI: 10.3389/fpsyg.2022.971044. Citation(s) in RCA: 21; Impact Index Per Article: 21.0.
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is underconceptualized and underexplored. Objectives The aim of this scoping review is to provide comprehensive depth and a balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following five steps of Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were Web of Science and PubMed databases, articles published in English language 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. 
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Affiliation(s)
- Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
- Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
- Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
- Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
- Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
20
Zhou Q, Li B, Han L, Jou M. Talking to a bot or a wall? How chatbots vs. human agents affect anticipated communication quality. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107674. Citation(s) in RCA: 0; Impact Index Per Article: 0.
21
Kukafka R, Zhao L. Designing Emotions for Health Care Chatbots: Text-Based or Icon-Based Approach. J Med Internet Res 2022; 24:e39573. PMID: 36454078; PMCID: PMC9782388; DOI: 10.2196/39573. Citation(s) in RCA: 0; Impact Index Per Article: 0.
Affiliation(s)
- Luming Zhao
- School of Journalism, Fudan University, Shanghai, China
22
A systematic review on cross-culture, humor and empathy dimensions in conversational chatbots: the case of second language acquisition. Heliyon 2022; 8:e12056. DOI: 10.1016/j.heliyon.2022.e12056. Citation(s) in RCA: 0; Impact Index Per Article: 0.
23
Park G, Chung J, Lee S. Effect of AI chatbot emotional disclosure on user satisfaction and reuse intention for mental health counseling: a serial mediation model. Current Psychology 2022; 42:1-11. PMID: 36406852; PMCID: PMC9643933; DOI: 10.1007/s12144-022-03932-z. Citation(s) in RCA: 2; Impact Index Per Article: 1.0.
Abstract
This study explored the effect of chatbot emotional disclosure on user satisfaction and reuse intention for a chatbot counseling service. It also examined the independent and sequential mediation roles of user emotional disclosure intention and perceived intimacy with a chatbot on the relationship between chatbot emotional disclosure, user satisfaction, and reuse intention for chatbot counseling. In total, 348 American adults were recruited to participate in a mental health counseling session with either of two types of artificial intelligence-powered mental health counseling chatbots: a chatbot disclosing factual information only or a chatbot disclosing humanlike emotions. The results revealed that chatbot emotional disclosure significantly increased user satisfaction and reuse intention for a chatbot counseling service. The results further revealed that user emotional disclosure intention and perceived intimacy with a chatbot independently and serially mediate the effect of chatbot emotional disclosure on user satisfaction and chatbot counseling service reuse intention. The results indicate positive effects of artificial emotions and their disclosure in the context of chatbot-moderated mental health counseling. Practical implications and psychological mechanisms are discussed.
Affiliation(s)
- Gain Park
- Department of Journalism and Media Studies, New Mexico State University, 2915 McFie Circle, Milton Hall 158, Las Cruces, NM 88003 USA
- Jiyun Chung
- Convergence and Open Sharing System-Artificial Intelligence, Sungkyunkwan University, 25-2 Sungkyunkwan-Ro, 50212 Hoam Hall, Jongno-Gu, Seoul, South Korea 03063
- Seyoung Lee
- Department of Media and Communication, Sungkyunkwan University, 25-2, Sungkyunkwan-Ro, 50505 Hoam Hall, Jongno-Gu, Seoul, South Korea 03063
24
Liu YL, Yan W, Hu B, Li Z, Lai YL. Effects of personalization and source expertise on users' health beliefs and usage intention toward health chatbots: Evidence from an online experiment. Digit Health 2022; 8:20552076221129718. PMID: 36211799; PMCID: PMC9536110; DOI: 10.1177/20552076221129718. Citation(s) in RCA: 0; Impact Index Per Article: 0.
Abstract
Objective Based on the heuristic–systematic model (HSM) and health belief model (HBM), this study aims to investigate how personalization and source expertise in responses from a health chatbot influence users’ health belief-related factors (i.e. perceived benefits, self-efficacy and privacy concerns) as well as usage intention. Methods A 2 (personalization vs. non-personalization) × 2 (source expertise vs. non-source expertise) online between-subject experiment was designed. Participants were recruited in China between April and May 2021. Data from 260 valid observations were used for the data analysis. Results Source expertise moderated the effects of personalization on health belief factors. Perceived benefits and self-efficacy mediated the relationship between personalization and usage intention when the source expertise cue was presented. However, the privacy concerns were not influenced by personalization and source expertise and did not significantly affect usage intention toward the health chatbot. Discussion This study verified that in the health chatbot context, source expertise as a heuristic cue may be a necessary condition for effects of the systematic cue (i.e. personalization), which supports the HSM's arguments. By introducing the HBM in the chatbot experiment, this study is expected to provide new insights into the acceptance of healthcare AI consulting services.
Affiliation(s)
- Bo Hu
- Department of Media and Communication, City University of Hong Kong, Run Run Shaw Creative Media Centre, 18 Tat Hong Avenue, Kowloon Tong, Hong Kong, China.
25
Chang IC, Shih YS, Kuo KM. Why would you use medical chatbots? interview and survey. Int J Med Inform 2022; 165:104827. DOI: 10.1016/j.ijmedinf.2022.104827. Citation(s) in RCA: 0; Impact Index Per Article: 0.
26
Jin E, Eastin MS. When a Chatbot Smiles at You: The Psychological Mechanism Underlying the Effects of Friendly Language Use by Product Recommendation Chatbots. Cyberpsychology, Behavior and Social Networking 2022; 25:597-604. PMID: 35976080; DOI: 10.1089/cyber.2021.0318. Citation(s) in RCA: 0; Impact Index Per Article: 0.
Abstract
Based on the computers are social actors theory and social presence theory, the current study investigates the psychological mechanism by which the use of friendly language by a personalized product recommendation chatbot influences product attitudes. Results indicated that the effect of the friendly chatbot on more positive product attitudes was sequentially mediated by social presence and user satisfaction. Previous experience with product recommendation chatbots was found to moderate the serial mediation effects. Furthermore, the current study found that a friendly chatbot led to higher rates of contact information disclosure by consumers. Theoretical and practical implications are discussed.
Affiliation(s)
- Eunjoo Jin
- Stan Richards School of Advertising & Public Relations, Moody College of Communication, The University of Texas at Austin, Austin, Texas, USA
- Matthew S Eastin
- Stan Richards School of Advertising & Public Relations, Moody College of Communication, The University of Texas at Austin, Austin, Texas, USA
27
Jiang Q, Zhang Y, Pian W. Chatbot as an emergency exist: Mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic. Inf Process Manag 2022; 59:103074. PMID: 36059428; PMCID: PMC9428597; DOI: 10.1016/j.ipm.2022.103074. Citation(s) in RCA: 1; Impact Index Per Article: 0.5.
Abstract
As a global health crisis, the COVID-19 pandemic has also made heavy mental and emotional tolls a shared experience of global communities, especially among females, who were affected more by the pandemic than males in terms of anxiety and depression. By connecting multiple facets of empathy as key mechanisms of information processing with the communication theory of resilience, the present study examines human-AI interactions during the COVID-19 pandemic in order to understand digitally mediated empathy and how the intertwining of empathic and communicative processes of resilience works as coping strategies for COVID-19 disruption. Mixed methods were adopted to explore the user experiences and effects of Replika, a chatbot companion powered by AI, with ethnographic research, in-depth interviews, and grounded theory-based analysis. Findings of this research extend empathy theories from interpersonal communication to human-AI interactions and show five types of digitally mediated empathy among Chinese female Replika users with varying degrees of cognitive empathy, affective empathy, and empathic response involved in the information processing processes, i.e., companion buddy, responsive diary, emotion-handling program, electronic pet, and tool for venting. When processing information obtained from AI and collaborative interactions with the AI chatbot, multiple facets of mediated empathy become unexpected pathways to resilience and enhance users' well-being. This study fills the research gap by exploring empathy and resilience processes in human-AI interactions. Practical implications, especially for increasing individuals' psychological resilience as an important component of global recovery from the pandemic, suggestions for future chatbot design, and future research directions are also discussed.
Affiliation(s)
- Qiaolei Jiang, School of Journalism and Communication, Tsinghua University, Beijing 100084, China
- Yadi Zhang, School of Journalism and Communication, Tsinghua University, Beijing 100084, China
- Wenjing Pian, School of Economics and Management, Fuzhou University, Xueyuan Road, Qishan Campus, Fuzhou 350116, China; Center for Studies of Information Resources, Wuhan University, 299 Bayi Road, Wuhan City 430072, China

28
The avatar will see you now: Support from a virtual human provides socio-emotional benefits. Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107368]

29
Yun J, Park J. The Effects of Chatbot Service Recovery With Emotion Words on Customer Satisfaction, Repurchase Intention, and Positive Word-Of-Mouth. Front Psychol 2022; 13:922503. [PMID: 35712132 PMCID: PMC9194808 DOI: 10.3389/fpsyg.2022.922503]
Abstract
This study sought to examine the effect of the quality of chatbot services on customer satisfaction, repurchase intention, and positive word-of-mouth by comparing two groups: chatbots with and without emotion words. An online survey was conducted for 2 weeks in May 2021. A total of 380 responses were collected and analyzed using structural equation modeling to test the hypotheses. The theoretical basis of the study was SERVQUAL, a framework widely used for measuring and managing service quality across industries. The results showed that the assurance and reliability of chatbots positively impact customer satisfaction in both groups. However, empathy and interactivity positively affect customer satisfaction only for chatbots with emotion words. Responsiveness did not affect customer satisfaction in either group. Customer satisfaction positively impacts repurchase intention and positive word-of-mouth in both groups. The findings can serve as prior research empirically demonstrating the effectiveness of chatbots with emotion words.
30
Eagle T, Blau C, Bales S, Desai N, Li V, Whittaker AS. “I don't Know what you Mean by ‘I am Anxious’”: A New Method for Evaluating Conversational Agent Responses to Standardized Mental Health Inputs for Anxiety and Depression. ACM Trans Interact Intell Syst 2022. [DOI: 10.1145/3488057]
Abstract
Conversational agents (CAs) are increasingly ubiquitous and are now commonly used to access medical information. However, we lack systematic data about the quality of advice such agents provide. This paper evaluates CA advice for mental health (MH) questions, a pressing issue given that we are undergoing a mental health crisis. Building on prior work, we define a new method to systematically evaluate mental health responses from CAs. We develop multi-utterance conversational probes derived from two widely used mental health diagnostic surveys, the PHQ-9 (Depression) and the GAD-7 (Anxiety). We evaluate the responses of two text-based chatbots and four voice assistants to determine whether CAs provide relevant responses and treatments. Evaluations were conducted both by clinicians and immersively by trained raters, yielding consistent results across all raters. Although advice and recommendations were generally low quality, they were better for Crisis probes and for probes concerning symptoms of Anxiety rather than Depression. Responses were slightly improved for text versus speech-based agents, and when CAs had access to extended dialogue context. Design implications include suggestions for improved responses through clarification sub-dialogues. Responses may also be improved by the incorporation of empathy although this needs to be combined with effective treatments or advice.
Affiliation(s)
- Victor Li, University of California, Santa Cruz

31
Nißen M, Rüegger D, Stieger M, Flückiger C, Allemand M, V Wangenheim F, Kowatsch T. The Effects of Health Care Chatbot Personas With Different Social Roles on the Client-Chatbot Bond and Usage Intentions: Development of a Design Codebook and Web-Based Study. J Med Internet Res 2022; 24:e32630. [PMID: 35475761 PMCID: PMC9096656 DOI: 10.2196/32630]
Abstract
BACKGROUND The working alliance refers to an important relationship quality between health professionals and clients that robustly links to treatment success. Recent research shows that clients can develop an affective bond with chatbots. However, few research studies have investigated whether this perceived relationship is affected by the social roles of differing closeness a chatbot can impersonate and by allowing users to choose the social role of a chatbot. OBJECTIVE This study aimed at understanding how the social role of a chatbot can be expressed using a set of interpersonal closeness cues and examining how these social roles affect clients' experiences and the development of an affective bond with the chatbot, depending on clients' characteristics (ie, age and gender) and whether they can freely choose a chatbot's social role. METHODS Informed by the social role theory and the social response theory, we developed a design codebook for chatbots with different social roles along an interpersonal closeness continuum. Based on this codebook, we manipulated a fictitious health care chatbot to impersonate one of four distinct social roles common in health care settings-institution, expert, peer, and dialogical self-and examined effects on perceived affective bond and usage intentions in a web-based lab study. The study included a total of 251 participants, whose mean age was 41.15 (SD 13.87) years; 57.0% (143/251) of the participants were female. Participants were either randomly assigned to one of the chatbot conditions (no choice: n=202, 80.5%) or could freely choose to interact with one of these chatbot personas (free choice: n=49, 19.5%). Separate multivariate analyses of variance were performed to analyze differences (1) between the chatbot personas within the no-choice group and (2) between the no-choice and the free-choice groups. 
RESULTS While the main effect of the chatbot persona on affective bond and usage intentions was insignificant (P=.87), we found differences based on participants' demographic profiles: main effects for gender (P=.04, ηp2=0.115) and age (P<.001, ηp2=0.192) and a significant interaction effect of persona and age (P=.01, ηp2=0.102). Participants younger than 40 years reported higher scores for affective bond and usage intentions for the interpersonally more distant expert and institution chatbots; participants 40 years or older reported higher outcomes for the closer peer and dialogical-self chatbots. The option to freely choose a persona significantly benefited perceptions of the peer chatbot further (eg, free-choice group affective bond: mean 5.28, SD 0.89; no-choice group affective bond: mean 4.54, SD 1.10; P=.003, ηp2=0.117). CONCLUSIONS Manipulating a chatbot's social role is a possible avenue for health care chatbot designers to tailor clients' chatbot experiences using user-specific demographic factors and to improve clients' perceptions and behavioral intentions toward the chatbot. Our results also emphasize the benefits of letting clients freely choose between chatbots.
Affiliation(s)
- Marcia Nißen, Centre for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Dominik Rüegger, Centre for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Pathmate Technologies AG, Zurich, Switzerland
- Mirjam Stieger, Department of Psychology, Brandeis University, Waltham, MA, United States; Institute of Communication and Marketing, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Mathias Allemand, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Programs, Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Florian V Wangenheim, Centre for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Tobias Kowatsch, Centre for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Centre for Digital Health Interventions, Institute of Technology Management, University of St.Gallen, St.Gallen, Switzerland

32
Stein JP, Cimander P, Appel M. Power-Posing Robots: The Influence of a Humanoid Robot’s Posture and Size on its Perceived Dominance, Competence, Eeriness, and Threat. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00878-x]
Abstract
When interacting with sophisticated digital technologies, people often fall back on the same interaction scripts they apply to the communication with other humans—especially if the technology in question provides strong anthropomorphic cues (e.g., a human-like embodiment). Accordingly, research indicates that observers tend to interpret the body language of social robots in the same way as they would with another human being. Backed by initial evidence, we assumed that a humanoid robot will be considered as more dominant and competent, but also as more eerie and threatening once it strikes a so-called power pose. Moreover, we pursued the research question whether these effects might be accentuated by the robot’s body size. To this end, the current study presented 204 participants with pictures of the robot NAO in different poses (expansive vs. constrictive), while also manipulating its height (child-sized vs. adult-sized). Our results show that NAO’s posture indeed exerted strong effects on perceptions of dominance and competence. Conversely, participants’ threat and eeriness ratings remained statistically independent of the robot’s depicted body language. Further, we found that the machine’s size did not affect any of the measured interpersonal perceptions in a notable way. The study findings are discussed considering limitations and future research directions.
33
Smriti D, Kao TSA, Rathod R, Shin JY, Peng W, Williams J, Mujib MI, Colosimo M, Huh-Yoo J. MICA: Motivational Interviewing Conversational Agent for Parents as Proxies for their Children in Healthy Eating (Preprint). JMIR Hum Factors 2022; 9:e38908. [PMID: 36206036 PMCID: PMC9587490 DOI: 10.2196/38908]
Abstract
Background Increased adoption of off-the-shelf conversational agents (CAs) brings opportunities to integrate therapeutic interventions. Integrating Motivational Interviewing (MI) into CAs can provide cost-effective access to such interventions. MI can be especially beneficial for parents, who often have low motivation because of limited time and resources to eat healthily together with their children. Objective We developed a Motivational Interviewing Conversational Agent (MICA) to improve healthy eating in parents, who serve as proxies for health behavior change in their children. Proxy relationships involve a person serving as a catalyst for behavior change in another person; parents, as proxies, can bring about behavior change in their children. Methods We conducted user test sessions of the MICA prototype to understand its perceived acceptability and usefulness among parents. A total of 24 parents of young children participated in 2 user test sessions with MICA, approximately 2 weeks apart. After each session, we used qualitative interviews to understand parents’ perceptions and suggestions for improving MICA. Results Findings showed that participants perceived MICA as useful for helping them self-reflect and motivating them to adopt healthier eating habits together with their children. Participants further suggested various ways in which MICA could help them safely manage their children’s eating behaviors and provide customized support for their proxy needs and goals. Conclusions We discuss how the user experience of CAs can be improved to support parents who serve as proxies in changing their children’s behavior, and conclude with implications for the broader design of MI-based CAs that support proxy relationships for health behavior change.
Affiliation(s)
- Diva Smriti, College of Computing and Informatics, Drexel University, Philadelphia, PA, United States
- Tsui-Sui Annie Kao, College of Nursing, Michigan State University, East Lansing, MI, United States
- Rahil Rathod, Tata Consultancy Services, Edison, NJ, United States
- Ji Youn Shin, College of Design, University of Minnesota, Minneapolis, MN, United States
- Wei Peng, College of Communication Arts and Sciences, Michigan State University, East Lansing, MI, United States
- Jake Williams, College of Computing and Informatics, Drexel University, Philadelphia, PA, United States
- Munif Ishad Mujib, College of Computing and Informatics, Drexel University, Philadelphia, PA, United States
- Jina Huh-Yoo, College of Computing and Informatics, Drexel University, Philadelphia, PA, United States

34

35
Lv L, Huang M, Huang R. Anthropomorphize service robots: the role of human nature traits. Service Industries Journal 2022. [DOI: 10.1080/02642069.2022.2048821]
Affiliation(s)
- Linxiang Lv, Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Minxue Huang, Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Ruyao Huang, Economics and Management School, Wuhan University, Wuhan, People’s Republic of China

36
Zhao T, Cui J, Hu J, Dai Y, Zhou Y. Is Artificial Intelligence Customer Service Satisfactory? Insights Based on Microblog Data and User Interviews. Cyberpsychol Behav Soc Netw 2022; 25:110-117. [PMID: 34935458 DOI: 10.1089/cyber.2021.0155]
Abstract
A growing number of sectors are delivering customer services powered by artificial intelligence (AI) instead of humans, with evidence indicating labor cost reduction and efficiency improvement. However, it is worthwhile to examine the extent to which consumers are satisfied with AI service agents. In two studies based on an analysis of 17,673 Weibo posts (Study 1) and 33 interviews (Study 2), we constructed a pair of theoretical models of consumer attitudes toward AI services: a sentiment model and an evaluation model. The results of the Weibo analysis showed that consumers display a stronger negative attitude toward AI customer service than toward its human counterpart. Complaints about AI customer service mainly concern its poor problem-solving ability, while untimely responses and a lack of human touch also dissatisfy customers. Whether consumers offer positive feedback depends mainly on voice traits and service attitudes. The interview results confirm an overall negative attitude of consumers toward AI customer service. Consumers also recognize AI customer service agents as humanlike and as relieving the stress of social interaction. Taken together, these findings reveal Chinese customers' attitudes toward AI service solutions and provide concrete suggestions for developing and upgrading AI customer services.
Affiliation(s)
- Tengfei Zhao, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Jingjing Cui, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Jiayu Hu, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Yan Dai, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Yang Zhou, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China

37
Wang H, Gupta S, Singhal A, Muttreja P, Singh S, Sharma P, Piterova A. An Artificial Intelligence Chatbot for Young People's Sexual and Reproductive Health in India (SnehAI): Instrumental Case Study. J Med Internet Res 2022; 24:e29969. [PMID: 34982034 PMCID: PMC8764609 DOI: 10.2196/29969]
Abstract
Background Leveraging artificial intelligence (AI)–driven apps for health education and promotion can help in the accomplishment of several United Nations sustainable development goals. SnehAI, developed by the Population Foundation of India, is the first Hinglish (Hindi + English) AI chatbot, deliberately designed for social and behavioral changes in India. It provides a private, nonjudgmental, and safe space to spur conversations about taboo topics (such as safe sex and family planning) and offers accurate, relatable, and trustworthy information and resources. Objective This study aims to use the Gibson theory of affordances to examine SnehAI and offer scholarly guidance on how AI chatbots can be used to educate adolescents and young adults, promote sexual and reproductive health, and advocate for the health entitlements of women and girls in India. Methods We adopted an instrumental case study approach that allowed us to explore SnehAI from the perspectives of technology design, program implementation, and user engagement. We also used a mix of qualitative insights and quantitative analytics data to triangulate our findings. Results SnehAI demonstrated strong evidence across fifteen functional affordances: accessibility, multimodality, nonlinearity, compellability, queriosity, editability, visibility, interactivity, customizability, trackability, scalability, glocalizability, inclusivity, connectivity, and actionability. SnehAI also effectively engaged its users, especially young men, with 8.2 million messages exchanged across a 5-month period. Almost half of the incoming user messages were texts of deeply personal questions and concerns about sexual and reproductive health, as well as allied topics. Overall, SnehAI successfully presented itself as a trusted friend and mentor; the curated content was both entertaining and educational, and the natural language processing system worked effectively to personalize the chatbot response and optimize user experience. 
Conclusions SnehAI represents an innovative, engaging, and educational intervention that enables vulnerable and hard-to-reach population groups to talk and learn about sensitive and important issues. SnehAI is a powerful testimonial of the vital potential that lies in AI technologies for social good.
Affiliation(s)
- Hua Wang, Department of Communication, University at Buffalo, The State University of New York, Buffalo, NY, United States
- Sneha Gupta, Department of Communication, University at Buffalo, The State University of New York, Buffalo, NY, United States
- Arvind Singhal, Department of Communication, The University of Texas at El Paso, El Paso, TX, United States; School of Business and Social Sciences, Inland University of Applied Sciences, Elverum, Norway

38
Mistry P. The New Frontiers of AI in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_56]
39

The New Frontiers of AI in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-58080-3_56-2]

40
Curtis RG, Bartel B, Ferguson T, Blake HT, Northcott C, Virgara R, Maher CA. Improving User Experience of Virtual Health Assistants: Scoping Review. J Med Internet Res 2021; 23:e31737. [PMID: 34931997 PMCID: PMC8734926 DOI: 10.2196/31737]
Abstract
Background Virtual assistants can be used to deliver innovative health programs that provide appealing, personalized, and convenient health advice and support at scale and at low cost. Design characteristics that influence the look and feel of the virtual assistant, such as visual appearance or language features, may significantly influence users’ experience of and engagement with the assistant. Objective This scoping review aims to provide an overview of experimental research examining how design characteristics of virtual health assistants affect user experience, summarize its findings, and provide recommendations for the design of virtual health assistants where sufficient evidence exists. Methods We searched 5 electronic databases (Web of Science, MEDLINE, Embase, PsycINFO, and ACM Digital Library) to identify studies that used an experimental design to compare the effects of design characteristics between 2 or more versions of an interactive virtual health assistant on user experience among adults. Data were synthesized descriptively. Health domains, design characteristics, and outcomes were categorized, and descriptive statistics were used to summarize the body of research. Results for each study were categorized as positive, negative, or no effect, and a matrix of design characteristics and outcome categories was constructed to summarize the findings. Results The database searches identified 6879 articles after the removal of duplicates. We included 48 articles representing 45 unique studies in the review. The most common health domains were mental health and physical activity. Studies most commonly examined design characteristics in the categories of visual design or conversational style and relational behavior, and assessed outcomes in the categories of personality, satisfaction, relationship, or use intention.
Over half of the design characteristics were examined by only 1 study. Results suggest that empathy and relational behavior and self-disclosure are related to more positive user experience. Results also suggest that if a human-like avatar is used, realistic rendering and medical attire may potentially be related to more positive user experience; however, more research is needed to confirm this. Conclusions There is a growing body of scientific evidence examining the impact of virtual health assistants’ design characteristics on user experience. Taken together, data suggest that the look and feel of a virtual health assistant does affect user experience. Virtual health assistants that show empathy, display nonverbal relational behaviors, and disclose personal information about themselves achieve better user experience. At present, the evidence base is broad, and the studies are typically small in scale and highly heterogeneous. Further research, particularly using longitudinal research designs with repeated user interactions, is needed to inform the optimal design of virtual health assistants.
Affiliation(s)
- Rachel G Curtis, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Bethany Bartel, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Ty Ferguson, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Henry T Blake, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Celine Northcott, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Rosa Virgara, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia
- Carol A Maher, UniSA Allied Health and Human Performance, Alliance for Research in Exercise, Nutrition and Activity, University of South Australia, Adelaide, Australia

41
Dang J, Liu L. A growth mindset about human minds promotes positive responses to intelligent technology. Cognition 2021; 220:104985. [PMID: 34920301 DOI: 10.1016/j.cognition.2021.104985]
Abstract
Perceiving minds in technology agents, for example, robots designed with artificial intelligence (AI), is common and crucial in modern life. However, past studies have revealed that robots with a high level of mind elicit polarized responses. From a human-robot interaction perspective, we proposed that people's responses to robots originate, in part, from differences between fixed and growth mindsets about human minds: the beliefs regarding whether humans' mental capacities are fixed or incremental. We conducted five studies to test this assumption. A growth mindset about human minds was associated with or led to lower levels of negative feelings about robots (Study 1), more perceptions of robots as allies versus enemies (Study 2), more support for robotic research (Studies 3 and 4), and greater willingness to interact with robots (Study 5). Furthermore, the effect of a growth mindset about human minds on favorable responses to robots was more pronounced when robots were perceived as having high (versus low) levels of mind (Studies 3-5) and was mediated by decreased concerns about robots (Study 5). By emphasizing the nuanced role of mindset beliefs about human minds in responses to intelligent technology, this research provides not only a new perspective on research into minds but also important implications for human-technology relationships.
Affiliation(s)
- Jianning Dang, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Li Liu, Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China

42
Christoforakos L, Feicht N, Hinkofer S, Löscher A, Schlegl SF, Diefenbach S. Connect With Me. Exploring Influencing Factors in a Human-Technology Relationship Based on Regular Chatbot Use. Front Digit Health 2021; 3:689999. [PMID: 34870266 PMCID: PMC8636701 DOI: 10.3389/fdgth.2021.689999]
Abstract
Companion technologies, such as social robots and conversational chatbots, take increasing responsibility for daily tasks and support our physical and mental health. Especially in the domain of healthcare, where technologies are often applied for long-term use, our experience with and relationship to such technologies become ever more relevant. Based on a 2-week interaction period with a conversational chatbot, our study (N = 58) explores the relationship between humans and technology. In particular, our study focuses on felt social connectedness of participants to the technology, possibly related characteristics of technology and users (e.g., individual tendency to anthropomorphize, individual need to belong), as well as possibly affected outcome variables (e.g., desire to socialize with other humans). The participants filled in short daily and 3 weekly questionnaires. Results showed that interaction duration and intensity positively predicted social connectedness to the chatbot. Thereby, perceiving the chatbot as anthropomorphic mediated the interrelation of interaction intensity and social connectedness to the chatbot. Also, the perceived social presence of the chatbot mediated the relationship between interaction duration as well as interaction intensity and social connectedness to the chatbot. Characteristics of the user did not affect the interrelations of chatbot interaction duration or intensity and perceived anthropomorphism or social presence. Furthermore, we did not find a negative correlation between felt social connectedness of users to the technology and their desire to socialize with other humans. In sum, our findings provide both theoretical and practical contributions. Our study suggests that regular interaction with a technology can foster feelings of social connectedness, implying transferability of dynamics known from interpersonal interaction. 
Moreover, social connectedness could be supported by technology design that facilitates perceptions of anthropomorphism and social presence. While such means could help establish an intense user-technology relationship and long-term engagement, the contexts in which anthropomorphic design is actually the means of choice should be carefully reflected upon. Future research should examine individual and societal consequences to foster responsible technology development in healthcare and beyond.
Affiliation(s)
- Lara Christoforakos, Nina Feicht, Simone Hinkofer, Annalena Löscher, Sonja F Schlegl, Sarah Diefenbach: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany

43
Maenhout L, Peuters C, Cardon G, Compernolle S, Crombez G, DeSmet A. Participatory Development and Pilot Testing of an Adolescent Health Promotion Chatbot. Front Public Health 2021;9:724779. PMID: 34858919; PMCID: PMC8632020; DOI: 10.3389/fpubh.2021.724779.
Abstract
Background: The use of chatbots may increase engagement with digital behavior change interventions in youth by providing human-like interaction. Following a Person-Based Approach (PBA), integrating user preferences into digital tool development is crucial for engagement, yet information on youth preferences for health chatbots is currently limited. Objective: The aim of this study was to gain an in-depth understanding of adolescents' expectations and preferences for health chatbots and to describe the systematic development of a health promotion chatbot. Methods: Three studies at three different stages of the PBA were conducted: (1) a qualitative focus group study (n = 36), (2) log data analysis during pretesting (n = 6), and (3) mixed-methods pilot testing (n = 73). Results: Confidentiality, connection to youth culture, and preferences for referral to other sources were important chatbot aspects for youth. Youth also wanted a chatbot to provide small talk and broader support (e.g., technical support with the tool) rather than support related specifically to health behaviors. Despite the meticulous PBA, user engagement with the developed chatbot was modest. Conclusion: This study highlights that conducting formative research at different stages adds value and that adolescents have different chatbot preferences than adults. Further improvement in building an engaging chatbot for youth may stem from using living databases.
Affiliation(s)
- Laura Maenhout: Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium; Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium; Research Foundation Flanders (FWO), Brussels, Belgium
- Carmen Peuters: Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium; Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Greet Cardon: Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- Sofie Compernolle: Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium; Research Foundation Flanders (FWO), Brussels, Belgium
- Geert Crombez: Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Ann DeSmet: Faculty of Psychology and Educational Sciences, Université Libre de Bruxelles, Brussels, Belgium; Department of Communication Studies, Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium

44
Tsai WS, Lun D, Carcioppolo N, Chuan C. Human versus chatbot: Understanding the role of emotion in health marketing communication for vaccines. Psychology & Marketing 2021;38:2377-2392. PMID: 34539051; PMCID: PMC8441681; DOI: 10.1002/mar.21556.
Abstract
Based on the theoretical framework of agency effect, this study examined the role of affect in influencing the effects of chatbot versus human brand representatives in the context of health marketing communication about HPV vaccines. We conducted a 2 (perceived agency: chatbot vs. human) × 3 (affect elicitation: embarrassment, anger, neutral) between-subject lab experiment with 142 participants, who were randomly assigned to interact with either a perceived chatbot or a human representative. Key findings from self-reported and behavioral data highlight the complexity of consumer-chatbot communication. Specifically, participants reported lower interaction satisfaction with the chatbot than with the human representative when anger was evoked. However, participants were more likely to disclose concerns of HPV risks and provide more elaborate answers to the perceived human representative when embarrassment was elicited. Overall, the chatbot performed comparably to the human representative in terms of perceived usefulness and influence over participants' compliance intention in all emotional contexts. The findings complement the Computers as Social Actors paradigm and offer strategic guidelines to capitalize on the relative advantages of chatbot versus human representatives.
Affiliation(s)
- Di Lun: Department of Communication Studies, University of Miami, Miami, Florida, USA
- Ching-Hua Chuan: Department of Interactive Media, University of Miami, Miami, Florida, USA

45
Social cues and implications for designing expert and competent artificial agents: A systematic review. Telematics and Informatics 2021. DOI: 10.1016/j.tele.2021.101721.

46
Xu L, Sanders L, Li K, Chow JCL. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer 2021;7:e27850. PMID: 34847056; PMCID: PMC8669585; DOI: 10.2196/27850.
Abstract
Background: Chatbots are a timely topic applied in various fields, including medicine and health care, for human-like knowledge transfer and communication. Machine learning, a subset of artificial intelligence, has proven particularly applicable in health care, with the ability for complex dialog management and conversational flexibility. Objective: This review article aims to report on the recent advances and current trends in chatbot technology in medicine. A brief historical overview, along with the developmental progress and design characteristics, is first introduced. The focus will be on cancer therapy, with in-depth discussions and examples of diagnosis, treatment, monitoring, patient support, workflow efficiency, and health promotion. In addition, this paper will explore the limitations and areas of concern, highlighting ethical, moral, security, technical, and regulatory standards and evaluation issues to explain the hesitancy in implementation. Methods: A search of the literature published in the past 20 years was conducted using the IEEE Xplore, PubMed, Web of Science, Scopus, and OVID databases. The screening of chatbots was guided by the open-access Botlist directory for health care components and further divided according to the following criteria: diagnosis, treatment, monitoring, support, workflow, and health promotion. Results: Even after addressing these issues and establishing the safety or efficacy of chatbots, human elements in health care will not be replaceable. Therefore, chatbots have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes. Other applications in pandemic support, global health, and education are yet to be fully explored. Conclusions: Further research and interdisciplinary collaboration could advance this technology to dramatically improve the quality of care for patients, rebalance the workload for clinicians, and revolutionize the practice of medicine.
Affiliation(s)
- Lu Xu: Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, Western University, London, ON, Canada
- Leslie Sanders: Department of Humanities, York University, Toronto, ON, Canada
- Kay Li: Department of English, York University, Toronto, ON, Canada
- James C L Chow: Department of Medical Physics, Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada

47
|
Liu B, Wei L. Machine gaze in online behavioral targeting: The effects of algorithmic human likeness on social presence and social influence. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106926.

48
Kim W, Ryoo Y. Hypocrisy Induction: Using Chatbots to Promote COVID-19 Social Distancing. Cyberpsychology, Behavior, and Social Networking 2021;25:27-36. PMID: 34652216; DOI: 10.1089/cyber.2021.0057.
Abstract
Considering widespread resistance to COVID-19 preventive measures, the authors draw on hypocrisy induction theory to examine whether online chatbots can be used to induce hypocrisy and increase compliance with social distancing guidelines. The experiment demonstrates that when a chatbot induces hypocrisy by reminding participants that they have failed to comply with social distancing recommendations, they feel guilty about violating social norms. To reinstate confidence in their personal standards, they form favorable attitudes toward the chatbot ad and establish intentions to comply with recommendations. Interestingly, the persuasive power of hypocrisy induction differs depending on the level of anthropomorphism of the chatbot. When a humanlike chatbot reminds them of their hypocritical behavior, participants feel higher levels of guilt and act more desirably, but a machinelike chatbot is not effective for creating guilt or generating compliance.
Affiliation(s)
- WooJin Kim: Charles H. Sandage Department of Advertising, College of Media, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Yuhosua Ryoo: School of Journalism, College of Arts and Media, Southern Illinois University, Carbondale, Illinois, USA

49
Mattiassi ADA, Sarrica M, Cavallo F, Fortunati L. What do humans feel with mistreated humans, animals, robots, and objects? Exploring the role of cognitive empathy. Motivation and Emotion 2021. DOI: 10.1007/s11031-021-09886-2.
Abstract
The aim of this paper is to present a study in which we compare the degree of empathy that a convenience sample of university students expressed with humans, animals, robots, and objects. The present study broadens the spectrum of empathy-eliciting elements explored previously while comparing different facets of empathy. Here we used video clips of mistreated humans, animals, robots, and objects to elicit empathic reactions and to measure attributed emotions. The use of such a broad spectrum of elements allowed us to infer the role of different features of the selected elements, specifically experience (how much the element is able to understand the events of the environment) and degree of anthropo-/zoomorphization. The results show that participants expressed empathy differently with the various social actors being mistreated. A comparison between the present results and previous results on vicarious feelings shows that congruence between self and other experience did not always hold and was modulated by familiarity with robotic artefacts in daily use.
50
Erel H, Trayman D, Levy C, Manor A, Mikulincer M, Zuckerman O. Enhancing Emotional Support: The Effect of a Robotic Object on Human–Human Support Quality. Int J Soc Robot 2021. DOI: 10.1007/s12369-021-00779-5.