51. Social cues and implications for designing expert and competent artificial agents: A systematic review. Telematics and Informatics 2021. DOI: 10.1016/j.tele.2021.101721

52. Xu L, Sanders L, Li K, Chow JCL. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer 2021; 7:e27850. PMID: 34847056; PMCID: PMC8669585; DOI: 10.2196/27850
Abstract
Background: Chatbots are a timely topic applied in various fields, including medicine and health care, for human-like knowledge transfer and communication. Machine learning, a subset of artificial intelligence, has proven particularly applicable in health care, enabling complex dialog management and conversational flexibility.
Objective: This review article aims to report on the recent advances and current trends in chatbot technology in medicine. A brief historical overview, along with the developmental progress and design characteristics, is first introduced. The focus is on cancer therapy, with in-depth discussion and examples of diagnosis, treatment, monitoring, patient support, workflow efficiency, and health promotion. In addition, this paper explores the limitations and areas of concern, highlighting ethical, moral, security, technical, and regulatory standards and evaluation issues to explain the hesitancy in implementation.
Methods: A search of the literature published in the past 20 years was conducted using the IEEE Xplore, PubMed, Web of Science, Scopus, and OVID databases. The screening of chatbots was guided by the open-access Botlist directory for health care components and further divided according to the following criteria: diagnosis, treatment, monitoring, support, workflow, and health promotion.
Results: Even after addressing these issues and establishing the safety or efficacy of chatbots, human elements in health care will not be replaceable. Therefore, chatbots have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes. Other applications in pandemic support, global health, and education are yet to be fully explored.
Conclusions: Further research and interdisciplinary collaboration could advance this technology to dramatically improve the quality of care for patients, rebalance the workload for clinicians, and revolutionize the practice of medicine.
Affiliations:
- Lu Xu: Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, Western University, London, ON, Canada
- Leslie Sanders: Department of Humanities, York University, Toronto, ON, Canada
- Kay Li: Department of English, York University, Toronto, ON, Canada
- James C L Chow: Department of Medical Physics, Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada

53. Liu B, Wei L. Machine gaze in online behavioral targeting: The effects of algorithmic human likeness on social presence and social influence. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106926

54. Kim W, Ryoo Y. Hypocrisy Induction: Using Chatbots to Promote COVID-19 Social Distancing. Cyberpsychology, Behavior, and Social Networking 2021; 25:27-36. PMID: 34652216; DOI: 10.1089/cyber.2021.0057
Abstract
Considering widespread resistance to COVID-19 preventive measures, the authors draw on hypocrisy induction theory to examine whether online chatbots can be used to induce hypocrisy and increase compliance with social distancing guidelines. The experiment demonstrates that when a chatbot induces hypocrisy by reminding participants that they have failed to comply with social distancing recommendations, they feel guilty about violating social norms. To reinstate confidence in their personal standards, they form favorable attitudes toward the chatbot ad and establish intentions to comply with recommendations. Interestingly, the persuasive power of hypocrisy induction differs depending on the level of anthropomorphism of the chatbot. When a humanlike chatbot reminds them of their hypocritical behavior, participants feel higher levels of guilt and act more desirably, but a machinelike chatbot is not effective for creating guilt or generating compliance.
Affiliations:
- WooJin Kim: Charles H. Sandage Department of Advertising, College of Media, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Yuhosua Ryoo: School of Journalism, College of Arts and Media, Southern Illinois University, Carbondale, Illinois, USA

55. Mattiassi ADA, Sarrica M, Cavallo F, Fortunati L. What do humans feel with mistreated humans, animals, robots, and objects? Exploring the role of cognitive empathy. Motivation and Emotion 2021. DOI: 10.1007/s11031-021-09886-2
Abstract
The aim of this paper is to present a study in which we compare the degree of empathy that a convenience sample of university students expressed with humans, animals, robots, and objects. The present study broadens the spectrum of empathy-eliciting elements explored previously while comparing different facets of empathy. We used video clips of mistreated humans, animals, robots, and objects to elicit empathic reactions and to measure attributed emotions. The use of such a broad spectrum of elements allowed us to infer the role of different features of the selected elements, specifically experience (how much the element is able to understand the events of the environment) and degree of anthropo-/zoomorphization. The results show that participants expressed empathy differently with the various social actors being mistreated. A comparison between the present results and previous results on vicarious feelings shows that congruence between self and other experience did not always hold and was modulated by familiarity with robotic artefacts of daily use.
56. Erel H, Trayman D, Levy C, Manor A, Mikulincer M, Zuckerman O. Enhancing Emotional Support: The Effect of a Robotic Object on Human–Human Support Quality. Int J Soc Robot 2021. DOI: 10.1007/s12369-021-00779-5

57. Ireland D, Bradford D, Szepe E, Lynch E, Martyn M, Hansen D, Gaff C. Introducing Edna: A trainee chatbot designed to support communication about additional (secondary) genomic findings. Patient Education and Counseling 2021; 104:739-749. PMID: 33234441; DOI: 10.1016/j.pec.2020.11.007
Abstract
Objective: To support informed decision-making about reanalysis of clinical genomic data for risk of preventable conditions ('additional findings') by developing a chatbot (electronic genetic resource, 'Edna').
Methods: Interactions in pre-test genetic counseling sessions (13.5 h) about additional findings were characterized using proponent, thematic, and semantic analyses of transcripts. We then wrote interfaces to draw supplementary data from external genetics applications. To create Edna, this content was programmed using a chatbot framework that interacts with patients via speech-to-text.
Results: Conditions, terms, explanations of concepts, and key factors to consider in decision making were all encoded into chatbot conversations emulating counseling session flows. Patient agency can be enhanced by prompted consideration of the personal and familial implications of testing. Similarly, health literacy can be broadened through explanation of genetic conditions and terminology. Novel aspects include sentiment analysis and collection of family history. Medical advice and the impact of existing genetic conditions were deemed inappropriate for inclusion.
Conclusion: Edna's successful development represents a movement towards accessible, acceptable, and well-supported digital health processes for patients to make informed decisions about additional findings.
Practice implications: Edna complements genetic counseling by collecting and providing genomic information before or after pre-test consultations.
Affiliations:
- David Ireland: Australian e-Health Research Centre, CSIRO, UQ Health Sciences Building 901/16, Royal Brisbane and Women's Hospital, Herston, 4029, Australia
- DanaKai Bradford: Australian e-Health Research Centre, CSIRO, UQ Health Sciences Building 901/16, Royal Brisbane and Women's Hospital, Herston, 4029, Australia
- Emma Szepe: Melbourne Genomics Health Alliance, Walter and Eliza Hall Institute, 1G Royal Parade, Parkville, 3052, Australia; Department of Paediatrics, University of Melbourne, Flemington Road, Parkville, 3052, Australia
- Ella Lynch: Melbourne Genomics Health Alliance, Walter and Eliza Hall Institute, 1G Royal Parade, Parkville, 3052, Australia; Victorian Clinical Genetics Services, Flemington Road, Parkville, 3052, Australia; Murdoch Children's Research Institute, Flemington Road, Parkville, 3052, Australia
- Melissa Martyn: Melbourne Genomics Health Alliance, Walter and Eliza Hall Institute, 1G Royal Parade, Parkville, 3052, Australia; Department of Paediatrics, University of Melbourne, Flemington Road, Parkville, 3052, Australia; Murdoch Children's Research Institute, Flemington Road, Parkville, 3052, Australia
- David Hansen: Australian e-Health Research Centre, CSIRO, UQ Health Sciences Building 901/16, Royal Brisbane and Women's Hospital, Herston, 4029, Australia
- Clara Gaff: Melbourne Genomics Health Alliance, Walter and Eliza Hall Institute, 1G Royal Parade, Parkville, 3052, Australia; Department of Paediatrics, University of Melbourne, Flemington Road, Parkville, 3052, Australia; Murdoch Children's Research Institute, Flemington Road, Parkville, 3052, Australia

58. Bérubé C, Schachner T, Keller R, Fleisch E, V Wangenheim F, Barata F, Kowatsch T. Voice-Based Conversational Agents for the Prevention and Management of Chronic and Mental Health Conditions: Systematic Literature Review. J Med Internet Res 2021; 23:e25933. PMID: 33658174; PMCID: PMC8042539; DOI: 10.2196/25933
Abstract
Background: Chronic and mental health conditions are increasingly prevalent worldwide. As devices in our everyday lives offer more and more voice-based self-service, voice-based conversational agents (VCAs) have the potential to support the prevention and management of these conditions in a scalable manner. However, evidence on VCAs dedicated to the prevention and management of chronic and mental health conditions is unclear.
Objective: This study provides a better understanding of the current methods used in the evaluation of health interventions for the prevention and management of chronic and mental health conditions delivered through VCAs.
Methods: We conducted a systematic literature review using the PubMed MEDLINE, Embase, PsycINFO, Scopus, and Web of Science databases. We included primary research involving the prevention or management of chronic or mental health conditions through a VCA and reporting an empirical evaluation of the system in terms of system accuracy, technology acceptance, or both. Two independent reviewers conducted the screening and data extraction, and agreement between them was measured using Cohen kappa. A narrative approach was used to synthesize the selected records.
Results: Of 7170 prescreened papers, 12 met the inclusion criteria. All studies were nonexperimental. The VCAs provided behavioral support (n=5), health monitoring services (n=3), or both (n=4). The interventions were delivered via smartphones (n=5), tablets (n=2), or smart speakers (n=3); in 2 cases, no device was specified. A total of 3 VCAs targeted cancer, whereas 2 VCAs targeted diabetes and heart failure. The other VCAs targeted hearing impairment, asthma, Parkinson disease, dementia, autism, intellectual disability, and depression. The majority of the studies (n=7) assessed technology acceptance, but only a few (n=3) used validated instruments. Half of the studies (n=6) reported performance measures either on speech recognition or on the ability of VCAs to respond to health-related queries. Only a minority of the studies (n=2) reported behavioral measures or a measure of attitudes toward intervention-targeted health behavior. Moreover, only a minority (n=4) reported controlling for participants' previous experience with technology. Finally, risk of bias varied markedly.
Conclusions: The heterogeneity in the methods, the limited number of studies identified, and the high risk of bias show that research on VCAs for chronic and mental health conditions is still in its infancy. Although the results on system accuracy and technology acceptance are encouraging, there is still a need to establish more conclusive evidence on the efficacy of VCAs for the prevention and management of chronic and mental health conditions, both in absolute terms and in comparison with standard health care.
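The inter-rater agreement statistic this review reports, Cohen kappa, can be computed directly from two reviewers' screening decisions. The sketch below is a generic illustration, not code or data from the study; the function name and the example include/exclude labels are invented:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies,
    # summed over all labels either rater used.
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include (1) / exclude (0) screening decisions by two reviewers.
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohen_kappa(a, b), 4))  # → 0.4667, i.e. moderate agreement
```

Kappa corrects raw percent agreement for the agreement two raters would reach by chance given their marginal label frequencies, which is why it is preferred over simple agreement in systematic-review screening.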
Affiliations:
- Caterina Bérubé: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Theresa Schachner: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Roman Keller: Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore, Singapore
- Elgar Fleisch: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, CREATE, Singapore-ETH Centre, Singapore, Singapore; Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland
- Florian V Wangenheim: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, CREATE, Singapore-ETH Centre, Singapore, Singapore
- Filipe Barata: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Tobias Kowatsch: Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, CREATE, Singapore-ETH Centre, Singapore, Singapore; Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland; Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore

59. Chung K, Cho HY, Park JY. A Chatbot for Perinatal Women's and Partners' Obstetric and Mental Health Care: Development and Usability Evaluation Study. JMIR Med Inform 2021; 9:e18607. PMID: 33656442; PMCID: PMC7970298; DOI: 10.2196/18607
Abstract
Background: To motivate people to adopt medical chatbots, establishing a specialized medical knowledge database that fits their personal interests is of great importance in developing a chatbot for perinatal care, particularly with the help of health professionals.
Objective: The objectives of this study were to develop a user-friendly question-and-answer (Q&A) knowledge database–based chatbot (Dr. Joy) for perinatal women's and their partners' obstetric and mental health care by applying a text-mining technique, and to evaluate it through contextual usability testing (UT), thus determining whether this medical chatbot built on the mobile instant messenger KakaoTalk can provide its male and female users with a good user experience.
Methods: Two men aged 38 and 40 years and 13 women aged 27 to 43 years, in pregnancy preparation or at different pregnancy stages, were enrolled. All participants completed the 7-day UT, during which they were given the daily tasks of asking Dr. Joy at least 3 questions at any time and place, giving the chatbot either positive or negative feedback with emoji, using at least one feature of the chatbot, and finally sending a facilitator screenshots of the day's usage history via KakaoTalk before midnight. One day after completing the UT, all participants were asked to fill out a questionnaire on the evaluation of usability, perceived benefits and risks, intention to seek and share health information on the chatbot, and strengths and weaknesses of its use, as well as demographic characteristics.
Results: Despite the relatively higher score for ease of learning (EOL), Spearman correlations indicated that EOL was not significantly associated with usefulness (ρ=0.26; P=.36), ease of use (ρ=0.19; P=.51), satisfaction (ρ=0.21; P=.46), or total usability scores (ρ=0.32; P=.24). Unlike EOL, all 3 subfactors and total usability had significant positive associations with each other (all ρ>0.80; P<.001). Furthermore, perceived risks exhibited no significant negative associations with perceived benefits (ρ=−0.29; P=.30) or intention to seek (SEE; ρ=−0.28; P=.32) or share (SHA; ρ=−0.24; P=.40) health information on the chatbot via KakaoTalk, whereas perceived benefits exhibited significant positive associations with both SEE and SHA. Perceived benefits were more strongly associated with SEE (ρ=0.94; P<.001) than with SHA (ρ=0.70; P=.004).
Conclusions: This study demonstrates the potential for uptake of this newly developed Q&A knowledge database–based KakaoTalk chatbot for obstetric and mental health care. As Dr. Joy offered quality content with both utilitarian and hedonic value, its male and female users could be encouraged to use medical chatbots in a convenient, easy-to-use, and enjoyable manner. To boost continued usage intention, Dr. Joy's Q&A sets need to be periodically updated to satisfy user intent by monitoring both male and female user utterances.
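The ρ values above are Spearman rank correlations, i.e., Pearson correlations computed on the ranks of the scores. The sketch below is a generic illustration of that computation (the example score vectors are invented, not data from the study):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Tied values receive their average 1-based rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            # Extend j over the run of values tied with v[order[i]].
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical usability sub-scores from two questionnaire items.
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
print(round(rho, 2))  # → 0.8
```

Because it operates on ranks, Spearman correlation is robust to monotone rescaling of Likert-style questionnaire scores, which is why it is a common choice for small usability samples like this one.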
Affiliations:
- Kyungmi Chung: Department of Psychiatry, Yonsei University College of Medicine, Yongin Severance Hospital, Yonsei University Health System, Yongin-si, Republic of Korea; Center for Digital Health, Yongin Severance Hospital, Yonsei University Health System, Yongin-si, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Yonsei University Health System, Seoul, Republic of Korea
- Hee Young Cho: Department of Obstetrics and Gynecology, CHA Gangnam Medical Center, CHA University, Seoul, Republic of Korea
- Jin Young Park: Department of Psychiatry, Yonsei University College of Medicine, Yongin Severance Hospital, Yonsei University Health System, Yongin-si, Republic of Korea; Center for Digital Health, Yongin Severance Hospital, Yonsei University Health System, Yongin-si, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Yonsei University Health System, Seoul, Republic of Korea

60. Dang J, Liu L. Robots are friends as well as foes: Ambivalent attitudes toward mindful and mindless AI robots in the United States and China. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2020.106612

61. Fox J, Gambino A. Relationship Development with Humanoid Social Robots: Applying Interpersonal Theories to Human-Robot Interaction. Cyberpsychology, Behavior, and Social Networking 2021; 24:294-299. PMID: 33434097; DOI: 10.1089/cyber.2020.0181
Abstract
Humanoid social robots (HSRs) are human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to interact with people. A common assumption is that social robots can and should mimic humans, such that human-robot interaction (HRI) closely resembles human-human (i.e., interpersonal) interaction. Research is often framed from the assumption that rules and theories that apply to interpersonal interaction should apply to HRI (e.g., the computers are social actors framework). Here, we challenge these assumptions and consider more deeply the relevance and applicability of our knowledge about personal relationships to relationships with social robots. First, we describe the typical characteristics of HSRs available to consumers currently, elaborating characteristics relevant to understanding social interactions with robots such as form anthropomorphism and behavioral anthropomorphism. We also consider common social affordances of modern HSRs (persistence, personalization, responsiveness, contingency, and conversational control) and how these align with human capacities and expectations. Next, we present predominant interpersonal theories whose primary claims are foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). We consider whether interpersonal theories are viable frameworks for studying HRI and human-robot relationships given their theoretical assumptions and claims. We conclude by providing suggestions for researchers and designers, including alternatives to equating human-robot relationships to human-human relationships.
Affiliations:
- Jesse Fox: School of Communication, The Ohio State University, Columbus, Ohio, USA
- Andrew Gambino: Donald P. Bellisario College of Communications, The Pennsylvania State University, University Park, Pennsylvania, USA

62. Lu X, Zhang R. Impact of patient information behaviours in online health communities on patient compliance and the mediating role of patients' perceived empathy. Patient Education and Counseling 2021; 104:186-193. PMID: 32665071; DOI: 10.1016/j.pec.2020.07.001
Abstract
Objective: Patient health information seeking and physician-patient communication in online health communities (OHCs) have been shown to affect patient compliance, but related studies from psychological perspectives are limited. This study aims to investigate the impact of patient health information seeking and physician-patient communication in OHCs on patient compliance.
Methods: This study established a research model and proposed six hypotheses. An anonymous survey was conducted in Chinese OHCs. Confirmatory factor analysis, partial least squares, and structural equation modelling were used to test the hypotheses.
Results: We received 371 responses, of which 316 were valid. Patient health information seeking and physician-patient communication frequency in OHCs had positive impacts on patients' perceived affective and cognitive empathy, which in turn positively affected patient compliance.
Conclusions: Patient compliance can be improved through patient health information seeking, physician-patient communication in OHCs, and affective and cognitive empathy. Patients' perceived affective empathy is the preferred pathway for improving patient compliance.
Practice implications: Physicians should encourage patients to seek health information and communicate with them through OHCs; be concerned about patients' experiences, feelings, and attitudes; understand patients' demands and mental states; and show patients that they can feel their pain. Increasing physician-patient communication frequency in OHCs can help improve patient compliance.
Affiliations:
- Xinyi Lu: School of Economics and Management, Beijing Jiaotong University, Beijing, China
- Runtong Zhang: School of Economics and Management, Beijing Jiaotong University, Beijing, China

63. Mistry P. The New Frontiers of AI in Medicine. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_56-1

64. Kim J, Shin S, Bae K, Oh S, Park E, del Pobil AP. Can AI be a content generator? Effects of content generators and information delivery methods on the psychology of content consumers. Telematics and Informatics 2020. DOI: 10.1016/j.tele.2020.101452

65. Zhou Y, Ren F. CERG: Chinese Emotional Response Generator with Retrieval Method. Research 2020; 2020:2616410. PMID: 33015633; PMCID: PMC7510341; DOI: 10.34133/2020/2616410
Abstract
The dialogue system has always been one of the important topics in the domain of artificial intelligence. So far, most mature dialogue systems are task-oriented, while non-task-oriented dialogue systems still leave much room for improvement. We propose a data-driven, non-task-oriented dialogue generator, "CERG," based on neural networks. The model has emotion recognition capability and can generate corresponding responses. The data set we adopt comes from the NTCIR-14 STC-3 CECG subtask, which contains more than 1.7 million Chinese Weibo post-response pairs and 6 emotion categories. We concatenate the post and the response with the emotion, then mask the response part of the input text character by character to emulate the encoder-decoder framework. We use improved transformer blocks as the core of the model and add regularization methods to alleviate the problems of overcorrection and exposure bias. We introduce a retrieval method into the inference process to improve the semantic relevance of generated responses. The results of a manual evaluation show that the proposed model can respond differently to different emotions, improving the human-computer interaction experience. The model can be applied in many domains, such as automatic reply bots for social applications.
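As a rough illustration of the input construction the abstract describes (concatenating the post with the emotion, then masking the response character by character to emulate an encoder-decoder), the sketch below is a guess at the data preparation step, not the authors' code; the function name, the bracketed emotion-tag format, and the mask token are all assumptions:

```python
def build_training_examples(post, emotion, response, mask_token="[M]"):
    """For each position i in the response, the model input contains the
    post, an emotion tag, the first i response characters, and a mask at
    position i; the target is the character hidden by the mask."""
    prefix = f"{post}[{emotion}]"
    examples = []
    for i, ch in enumerate(response):
        examples.append((prefix + response[:i] + mask_token, ch))
    return examples

# A toy Chinese post-response pair with an emotion label.
for inp, tgt in build_training_examples("好久不见", "happiness", "我也想你"):
    print(inp, "->", tgt)
```

This character-by-character masking turns a single post-response pair into one training example per response character, which is one way a decoder-style objective can be emulated with masked inputs.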
Affiliations:
- Yangyang Zhou: Faculty of Engineering, University of Tokushima, Tokushima 770-8506, Japan
- Fuji Ren: Faculty of Engineering, University of Tokushima, Tokushima 770-8506, Japan

66. Zhang J, Oh YJ, Lange P, Yu Z, Fukuoka Y. Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint. J Med Internet Res 2020; 22:e22845. PMID: 32996892; PMCID: PMC7557439; DOI: 10.2196/22845
Abstract
Background: Chatbots empowered by artificial intelligence (AI) can increasingly engage in natural conversations and build relationships with users. Applying AI chatbots to lifestyle modification programs is one of the promising areas for developing cost-effective and feasible behavior interventions to promote physical activity and a healthy diet.
Objective: The purposes of this perspective paper are to present a brief literature review of chatbot use in promoting physical activity and a healthy diet, describe the AI chatbot behavior change model our research team developed based on extensive interdisciplinary research, and discuss ethical principles and considerations.
Methods: We conducted a preliminary search of studies reporting chatbots for improving physical activity and/or diet in four databases in July 2020. We summarized the characteristics of the chatbot studies and reviewed recent developments in human-AI communication research and innovations in natural language processing. Based on the identified gaps and opportunities, as well as our own clinical and research experience and findings, we propose an AI chatbot behavior change model.
Results: Our review found a lack of theoretical guidance and practical recommendations on designing AI chatbots for lifestyle modification programs. The proposed AI chatbot behavior change model consists of four components that provide such guidance: (1) designing chatbot characteristics and understanding user background; (2) building relational capacity; (3) building persuasive conversational capacity; and (4) evaluating mechanisms and outcomes. The rationale and evidence supporting the design and evaluation choices of this model are presented in this paper.
Conclusions: As AI chatbots become increasingly integrated into various digital communications, our proposed theoretical framework is the first step toward conceptualizing the scope of utilization in health behavior change domains and synthesizing all possible dimensions of chatbot features to inform intervention design and evaluation. There is a need for more interdisciplinary work to continue developing AI techniques that improve a chatbot's relational and persuasive capacities to change physical activity and diet behaviors with strong ethical principles.
Affiliation(s)
- Jingwen Zhang
- Department of Communication, University of California, Davis, Davis, CA, United States
- Department of Public Health Sciences, University of California, Davis, Davis, CA, United States
- Yoo Jung Oh
- Department of Communication, University of California, Davis, Davis, CA, United States
- Patrick Lange
- Department of Computer Science, University of California, Davis, Davis, CA, United States
- Zhou Yu
- Department of Computer Science, University of California, Davis, Davis, CA, United States
- Yoshimi Fukuoka
- Department of Physiological Nursing, University of California, San Francisco, San Francisco, CA, United States
67
Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, Atun R. Conversational Agents in Health Care: Scoping Review and Conceptual Analysis. J Med Internet Res 2020; 22:e17158. [PMID: 32763886 PMCID: PMC7442948 DOI: 10.2196/17158] [Citation(s) in RCA: 187] [Impact Index Per Article: 37.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 04/11/2020] [Accepted: 06/13/2020] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND Conversational agents, also known as chatbots, are computer programs designed to simulate human text or verbal conversations. They are increasingly used in a range of fields, including health care. By enabling better accessibility, personalization, and efficiency, conversational agents have the potential to improve patient care. OBJECTIVE This study aimed to review the current applications, gaps, and challenges in the literature on conversational agents in health care and provide recommendations for their future research, design, and application. METHODS We performed a scoping review. A broad literature search was performed in MEDLINE (Medical Literature Analysis and Retrieval System Online; Ovid), EMBASE (Excerpta Medica database; Ovid), PubMed, Scopus, and Cochrane Central with the search terms "conversational agents," "conversational AI," "chatbots," and associated synonyms. We also searched the gray literature using sources such as the OCLC (Online Computer Library Center) WorldCat database and ResearchGate in April 2019. Reference lists of relevant articles were checked for further articles. Screening and data extraction were performed in parallel by 2 reviewers. The included evidence was analyzed narratively by employing the principles of thematic analysis. RESULTS The literature search yielded 47 study reports (45 articles and 2 ongoing clinical trials) that matched the inclusion criteria. The identified conversational agents were largely delivered via smartphone apps (n=23) and used free text only as the main input (n=19) and output (n=30) modality. Case studies describing chatbot development (n=18) were the most prevalent, and only 11 randomized controlled trials were identified. The 3 most commonly reported conversational agent applications in the literature were treatment and monitoring, health care service support, and patient education. 
CONCLUSIONS The literature on conversational agents in health care is largely descriptive and aimed at treatment and monitoring and health service support. It mostly reports on text-based, artificial intelligence-driven, and smartphone app-delivered conversational agents. There is an urgent need for a robust evaluation of diverse health care conversational agents' formats, focusing on their acceptability, safety, and effectiveness.
Collapse
Affiliation(s)
- Lorainne Tudor Car
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore
- Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, United Kingdom
- Dhakshenya Ardhithy Dhinagaran
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore
- Bhone Myint Kyaw
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore
- Tobias Kowatsch
- Future Health Technologies programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore
- Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Center for Digital Health Interventions, Institute of Technology Management, University of St Gallen, St Gallen, Switzerland
- Shafiq Joty
- School of Computer Sciences and Engineering, Nanyang Technological University Singapore, Singapore
- Yin-Leng Theng
- Centre for Healthy and Sustainable Cities, Nanyang Technological University, Singapore
- Rifat Atun
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, United States
68
Abd-Alrazaq A, Safi Z, Alajlani M, Warren J, Househ M, Denecke K. Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review. J Med Internet Res 2020; 22:e18301. [PMID: 32442157 PMCID: PMC7305563 DOI: 10.2196/18301] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 04/13/2020] [Accepted: 04/15/2020] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND Dialog agents (chatbots) have a long history of application in health care, where they have been used for tasks such as supporting patient self-management and providing counseling. Their use is expected to grow with increasing demands on health systems and improving artificial intelligence (AI) capability. Approaches to the evaluation of health care chatbots, however, appear to be diverse and haphazard, resulting in a potential barrier to the advancement of the field. OBJECTIVE This study aims to identify the technical (nonclinical) metrics used by previous studies to evaluate health care chatbots. METHODS Studies were identified by searching 7 bibliographic databases (eg, MEDLINE and PsycINFO) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. The studies were independently selected by two reviewers who then extracted data from the included studies. Extracted data were synthesized narratively by grouping the identified metrics into categories based on the aspect of chatbots that the metrics evaluated. RESULTS Of the 1498 citations retrieved, 65 studies were included in this review. Chatbots were evaluated using 27 technical metrics, which were related to chatbots as a whole (eg, usability, classifier performance, speed), response generation (eg, comprehensibility, realism, repetitiveness), response understanding (eg, chatbot understanding as assessed by users, word error rate, concept error rate), and esthetics (eg, appearance of the virtual agent, background color, and content). CONCLUSIONS The technical metrics of health chatbot studies were diverse, with survey designs and global usability metrics dominating. The lack of standardization and paucity of objective measures make it difficult to compare the performance of health chatbots and could inhibit advancement of the field. We suggest that researchers more frequently include metrics computed from conversation logs. In addition, we recommend the development of a framework of technical metrics with recommendations for specific circumstances for their inclusion in chatbot studies.
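Among the response-understanding metrics named in this abstract, word error rate is the most mechanically defined: word-level edit distance divided by the number of reference words. As a minimal illustrative sketch (not code from any of the reviewed studies), it can be computed with a standard dynamic-programming edit distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One deletion ("an") and one insertion ("for") against 4 reference words:
print(word_error_rate("book an appointment tomorrow",
                      "book appointment for tomorrow"))  # → 0.5
```

Concept error rate follows the same formula with recognized concepts in place of words.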
Affiliation(s)
- Alaa Abd-Alrazaq
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Zeineb Safi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mohannad Alajlani
- Institute of Digital Healthcare, University of Warwick, Coventry, United Kingdom
- Jim Warren
- School of Computer Science, University of Auckland, Auckland, New Zealand
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Kerstin Denecke
- Institute for Medical Informatics, Bern University of Applied Sciences, Bern, Switzerland
69
Abstract
The concepts of empathy and care are the subject of numerous publications, testifying to a surge of interest in the topic. What are the possibilities for teaching empathy to medical and nursing students? A working definition and a review of the literature help identify the effects of empathy in care and the opportunities for teaching empathy to students and practicing professionals.
Affiliation(s)
- Éric Maeker
- Association Emp@thies, pour l'humanisation des soins www.empathies.fr, 12 rue Jean-Jaurès Apt B22/23, 62223 Anzin-Saint-Aubin, France.
- Bérengère Maeker-Poquet
- Association Emp@thies, pour l'humanisation des soins www.empathies.fr, 12 rue Jean-Jaurès Apt B22/23, 62223 Anzin-Saint-Aubin, France
70
Hauser-Ulrich S, Künzli H, Meier-Peterhans D, Kowatsch T. A Smartphone-Based Health Care Chatbot to Promote Self-Management of Chronic Pain (SELMA): Pilot Randomized Controlled Trial. JMIR Mhealth Uhealth 2020; 8:e15806. [PMID: 32242820 PMCID: PMC7165314 DOI: 10.2196/15806] [Citation(s) in RCA: 101] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 12/01/2019] [Accepted: 01/26/2020] [Indexed: 01/01/2023] Open
Abstract
BACKGROUND Ongoing pain is one of the most common diseases and has major physical, psychological, social, and economic impacts. A mobile health intervention utilizing a fully automated text-based health care chatbot (TBHC) may offer an innovative way not only to deliver coping strategies and psychoeducation for pain management but also to build a working alliance between a participant and the TBHC. OBJECTIVE The objectives of this study are twofold: (1) to describe the design and implementation of the chatbot painSELfMAnagement (SELMA), a 2-month smartphone-based cognitive behavior therapy (CBT) TBHC intervention for pain self-management in patients with ongoing or cyclic pain, and (2) to present findings from a pilot randomized controlled trial, in which effectiveness, influence of intention to change behavior, pain duration, working alliance, acceptance, and adherence were evaluated. METHODS Participants were recruited online and in collaboration with pain experts, and were randomized to interact with SELMA for 8 weeks either every day or every other day concerning CBT-based pain management (n=59), or weekly concerning content not related to pain management (n=43). Pain-related impairment (primary outcome), general well-being, pain intensity, and the bond scale of working alliance were measured at baseline and postintervention. Intention to change behavior and pain duration were measured at baseline only, and acceptance was assessed postintervention via self-reporting instruments. Adherence was assessed via usage data. RESULTS From May 2018 to August 2018, 311 adults downloaded the SELMA app, 102 of whom consented to participate and met the inclusion criteria. The average age of the women (88/102, 86.4%) and men (14/102, 13.6%) participating was 43.7 (SD 12.7) years. The groups did not differ at baseline with respect to any demographic or clinical variable.
The intervention group reported no significant change in pain-related impairment (P=.68) compared to the control group postintervention. The intention to change behavior was positively related to pain-related impairment (P=.01) and pain intensity (P=.01). Working alliance with the TBHC SELMA was comparable to that obtained in guided internet therapies with human coaches. Participants enjoyed using the app, perceiving it as useful and easy to use. Participants of the intervention group replied with an average answer ratio of 0.71 (SD 0.20) to 200 (SD 58.45) conversations initiated by SELMA. Participants' comments revealed an appreciation of the empathic and responsible interaction with the TBHC SELMA. A main criticism was that there was no option to enter free text for the patients' own comments. CONCLUSIONS SELMA is feasible, as revealed mainly by positive feedback and valuable suggestions for future revisions. For example, the participants' intention to change behavior or a more homogenous sample (eg, with a specific type of chronic pain) should be considered in further tailoring of SELMA. TRIAL REGISTRATION German Clinical Trials Register DRKS00017147; https://tinyurl.com/vx6n6sx, Swiss National Clinical Trial Portal: SNCTP000002712; https://www.kofam.ch/de/studienportal/suche/70582/studie/46326.
Affiliation(s)
- Sandra Hauser-Ulrich
- Department of Applied Psychology, University of Applied Sciences Zurich, Zurich, Switzerland
- Hansjörg Künzli
- Department of Applied Psychology, University of Applied Sciences Zurich, Zurich, Switzerland
- Tobias Kowatsch
- Center for Digital Health Interventions, Institute of Technology Management, University of St Gallen, St Gallen, Switzerland
- Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
71
Abd-alrazaq A, Safi Z, Alajlani M, Warren J, Househ M, Denecke K. Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review (Preprint). [DOI: 10.2196/preprints.18301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
72
Kocaballi AB, Quiroz JC, Rezazadegan D, Berkovsky S, Magrabi F, Coiera E, Laranjo L. Responses of Conversational Agents to Health and Lifestyle Prompts: Investigation of Appropriateness and Presentation Structures. J Med Internet Res 2020; 22:e15823. [PMID: 32039810 PMCID: PMC7055771 DOI: 10.2196/15823] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 11/21/2019] [Accepted: 12/16/2019] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Conversational agents (CAs) are systems that mimic human conversations using text or spoken language. Their widely used examples include voice-activated systems such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The use of CAs in health care has been on the rise, but concerns about their potential safety risks often remain understudied. OBJECTIVE This study aimed to analyze how commonly available, general-purpose CAs on smartphones and smart speakers respond to health and lifestyle prompts (questions and open-ended statements) by examining their responses in terms of content and structure alike. METHODS We followed a piloted script to present health- and lifestyle-related prompts to 8 CAs. The CAs' responses were assessed for their appropriateness on the basis of the prompt type: responses to safety-critical prompts were deemed appropriate if they included a referral to a health professional or service, whereas responses to lifestyle prompts were deemed appropriate if they provided relevant information to address the problem prompted. The response structure was also examined according to information sources (Web search-based or precoded), response content style (informative and/or directive), confirmation of prompt recognition, and empathy. RESULTS The 8 studied CAs provided in total 240 responses to 30 prompts. They collectively responded appropriately to 41% (46/112) of the safety-critical and 39% (37/96) of the lifestyle prompts. The ratio of appropriate responses deteriorated when safety-critical prompts were rephrased or when the agent used a voice-only interface. The appropriate responses included mostly directive content and empathy statements for the safety-critical prompts and a mix of informative and directive content for the lifestyle prompts. 
CONCLUSIONS Our results suggest that the commonly available, general-purpose CAs on smartphones and smart speakers with unconstrained natural language interfaces are limited in their ability to advise on both the safety-critical health prompts and lifestyle prompts. Our study also identified some response structures the CAs employed to present their appropriate responses. Further investigation is needed to establish guidelines for designing suitable response structures for different prompt types.
Affiliation(s)
- Ahmet Baki Kocaballi
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Juan C Quiroz
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Dana Rezazadegan
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Shlomo Berkovsky
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Farah Magrabi
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Enrico Coiera
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Liliana Laranjo
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- NOVA National School of Public Health, Public Health Research Centre, Universidade NOVA de Lisboa, Lisbon, Portugal
- NOVA Medical School, Comprehensive Health Research Center, Universidade NOVA de Lisboa, Lisbon, Portugal
73
de Gennaro M, Krumhuber EG, Lucas G. Effectiveness of an Empathic Chatbot in Combating Adverse Effects of Social Exclusion on Mood. Front Psychol 2020; 10:3061. [PMID: 32038415 PMCID: PMC6989433 DOI: 10.3389/fpsyg.2019.03061] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Accepted: 12/26/2019] [Indexed: 11/13/2022] Open
Abstract
From past research it is well known that social exclusion has detrimental consequences for mental health. To deal with these adverse effects, socially excluded individuals frequently turn to other humans for emotional support. While chatbots can elicit social and emotional responses on the part of the human interlocutor, their effectiveness in the context of social exclusion has not been investigated. In the present study, we examined whether an empathic chatbot can serve as a buffer against the adverse effects of social ostracism. After experiencing exclusion on social media, participants were randomly assigned to either talk with an empathetic chatbot about it (e.g., “I’m sorry that this happened to you”) or a control condition where their responses were merely acknowledged (e.g., “Thank you for your feedback”). Replicating previous research, results revealed that experiences of social exclusion dampened the mood of participants. Interacting with an empathetic chatbot, however, appeared to have a mitigating impact. In particular, participants in the chatbot intervention condition reported higher mood than those in the control condition. Theoretical, methodological, and practical implications, as well as directions for future research, are discussed.
Affiliation(s)
- Mauro de Gennaro
- Department of Experimental Psychology, University College London, London, United Kingdom
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
- Gale Lucas
- Institute for Creative Technologies, University of Southern California, Los Angeles, CA, United States
74
Thompson D, Baranowski T. Chatbots as extenders of pediatric obesity intervention: an invited commentary on "Feasibility of Pediatric Obesity & Pre-Diabetes Treatment Support through Tess, the AI Behavioral Coaching Chatbot". Transl Behav Med 2020; 9:448-450. [PMID: 31094432 DOI: 10.1093/tbm/ibz065] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Despite our best efforts, pediatric obesity remains an important public health issue. Although many different approaches to address this condition have been utilized, few have achieved long-term success. Technology is increasingly being explored as a convenient and accessible method for delivering behavioral interventions. Stephens and colleagues report the feasibility of using a behavioral coaching social chatbot, Tess, to extend a multicomponent pediatric obesity intervention for adolescents. We examine the pros and cons of this approach. Although social chatbots offer an interesting and novel method for promoting round-the-clock support, important issues and decisions must be carefully considered during the design phase to help ensure a safe environment for a vulnerable population.
Affiliation(s)
- Debbe Thompson
- USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA
- Tom Baranowski
- USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA
75
Miner AS, Shah N, Bullock KD, Arnow BA, Bailenson J, Hancock J. Key Considerations for Incorporating Conversational AI in Psychotherapy. Front Psychiatry 2019; 10:746. [PMID: 31681047 PMCID: PMC6813224 DOI: 10.3389/fpsyt.2019.00746] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/09/2018] [Accepted: 09/17/2019] [Indexed: 01/25/2023] Open
Abstract
Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe, it should not be used, and if it isn't trusted, it won't be. In order to assess safety, trust, interfaces, procedures, and system-level workflows, oversight and collaboration are needed among AI systems, patients, clinicians, and administrators.
Affiliation(s)
- Adam S. Miner
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, United States
- Department of Communication, Stanford University, Stanford, CA, United States
- Nigam Shah
- Stanford Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, CA, United States
- Kim D. Bullock
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- Bruce A. Arnow
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- Jeremy Bailenson
- Department of Communication, Stanford University, Stanford, CA, United States
- Jeff Hancock
- Department of Communication, Stanford University, Stanford, CA, United States
76
Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology. COMPUTERS IN HUMAN BEHAVIOR 2019. [DOI: 10.1016/j.chb.2019.01.021] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
77
Bibault JE, Chaix B, Nectoux P, Pienkowsky A, Guillemasse A, Brouard B. Healthcare ex Machina: Are conversational agents ready for prime time in oncology? Clin Transl Radiat Oncol 2019; 16:55-59. [PMID: 31008379 PMCID: PMC6454131 DOI: 10.1016/j.ctro.2019.04.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2019] [Revised: 04/01/2019] [Accepted: 04/03/2019] [Indexed: 12/04/2022] Open
Abstract
Chatbots, also known as conversational agents or digital assistants, are artificial intelligence–driven software programs designed to interact with people in a conversational manner. They are often used for user-friendly customer-service triaging. In healthcare, chatbots can create bidirectional information exchange with patients, which could be leveraged for follow-up, screening, treatment adherence or data-collection. They can be deployed over various modalities, such as text-based services (text messaging, mobile applications, chat rooms) on any website or mobile applications, or audio services, such as Siri, Alexa, Cortana or Google Assistant. Potential applications are very promising, particularly in the field of oncology. In this review, we discuss the available publications and applications and the ongoing trials in that setting.
Affiliation(s)
- Jean-Emmanuel Bibault
- Department of Radiation Oncology, Hôpital Européen Georges Pompidou, AP-HP, Paris, France
- Benjamin Chaix
- WeFight Inc., Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, Paris, France
- ENT Department, University Montpellier 1, Hôpital Gui-de-Chauliac, Montpellier, France
- Pierre Nectoux
- WeFight Inc., Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, Paris, France
- Arthur Pienkowsky
- WeFight Inc., Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, Paris, France
- Arthur Guillemasse
- WeFight Inc., Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, Paris, France
- Benoît Brouard
- WeFight Inc., Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, Paris, France