1
Wu PF, Summers C, Panesar A, Kaura A, Zhang L. AI Hesitancy and Acceptability-Perceptions of AI Chatbots for Chronic Health Management and Long COVID Support: Survey Study. JMIR Hum Factors 2024; 11:e51086. [PMID: 39045815; PMCID: PMC11287232; DOI: 10.2196/51086]
Abstract
Background Artificial intelligence (AI) chatbots have the potential to assist individuals with chronic health conditions by providing tailored information, monitoring symptoms, and offering mental health support. Despite their potential benefits, research on public attitudes toward health care chatbots is still limited. To effectively support individuals with long-term health conditions like long COVID (or post-COVID-19 condition), it is crucial to understand their perspectives and preferences regarding the use of AI chatbots. Objective This study had two main objectives: (1) to provide insights into AI chatbot acceptance among people with chronic health conditions, particularly adults older than 55 years, and (2) to explore perceptions of using AI chatbots for health self-management and long COVID support. Methods A web-based survey study was conducted between January and March 2023, specifically targeting individuals with diabetes and other chronic conditions. This population was chosen due to their potential awareness of and ability to self-manage their condition. The survey aimed to capture data at multiple intervals, taking into consideration the public launch of ChatGPT, which could have impacted public opinions during the project timeline. The survey received 1310 clicks and garnered 900 responses, resulting in a total of 888 usable data points. Results Although past experience with chatbots (P<.001, 95% CI 0.110-0.302) and online information seeking (P<.001, 95% CI 0.039-0.084) were strong predictors of respondents' future adoption of health chatbots, respondents were generally skeptical or unsure about using AI chatbots for health care purposes. Fewer than one-third of the respondents (n=203, 30.1%) indicated that they were likely to use a health chatbot in the next 12 months if available. Most were uncertain about a chatbot's capability to provide accurate medical advice. 
However, people seemed more receptive to using voice-based chatbots for mental well-being, health data collection, and analysis. Half of the respondents with long COVID showed interest in using emotionally intelligent chatbots. Conclusions AI hesitancy is not uniform across all health domains and user groups. Despite persistent AI hesitancy, there are promising opportunities for chatbots to offer support for chronic conditions in areas of lifestyle enhancement and mental well-being, potentially through voice-based user interfaces.
Affiliation(s)
- Philip Fei Wu
- School of Business and Management, Royal Holloway, University of London, Egham, United Kingdom
- Charlotte Summers
- DDM Health, Coventry, United Kingdom
- Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Arjun Panesar
- DDM Health, Coventry, United Kingdom
- Warwick Medical School, University of Warwick, Coventry, United Kingdom
- Amit Kaura
- DDM Health, Coventry, United Kingdom
- Imperial College Healthcare NHS Trust, London, United Kingdom
- Li Zhang
- Department of Computer Science, Royal Holloway, University of London, Egham, United Kingdom
2
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024; 26:e56930. [PMID: 39042446; PMCID: PMC11303905; DOI: 10.2196/56930]
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. RESULTS The review categorized chatbot roles into 2 themes: (1) delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion; and (2) provision of administrative assistance to health care providers. User groups spanned patients with chronic conditions and patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were likewise classified into 2 themes: (1) improvement of health care quality and efficiency and (2) cost-effectiveness in health care delivery. 
The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
3
Moons P. For better or for worse: when chatbots influence human emotions and behaviours. Eur J Cardiovasc Nurs 2024; 23:e49-e51. [PMID: 37791604; DOI: 10.1093/eurjcn/zvad098]
Affiliation(s)
- Philip Moons
- KU Leuven Department of Public Health and Primary Care, KU Leuven-University of Leuven, Kapucijnenvoer 35 PB7001, 3000 Leuven, Belgium
- Institute of Health and Care Sciences, University of Gothenburg, Arvid Wallgrens backe 1, 413 46 Gothenburg, Sweden
- Department of Paediatrics and Child Health, University of Cape Town, Klipfontein Rd, Rondebosch, 7700 Cape Town, South Africa
4
Ferrario A, Sedlakova J, Trachsel M. The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis. JMIR Ment Health 2024; 11:e56569. [PMID: 38958218; PMCID: PMC11231450; DOI: 10.2196/56569]
Abstract
Large language model (LLM)-powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on the reflection of what it means to simulate "human-like" features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualization of the robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.
Affiliation(s)
- Andrea Ferrario
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Mobiliar Lab for Analytics at ETH, ETH Zurich, Zurich, Switzerland
- Jana Sedlakova
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Digital Society Initiative, University of Zurich, Zurich, Switzerland
- Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland
- Manuel Trachsel
- University of Basel, Basel, Switzerland
- University Hospital Basel, Basel, Switzerland
- University Psychiatric Clinics Basel, Basel, Switzerland
5
Zhong W, Luo J, Zhang H. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. J Affect Disord 2024; 356:459-469. [PMID: 38631422; DOI: 10.1016/j.jad.2024.04.057]
Abstract
BACKGROUND The emergence of artificial intelligence-based chatbots has revolutionized the field of clinical psychology and psychotherapy, granting individuals unprecedented access to professional assistance and overcoming time constraints and geographical limitations with cost-effective convenience. However, despite this potential, there has been a noticeable gap in the literature regarding their effectiveness in addressing common mental health issues like depression and anxiety. This meta-analysis aims to evaluate the efficacy of AI-based chatbots in treating these conditions. METHODS A systematic search was executed across multiple databases, including PubMed, Cochrane Library, Web of Science, PsycINFO, and Embase, on April 4, 2024. The effect size of treatment efficacy was calculated using the standardized mean difference (Hedges' g). Quality assessment measures were implemented to ensure trial quality. RESULTS In our analysis of 18 randomized controlled trials involving 3477 participants, we observed noteworthy improvements in depression (g = -0.26, 95% CI = -0.34, -0.17) and anxiety (g = -0.19, 95% CI = -0.29, -0.09) symptoms. The most significant benefits were evident after 8 weeks of treatment. However, at the three-month follow-up, no substantial effects were detected for either condition. LIMITATIONS Several limitations should be considered, including the lack of diversity in the study populations, variations in chatbot design, and the use of different psychotherapeutic approaches. These factors may limit the generalizability of our findings. CONCLUSION This meta-analysis highlights the promising role of AI-based chatbot interventions in alleviating depressive and anxiety symptoms among adults. Our results indicate that these interventions can yield substantial improvements over a relatively brief treatment period.
Affiliation(s)
- Wenjun Zhong
- Center for Studies of Education and Psychology of Ethnic Minorities in Southwest China, Southwest University, Chongqing, China
- Jianghua Luo
- Center for Studies of Education and Psychology of Ethnic Minorities in Southwest China, Southwest University, Chongqing, China
- Hong Zhang
- Center for Psychological Health Education, Xinjiang University of Finance & Economics, Urumqi, Xinjiang, China
6
Ulrich S, Lienhard N, Künzli H, Kowatsch T. A Chatbot-Delivered Stress Management Coaching for Students (MISHA App): Pilot Randomized Controlled Trial. JMIR Mhealth Uhealth 2024; 12:e54945. [PMID: 38922677; PMCID: PMC11237786; DOI: 10.2196/54945]
Abstract
BACKGROUND Globally, students face increasing mental health challenges, including elevated stress levels and declining well-being, leading to academic performance issues and mental health disorders. However, due to stigma and symptom underestimation, students rarely seek effective stress management solutions. Conversational agents in the health sector have shown promise in reducing stress, depression, and anxiety. Nevertheless, research on their effectiveness for students with stress remains limited. OBJECTIVE This study aims to develop a conversational agent-delivered stress management coaching intervention for students, called MISHA, and to evaluate its effectiveness, engagement, and acceptance. METHODS In an unblinded randomized controlled trial, Swiss students experiencing stress were recruited on the web. Using a 1:1 randomization ratio, participants (N=140) were allocated to either the intervention or waitlist control group. Treatment effects on the primary outcome, perceived stress, and on secondary outcomes, including depression, anxiety, psychosomatic symptoms, and active coping, were self-assessed and evaluated using repeated measures ANOVA and generalized estimating equations. RESULTS The per-protocol analysis revealed evidence of improvements in stress, depression, and somatic symptoms with medium effect sizes (Cohen d=-0.36 to Cohen d=-0.60), while anxiety and active coping did not change (Cohen d=-0.29 and Cohen d=0.13). In the intention-to-treat analysis, similar results were found, indicating reduced stress (β estimate=-0.13, 95% CI -0.20 to -0.05; P<.001), depressive symptoms (β estimate=-0.23, 95% CI -0.38 to -0.08; P=.003), and psychosomatic symptoms (β estimate=-0.16, 95% CI -0.27 to -0.06; P=.003), while anxiety and active coping did not change. Overall, 60% (42/70) of the participants in the intervention group completed the coaching by completing the postintervention survey. 
They particularly appreciated the quality, quantity, credibility, and visual representation of information. While individual customization was rated the lowest, the target group fitting was perceived as high. CONCLUSIONS Findings indicate that MISHA is feasible, acceptable, and effective in reducing perceived stress among students in Switzerland. Future research is needed with different populations, for example, in students with high stress levels or compared to active controls. TRIAL REGISTRATION German Clinical Trials Register DRKS 00030004; https://drks.de/search/en/trial/DRKS00030004.
Affiliation(s)
- Sandra Ulrich
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Natascha Lienhard
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Hansjörg Künzli
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Tobias Kowatsch
- Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland
- School of Medicine, University of St. Gallen, St. Gallen, Switzerland
- Centre for Digital Health Interventions, Department of Management, Technology and Economics, ETH Zurich, Zurich, Switzerland
7
Linardon J, Torous J, Firth J, Cuijpers P, Messer M, Fuller-Tyszkiewicz M. Current evidence on the efficacy of mental health smartphone apps for symptoms of depression and anxiety. A meta-analysis of 176 randomized controlled trials. World Psychiatry 2024; 23:139-149. [PMID: 38214614; PMCID: PMC10785982; DOI: 10.1002/wps.21183]
Abstract
The mental health care available for depression and anxiety has recently undergone a major technological revolution, with growing interest towards the potential of smartphone apps as a scalable tool to treat these conditions. Since the last comprehensive meta-analysis in 2019 established positive yet variable effects of apps on depressive and anxiety symptoms, more than 100 new randomized controlled trials (RCTs) have been carried out. We conducted an updated meta-analysis with the objectives of providing more precise estimates of effects, quantifying generalizability from this evidence base, and understanding whether major app and trial characteristics moderate effect sizes. We included 176 RCTs that aimed to treat depressive or anxiety symptoms. Apps had overall significant although small effects on symptoms of depression (N=33,567, g=0.28, p<0.001; number needed to treat, NNT=11.5) and generalized anxiety (N=22,394, g=0.26, p<0.001, NNT=12.4) as compared to control groups. These effects were robust at different follow-ups and after removing small sample and higher risk of bias trials. There was less variability in outcome scores at post-test in app compared to control conditions (ratio of variance, RoV=-0.14, 95% CI: -0.24 to -0.05 for depressive symptoms; RoV=-0.21, 95% CI: -0.31 to -0.12 for generalized anxiety symptoms). Effect sizes for depression were significantly larger when apps incorporated cognitive behavioral therapy (CBT) features or included chatbot technology. Effect sizes for anxiety were significantly larger when trials had generalized anxiety as a primary target and administered a CBT app or an app with mood monitoring features. 
We found evidence of moderate effects of apps on social anxiety (g=0.52) and obsessive-compulsive (g=0.51) symptoms, a small effect on post-traumatic stress symptoms (g=0.12), a large effect on acrophobia symptoms (g=0.90), and a non-significant negative effect on panic symptoms (g=-0.12), although these results should be considered with caution, because most trials had high risk of bias and were based on small sample sizes. We conclude that apps have overall small but significant effects on symptoms of depression and generalized anxiety, and that specific features of apps - such as CBT or mood monitoring features and chatbot technology - are associated with larger effect sizes.
Affiliation(s)
- Jake Linardon
- School of Psychology, Deakin University, Geelong, VIC, Australia
- Center for Social and Early Emotional Development, Deakin University, Burwood, VIC, Australia
- John Torous
- Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Joseph Firth
- Division of Psychology and Mental Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Greater Manchester Mental Health NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
- Pim Cuijpers
- Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- International Institute for Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Mariel Messer
- School of Psychology, Deakin University, Geelong, VIC, Australia
- Matthew Fuller-Tyszkiewicz
- School of Psychology, Deakin University, Geelong, VIC, Australia
- Center for Social and Early Emotional Development, Deakin University, Burwood, VIC, Australia
8
Ulrich S, Gantenbein AR, Zuber V, Von Wyl A, Kowatsch T, Künzli H. Development and Evaluation of a Smartphone-Based Chatbot Coach to Facilitate a Balanced Lifestyle in Individuals With Headaches (BalanceUP App): Randomized Controlled Trial. J Med Internet Res 2024; 26:e50132. [PMID: 38265863; PMCID: PMC10851123; DOI: 10.2196/50132]
Abstract
BACKGROUND Primary headaches, including migraine and tension-type headaches, are widespread and have a social, physical, mental, and economic impact. Among the key components of treatment are behavior interventions such as lifestyle modification. Scalable conversational agents (CAs) have the potential to deliver behavior interventions at a low threshold. To our knowledge, there is no evidence of behavioral interventions delivered by CAs for the treatment of headaches. OBJECTIVE This study has 2 aims. The first aim was to develop and test a smartphone-based coaching intervention (BalanceUP) for people experiencing frequent headaches, delivered by a CA and designed to improve mental well-being using various behavior change techniques. The second aim was to evaluate the effectiveness of BalanceUP by comparing the intervention and waitlist control groups and assess the engagement and acceptance of participants using BalanceUP. METHODS In an unblinded randomized controlled trial, adults with frequent headaches were recruited on the web and in collaboration with experts and allocated to either a CA intervention (BalanceUP) or a control condition. The effects of the treatment on changes in the primary outcome of the study, that is, mental well-being (as measured by the Patient Health Questionnaire Anxiety and Depression Scale), and secondary outcomes (eg, psychosomatic symptoms, stress, headache-related self-efficacy, intention to change behavior, presenteeism and absenteeism, and pain coping) were analyzed using linear mixed models and Cohen d. Primary and secondary outcomes were self-assessed before and after the intervention, and acceptance was assessed after the intervention. Engagement was measured during the intervention using self-reports and usage data. RESULTS A total of 198 participants (mean age 38.7, SD 12.14 y; n=172, 86.9% women) participated in the study (intervention group: n=110; waitlist control group: n=88). 
After the intervention, the intention-to-treat analysis revealed evidence for improved well-being (treatment: β estimate=-3.28, 95% CI -5.07 to -1.48) with moderate between-group effects (Cohen d=-0.66, 95% CI -0.99 to -0.33) in favor of the intervention group. We also found evidence of reduced somatic symptoms, perceived stress, and absenteeism and presenteeism, as well as improved headache management self-efficacy, application of behavior change techniques, and pain coping skills, with effects ranging from medium to large (Cohen d=0.43-1.05). Overall, 64.8% (118/182) of the participants used coaching as intended by engaging throughout the coaching and completing the outro. CONCLUSIONS BalanceUP was well accepted, and the results suggest that coaching delivered by a CA can be effective in reducing the burden of people who experience headaches by improving their well-being. TRIAL REGISTRATION German Clinical Trials Register DRKS00017422; https://trialsearch.who.int/Trial2.aspx?TrialID=DRKS00017422.
Affiliation(s)
- Sandra Ulrich
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Andreas R Gantenbein
- Pain and Research Unit, ZURZACH Care, Bad Zurzach, Switzerland
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
- Viktor Zuber
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Agnes Von Wyl
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
- Tobias Kowatsch
- Institute for Implementation Science in Health Care, University of Zurich, Zurich, Switzerland
- School of Medicine, University of St. Gallen, St. Gallen, Switzerland
- Centre for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Hansjörg Künzli
- School of Applied Psychology, Zurich University of Applied Sciences, Zurich, Switzerland
9
Li H, Zhang R, Lee YC, Kraut RE, Mohr DC. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit Med 2023; 6:236. [PMID: 38114588; PMCID: PMC10730549; DOI: 10.1038/s41746-023-00979-5]
Abstract
Conversational artificial intelligence (AI), particularly AI-based conversational agents (CAs), is gaining traction in mental health care. Despite their growing usage, there is a scarcity of comprehensive evaluations of their impact on mental health and well-being. This systematic review and meta-analysis aims to fill this gap by synthesizing evidence on the effectiveness of AI-based CAs in improving mental health and the factors influencing their effectiveness and user experience. Twelve databases were searched for experimental studies of AI-based CAs' effects on mental illnesses and psychological well-being published before May 26, 2023. Out of 7834 records, 35 eligible studies were identified for systematic review, of which 15 randomized controlled trials were included for meta-analysis. The meta-analysis revealed that AI-based CAs significantly reduce symptoms of depression (Hedges' g 0.64 [95% CI 0.17-1.12]) and distress (Hedges' g 0.7 [95% CI 0.18-1.22]). These effects were more pronounced in CAs that are multimodal, generative AI-based, integrated with mobile/instant messaging apps, and targeting clinical/subclinical and elderly populations. However, CA-based interventions showed no significant improvement in overall psychological well-being (Hedges' g 0.32 [95% CI -0.13 to 0.78]). User experience with AI-based CAs was largely shaped by the quality of human-AI therapeutic relationships, content engagement, and effective communication. These findings underscore the potential of AI-based CAs in addressing mental health issues. Future research should investigate the underlying mechanisms of their effectiveness, assess long-term effects across various mental health outcomes, and evaluate the safe integration of large language models (LLMs) in mental health care.
Affiliation(s)
- Han Li
- Department of Communications and New Media, National University of Singapore, Singapore, 117416, Singapore
- Renwen Zhang
- Department of Communications and New Media, National University of Singapore, Singapore, 117416, Singapore
- Yi-Chieh Lee
- Department of Computer Science, National University of Singapore, Singapore, 117416, Singapore
- Robert E Kraut
- Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- David C Mohr
- Center for Behavioral Intervention Technologies, Department of Preventive Medicine, Northwestern University, Chicago, IL, 60611, USA
10
Garcia Valencia OA, Suppadungsuk S, Thongprayoon C, Miao J, Tangpanithandee S, Craici IM, Cheungpasitporn W. Ethical Implications of Chatbot Utilization in Nephrology. J Pers Med 2023; 13:1363. [PMID: 37763131; PMCID: PMC10532744; DOI: 10.3390/jpm13091363]
Abstract
This comprehensive review critically examines the ethical implications associated with integrating chatbots into nephrology, aiming to identify concerns, propose policies, and offer potential solutions. Acknowledging the transformative potential of chatbots in healthcare, responsible implementation guided by ethical considerations is of the utmost importance. The review underscores the significance of establishing robust guidelines for data collection, storage, and sharing to safeguard privacy and ensure data security. Future research should prioritize defining appropriate levels of data access, exploring anonymization techniques, and implementing encryption methods. Transparent data usage practices and obtaining informed consent are fundamental ethical considerations. Effective security measures, including encryption technologies and secure data transmission protocols, are indispensable for maintaining the confidentiality and integrity of patient data. To address potential biases and discrimination, the review suggests regular algorithm reviews, diversity strategies, and ongoing monitoring. Enhancing the clarity of chatbot capabilities, developing user-friendly interfaces, and establishing explicit consent procedures are essential for informed consent. Striking a balance between automation and human intervention is vital to preserve the doctor-patient relationship. Cultural sensitivity and multilingual support should be considered through chatbot training. To ensure ethical chatbot utilization in nephrology, it is imperative to prioritize the development of comprehensive ethical frameworks encompassing data handling, security, bias mitigation, informed consent, and collaboration. Continuous research and innovation in this field are crucial for maximizing the potential of chatbot technology and ultimately improving patient outcomes.
Affiliation(s)
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawit Tangpanithandee
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Iasmina M. Craici
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
|
11
|
Grodniewicz JP, Hohol M. Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. Front Psychiatry 2023; 14:1190084. [PMID: 37324824 PMCID: PMC10267322 DOI: 10.3389/fpsyt.2023.1190084] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/15/2023] [Indexed: 06/17/2023] Open
Abstract
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, has triggered discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on the path to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing with only relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully fledged psychotherapy until so-called "general" or "human-like" AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on the path to AI-based psychotherapy.
|