1
Sanjeewa R, Iyer R, Apputhurai P, Wickramasinghe N, Meyer D. Empathic Conversational Agent Platform Designs and Their Evaluation in the Context of Mental Health: Systematic Review. JMIR Ment Health 2024;11:e58974. PMID: 39250799; PMCID: PMC11420590; DOI: 10.2196/58974.
Abstract
BACKGROUND The demand for mental health (MH) services in the community continues to exceed supply. At the same time, technological developments make the use of artificial intelligence-empowered conversational agents (CAs) a real possibility to help fill this gap.

OBJECTIVE The objective of this review was to identify existing empathic CA design architectures within the MH care sector and to assess their technical performance in detecting and responding to user emotions in terms of classification accuracy. In addition, the approaches used to evaluate empathic CAs within the MH care sector in terms of their acceptability to users were considered. Finally, this review aimed to identify limitations and future directions for empathic CAs in MH care.

METHODS A systematic literature search was conducted across 6 academic databases to identify journal articles and conference proceedings using search terms covering 3 topics: "conversational agents," "mental health," and "empathy." Only studies discussing CA interventions for the MH care domain were eligible for this review, with both textual and vocal characteristics considered as possible data inputs. Quality was assessed using appropriate risk of bias and quality tools.

RESULTS A total of 19 articles met all inclusion criteria. Most (12/19, 63%) of these empathic CA designs in MH care were machine learning (ML) based, with 26% (5/19) hybrid engines and 11% (2/19) rule-based systems. Among the ML-based CAs, 47% (9/19) used neural networks, with transformer-based architectures being well represented (7/19, 37%). The remaining 16% (3/19) of the ML models were unspecified. Technical assessments of these CAs focused on response accuracies and their ability to recognize, predict, and classify user emotions. While single-engine CAs demonstrated good accuracy, the hybrid engines achieved higher accuracy and provided more nuanced responses. Of the 19 studies, human evaluations were conducted in 16 (84%), with only 5 (26%) focusing directly on the CA's empathic features. All these papers used self-reports for measuring empathy, including single or multiple (scale) ratings or qualitative feedback from in-depth interviews. Only 1 (5%) paper included evaluations by both CA users and experts, adding more value to the process.

CONCLUSIONS The integration of CA design and its evaluation is crucial to produce empathic CAs. Future studies should focus on using a clear definition of empathy and standardized scales for empathy measurement, ideally including expert assessment. In addition, the diversity in measures used for technical assessment and evaluation poses a challenge for comparing CA performances, which future research should also address. However, CAs with good technical and empathic performance are already available to users of MH care services, showing promise for new applications, such as helpline services.
Affiliation(s)
- Ruvini Sanjeewa
- School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
- Ravi Iyer
- School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
- Nilmini Wickramasinghe
- School of Computing, Engineering and Mathematical Sciences, La Trobe University, Bundoora, Australia
- Denny Meyer
- School of Health Sciences, Swinburne University of Technology, Hawthorn, Australia
2
Xian X, Chang A, Xiang YT, Liu MT. Debate and Dilemmas Regarding Generative AI in Mental Health Care: Scoping Review. Interact J Med Res 2024;13:e53672. PMID: 39133916; PMCID: PMC11347908; DOI: 10.2196/53672.
Abstract
BACKGROUND Mental disorders have ranked among the top 10 prevalent causes of burden on a global scale. Generative artificial intelligence (GAI) has emerged as a promising and innovative technological advancement with significant potential in the field of mental health care. Nevertheless, there is a scarcity of research dedicated to examining and understanding the application landscape of GAI within this domain.

OBJECTIVE This review aims to outline the current state of GAI knowledge and identify its key uses in the mental health domain by consolidating relevant literature.

METHODS Records published between 2013 and 2023 were searched within 8 reputable sources: the Web of Science, PubMed, IEEE Xplore, medRxiv, bioRxiv, Google Scholar, CNKI, and Wanfang databases. Our focus was on original, empirical research published in English or Chinese that uses GAI technologies to benefit mental health. For an exhaustive search, we also checked the studies cited by relevant literature. Two reviewers were responsible for the data selection process, and all the extracted data were synthesized and summarized for brief and in-depth analyses depending on the GAI approaches used (traditional retrieval and rule-based techniques vs advanced GAI techniques).

RESULTS In this review of 144 articles, 44 (30.6%) met the inclusion criteria for detailed analysis. Six key uses of advanced GAI emerged: mental disorder detection, counseling support, therapeutic application, clinical training, clinical decision-making support, and goal-driven optimization. Advanced GAI systems have mainly focused on therapeutic applications (n=19, 43%) and counseling support (n=13, 30%), with clinical training being the least common. Most studies (n=28, 64%) focused broadly on mental health, while specific conditions such as anxiety (n=1, 2%), bipolar disorder (n=2, 5%), eating disorders (n=1, 2%), posttraumatic stress disorder (n=2, 5%), and schizophrenia (n=1, 2%) received limited attention. Despite prevalent use, the efficacy of ChatGPT in the detection of mental disorders remains insufficient. In addition, 100 articles on traditional GAI approaches were found, indicating diverse areas where advanced GAI could enhance mental health care.

CONCLUSIONS This study provides a comprehensive overview of the use of GAI in mental health care, which serves as a valuable guide for future research, practical applications, and policy development in this domain. While GAI demonstrates promise in augmenting mental health care services, its inherent limitations emphasize its role as a supplementary tool rather than a replacement for trained mental health providers. A conscientious and ethical integration of GAI techniques is necessary, ensuring a balanced approach that maximizes benefits while mitigating potential challenges in mental health care practices.
Affiliation(s)
- Xuechang Xian
- Department of Communication, Faculty of Social Sciences, University of Macau, Macau SAR, China
- Department of Publicity, Zhaoqing University, Zhaoqing City, China
- Angela Chang
- Department of Communication, Faculty of Social Sciences, University of Macau, Macau SAR, China
- Institute of Communication and Health, Lugano University, Lugano, Switzerland
- Yu-Tao Xiang
- Department of Public Health and Medicinal Administration, Faculty of Health Sciences, University of Macau, Macau SAR, China
3
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024;26:e56930. PMID: 39042446; PMCID: PMC11303905; DOI: 10.2196/56930.
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field.

OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations.

METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis.

RESULTS The review categorized chatbot roles into 2 themes: delivery of remote health services (including patient support, care management, education, skills building, and health behavior promotion) and provision of administrative assistance to health care providers. User groups spanned patients with chronic conditions and patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were also classified into 2 themes: improvement of health care quality and efficiency, and cost-effectiveness in health care delivery. The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts.

CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
4
Anisha SA, Sen A, Bain C. Evaluating the Potential and Pitfalls of AI-Powered Conversational Agents as Humanlike Virtual Health Carers in the Remote Management of Noncommunicable Diseases: Scoping Review. J Med Internet Res 2024;26:e56114. PMID: 39012688; PMCID: PMC11289576; DOI: 10.2196/56114.
Abstract
BACKGROUND The rising prevalence of noncommunicable diseases (NCDs) worldwide and the high recent mortality rates (74.4%) associated with them, especially in low- and middle-income countries, are causing a substantial global burden of disease, necessitating innovative and sustainable long-term care solutions. OBJECTIVE This scoping review aims to investigate the impact of artificial intelligence (AI)-based conversational agents (CAs), including chatbots, voicebots, and anthropomorphic digital avatars, as human-like health caregivers in the remote management of NCDs, as well as to identify critical areas for future research and provide insights into how these technologies might be used effectively in health care to personalize NCD management strategies.

METHODS A broad literature search was conducted in July 2023 in 6 electronic databases (Ovid MEDLINE, Embase, PsycINFO, PubMed, CINAHL, and Web of Science) using the search terms "conversational agents," "artificial intelligence," and "noncommunicable diseases," including their associated synonyms. We also manually searched gray literature using sources such as ProQuest Central, ResearchGate, ACM Digital Library, and Google Scholar. We included empirical studies published in English from January 2010 to July 2023 focusing solely on health care-oriented applications of CAs used for remote management of NCDs. The narrative synthesis approach was used to collate and summarize the relevant information extracted from the included studies.

RESULTS The literature search yielded a total of 43 studies that matched the inclusion criteria. Our review unveiled four significant findings: (1) higher user acceptance and compliance with anthropomorphic and avatar-based CAs for remote care; (2) an existing gap in the development of personalized, empathetic, and contextually aware CAs for effective emotional and social interaction with users, along with limited consideration of ethical concerns such as data privacy and patient safety; (3) inadequate evidence of the efficacy of CAs in NCD self-management despite a moderate to high level of optimism among health care professionals regarding CAs' potential in remote health care; and (4) CAs primarily being used for supporting nonpharmacological interventions such as behavioral or lifestyle modifications and patient education for the self-management of NCDs.

CONCLUSIONS This review makes a unique contribution to the field by not only providing a quantifiable impact analysis but also identifying the areas requiring imminent scholarly attention for the ethical, empathetic, and efficacious implementation of AI in NCD care. This serves as an academic cornerstone for future research in AI-assisted health care for NCD management.

TRIAL REGISTRATION Open Science Framework; https://doi.org/10.17605/OSF.IO/GU5PX.
Affiliation(s)
- Sadia Azmin Anisha
- Jeffrey Cheah School of Medicine & Health Sciences, Monash University Malaysia, Bandar Sunway, Malaysia
- Arkendu Sen
- Jeffrey Cheah School of Medicine & Health Sciences, Monash University Malaysia, Bandar Sunway, Malaysia
- Chris Bain
- Faculty of Information Technology, Data Future Institutes, Monash University, Clayton, Australia
5
Huq SM, Maskeliūnas R, Damaševičius R. Dialogue agents for artificial intelligence-based conversational systems for cognitively disabled: a systematic review. Disabil Rehabil Assist Technol 2024;19:1059-1078. PMID: 36413423; DOI: 10.1080/17483107.2022.2146768.
Abstract
PURPOSE We present a systematic literature review of dialogue agents for Artificial Intelligence (AI) and agent-based conversational systems dealing with cognitive disability of aged and impaired people, including dementia and Parkinson's disease. We analyze current applications, gaps, and challenges in the existing research body, and provide guidelines and recommendations for their future development and use.

MATERIALS AND METHODS We performed this study by applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria, conducting a systematic search of relevant databases (ACM Digital Library, Google Scholar, IEEE Xplore, PubMed, and Scopus).

RESULTS This study identified 468 articles on the use of conversational agents in healthcare. We finally selected 124 articles based on their objectives and content as directly related to our main topic.

CONCLUSION We identified the main challenges in the field and analyzed typical examples of the application of conversational agents in the healthcare domain, the desired characteristics of conversational agents, and chatbot support for aged people and people with cognitive disabilities. Our results contribute to a discussion on conversational health agents and emphasize current knowledge gaps and challenges for future research.

IMPLICATIONS FOR REHABILITATION
- A systematic literature review of dialogue agents for artificial intelligence and agent-based conversational systems dealing with cognitive disability of aged and impaired people.
- Main challenges and desired characteristics of conversational agents, and chatbot support for aged people and people with cognitive disability.
- Current knowledge gaps and challenges for remote healthcare and rehabilitation.
- Guidelines and recommendations for future development and use of conversational systems.
Affiliation(s)
- Syed Mahmudul Huq
- Faculty of Informatics, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, Kaunas, Lithuania
6
Wimbarti S, Kairupan BHR, Tallei TE. Critical review of self-diagnosis of mental health conditions using artificial intelligence. Int J Ment Health Nurs 2024;33:344-358. PMID: 38345132; DOI: 10.1111/inm.13303.
Abstract
The advent of artificial intelligence (AI) has revolutionised various aspects of our lives, including mental health nursing. AI-driven tools and applications have provided a convenient and accessible means for individuals to assess their mental well-being within the confines of their homes. Nonetheless, the widespread trend of self-diagnosing mental health conditions through AI poses considerable risks. This review article examines the perils associated with relying on AI for self-diagnosis in mental health, highlighting the constraints and possible adverse outcomes that can arise from such practices. It delves into the ethical, psychological, and social implications, underscoring the vital role of mental health professionals, including psychologists, psychiatrists, and nursing specialists, in providing professional assistance and guidance. This article aims to highlight the importance of seeking professional assistance and guidance in addressing mental health concerns, especially in the era of AI-driven self-diagnosis.
Affiliation(s)
- Supra Wimbarti
- Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta, Indonesia
- B H Ralph Kairupan
- Department of Psychiatry, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Trina Ekawati Tallei
- Department of Biology, Faculty of Mathematics and Natural Sciences, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Department of Biology, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
7
Ding H, Simmich J, Vaezipour A, Andrews N, Russell T. Evaluation framework for conversational agents with artificial intelligence in health interventions: a systematic scoping review. J Am Med Inform Assoc 2024;31:746-761. PMID: 38070173; PMCID: PMC10873847; DOI: 10.1093/jamia/ocad222.
Abstract
OBJECTIVES Conversational agents (CAs) with emerging artificial intelligence present new opportunities to assist in health interventions but are difficult to evaluate, deterring their application in the real world. We aimed to synthesize existing evidence and knowledge and outline an evaluation framework for CA interventions.

MATERIALS AND METHODS We conducted a systematic scoping review to investigate the designs and outcome measures used in studies that evaluated CAs for health interventions. We then nested the results into an overarching digital health framework proposed by the World Health Organization (WHO).

RESULTS The review included 81 studies evaluating CAs in experimental trials (n = 59), observational studies (n = 15), and other research designs (n = 7). Most studies (n = 72, 89%) were published in the past 5 years. The proposed CA-evaluation framework includes 4 evaluation stages: (1) feasibility/usability, (2) efficacy, (3) effectiveness, and (4) implementation, aligning with WHO's stepwise evaluation strategy. Across these stages, this article presents the essential evidence of different study designs (n = 8), sample sizes, and main evaluation categories (n = 7) with subcategories (n = 40). The main evaluation categories included (1) functionality, (2) safety and information quality, (3) user experience, (4) clinical and health outcomes, (5) costs and cost benefits, (6) usage, adherence, and uptake, and (7) user characteristics for implementation research. Furthermore, the framework highlighted the essential evaluation areas (potential primary outcomes) and gaps across the evaluation stages.

DISCUSSION AND CONCLUSION This review presents a new framework with practical design details to support the evaluation of CA interventions in healthcare research.

PROTOCOL REGISTRATION The Open Science Framework (https://osf.io/9hq2v) on March 22, 2021.
Affiliation(s)
- Hang Ding
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, QLD, Australia
- STARS Education and Research Alliance, Surgical Treatment and Rehabilitation Service (STARS), The University of Queensland and Metro North Health, Brisbane, QLD, Australia
- Joshua Simmich
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, QLD, Australia
- STARS Education and Research Alliance, Surgical Treatment and Rehabilitation Service (STARS), The University of Queensland and Metro North Health, Brisbane, QLD, Australia
- Atiyeh Vaezipour
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, QLD, Australia
- STARS Education and Research Alliance, Surgical Treatment and Rehabilitation Service (STARS), The University of Queensland and Metro North Health, Brisbane, QLD, Australia
- Nicole Andrews
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, QLD, Australia
- STARS Education and Research Alliance, Surgical Treatment and Rehabilitation Service (STARS), The University of Queensland and Metro North Health, Brisbane, QLD, Australia
- The Tess Cramond Pain and Research Centre, Metro North Hospital and Health Service, Brisbane, QLD, Australia
- The Occupational Therapy Department, The Royal Brisbane and Women’s Hospital, Metro North Hospital and Health Service, Brisbane, QLD, Australia
- Trevor Russell
- RECOVER Injury Research Centre, Faculty of Health and Behavioural Sciences, The University of Queensland, Brisbane, QLD, Australia
- STARS Education and Research Alliance, Surgical Treatment and Rehabilitation Service (STARS), The University of Queensland and Metro North Health, Brisbane, QLD, Australia
8
Cook D, Peters D, Moradbakhti L, Su T, Da Re M, Schuller BW, Quint J, Wong E, Calvo RA. A text-based conversational agent for asthma support: Mixed-methods feasibility study. Digit Health 2024;10:20552076241258276. PMID: 38894942; PMCID: PMC11185032; DOI: 10.1177/20552076241258276.
Abstract
Objective Millions of people in the UK have asthma, yet 70% do not access basic care, leading to the largest number of asthma-related deaths in Europe. Chatbots may extend the reach of asthma support and provide a bridge to traditional healthcare. This study evaluates 'Brisa', a chatbot designed to improve asthma patients' self-assessment and self-management.

Methods We recruited 150 adults with an asthma diagnosis to test our chatbot. Participants were recruited over three waves through social media and a research recruitment platform. Eligible participants had access to 'Brisa' via a WhatsApp or website version for 28 days and completed entry and exit questionnaires to evaluate user experience and asthma control. Weekly symptom tracking, user interaction metrics, satisfaction measures, and qualitative feedback were utilised to evaluate the chatbot's usability and potential effectiveness, focusing on changes in asthma control and self-reported behavioural improvements.

Results 74% of participants engaged with 'Brisa' at least once. High task completion rates were observed: asthma attack risk assessment (86%), voice recording submission (83%) and asthma control tracking (95.5%). Post use, an 8% improvement in asthma control was reported. User satisfaction surveys indicated positive feedback on helpfulness (80%), privacy (87%), trustworthiness (80%) and functionality (84%) but highlighted a need for improved conversational depth and personalisation.

Conclusions The study indicates that chatbots are effective for asthma support, demonstrated by the high usage of features like risk assessment and control tracking, as well as a statistically significant improvement in asthma control. However, lower satisfaction in conversational flexibility highlights rising expectations for chatbot fluency, influenced by advanced models like ChatGPT. Future health-focused chatbots must balance conversational capability with accuracy and safety to maintain engagement and effectiveness.
Affiliation(s)
- Darren Cook
- Dyson School of Design Engineering, Imperial College London, London, UK
- Dorian Peters
- Dyson School of Design Engineering, Imperial College London, London, UK
- Laura Moradbakhti
- Dyson School of Design Engineering, Imperial College London, London, UK
- Ting Su
- Dyson School of Design Engineering, Imperial College London, London, UK
- Marco Da Re
- Dyson School of Design Engineering, Imperial College London, London, UK
- Bjorn W. Schuller
- Dyson School of Design Engineering, Imperial College London, London, UK
- Ernie Wong
- Imperial College Healthcare NHS Trust, London, UK
- Rafael A. Calvo
- Dyson School of Design Engineering, Imperial College London, London, UK
9
Li H, Zhang R, Lee YC, Kraut RE, Mohr DC. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit Med 2023;6:236. PMID: 38114588; PMCID: PMC10730549; DOI: 10.1038/s41746-023-00979-5.
Abstract
Conversational artificial intelligence (AI), particularly AI-based conversational agents (CAs), is gaining traction in mental health care. Despite their growing usage, there is a scarcity of comprehensive evaluations of their impact on mental health and well-being. This systematic review and meta-analysis aims to fill this gap by synthesizing evidence on the effectiveness of AI-based CAs in improving mental health and the factors influencing their effectiveness and user experience. Twelve databases were searched for experimental studies of AI-based CAs' effects on mental illnesses and psychological well-being published before May 26, 2023. Out of 7834 records, 35 eligible studies were identified for systematic review, of which 15 randomized controlled trials were included for meta-analysis. The meta-analysis revealed that AI-based CAs significantly reduce symptoms of depression (Hedges' g 0.64 [95% CI 0.17-1.12]) and distress (Hedges' g 0.7 [95% CI 0.18-1.22]). These effects were more pronounced in CAs that are multimodal, generative AI-based, integrated with mobile/instant messaging apps, and targeting clinical/subclinical and elderly populations. However, CA-based interventions showed no significant improvement in overall psychological well-being (Hedges' g 0.32 [95% CI -0.13 to 0.78]). User experience with AI-based CAs was largely shaped by the quality of human-AI therapeutic relationships, content engagement, and effective communication. These findings underscore the potential of AI-based CAs in addressing mental health issues. Future research should investigate the underlying mechanisms of their effectiveness, assess long-term effects across various mental health outcomes, and evaluate the safe integration of large language models (LLMs) in mental health care.
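The pooled effect sizes in this abstract are Hedges' g values: Cohen's d (the between-group mean difference divided by the pooled standard deviation) multiplied by a small-sample bias correction. As a minimal sketch of that computation, with entirely hypothetical group statistics (the means, SDs, and sample sizes below are invented for illustration, not taken from the meta-analysis):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor J
    return d * j

# Hypothetical depression-score reduction: CA group vs. control group
g = hedges_g(m1=6.0, sd1=4.0, n1=40, m2=3.5, sd2=3.8, n2=40)
```

A meta-analysis then combines such per-study g values (typically with inverse-variance weights) to obtain pooled estimates like the 0.64 for depression reported above.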
Affiliation(s)
- Han Li: Department of Communications and New Media, National University of Singapore, Singapore, 117416, Singapore
- Renwen Zhang: Department of Communications and New Media, National University of Singapore, Singapore, 117416, Singapore
- Yi-Chieh Lee: Department of Computer Science, National University of Singapore, Singapore, 117416, Singapore
- Robert E Kraut: Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- David C Mohr: Center for Behavioral Intervention Technologies, Department of Preventive Medicine, Northwestern University, Chicago, IL, 60611, USA
10
Cho YM, Rai S, Ungar L, Sedoc J, Guntuku SC. An Integrative Survey on Mental Health Conversational Agents to Bridge Computer Science and Medical Perspectives. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2023; 2023:11346-11369. [PMID: 38618627 PMCID: PMC11010238 DOI: 10.18653/v1/2023.emnlp-main.698]
Abstract
Mental health conversational agents (also known as chatbots) are widely studied for their potential to offer accessible support to people experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between the two domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, reviewing 534 papers published in both computer science and medicine. Our systematic review identifies 136 key papers on building mental health-related conversational agents with diverse modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and evaluate response quality using automated metrics, with little attention to the application, while medical papers use rule-based conversational agents and outcome metrics to measure the health outcomes of participants. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide recommendations to help bridge the disciplinary divide and enable the cross-disciplinary development of mental health conversational agents.
11
Warrier U, Trivedi R. Metaverse and mental health: Just because you can, doesn't mean you should. Asian J Psychiatr 2023; 89:103792. [PMID: 37827063 DOI: 10.1016/j.ajp.2023.103792]
Affiliation(s)
- Uma Warrier: CMS Bschool, Faculty of Management Studies, JAIN University, Bangalore, India
12
Siglen E, Vetti HH, Augestad M, Steen VM, Lunde Å, Bjorvatn C. Evaluation of the Rosa Chatbot Providing Genetic Information to Patients at Risk of Hereditary Breast and Ovarian Cancer: Qualitative Interview Study. J Med Internet Res 2023; 25:e46571. [PMID: 37656502 PMCID: PMC10504626 DOI: 10.2196/46571]
Abstract
BACKGROUND Genetic testing has become an integrated part of health care for patients with breast or ovarian cancer, and the increasing demand for genetic testing is accompanied by an increasing need for easy access to reliable genetic information for patients. Therefore, we developed a chatbot app (Rosa) that is able to perform humanlike digital conversations about genetic BRCA testing. OBJECTIVE Before implementing this new information service in daily clinical practice, we wanted to explore 2 aspects of chatbot use: the perceived utility and trust in chatbot technology among healthy patients at risk of hereditary cancer and how interaction with a chatbot regarding sensitive information about hereditary cancer influences patients. METHODS Overall, 175 healthy individuals at risk of hereditary breast and ovarian cancer were invited to test the chatbot, Rosa, before and after genetic counseling. To secure a varied sample, participants were recruited from all cancer genetic clinics in Norway, and the selection was based on age, gender, and risk of having a BRCA pathogenic variant. Among the 34.9% (61/175) of participants who consented to an individual interview, a selected subgroup (16/61, 26%) shared their experience through in-depth interviews via video. The semistructured interviews covered the following topics: usability, perceived usefulness, trust in the information received via the chatbot, how Rosa influenced the user, and thoughts about future use of digital tools in health care. The transcripts were analyzed using the stepwise-deductive inductive approach. RESULTS The overall finding was that the chatbot was very welcomed by the participants. They appreciated the 24/7 availability wherever they were and the possibility to use it to prepare for genetic counseling and to repeat and ask questions about what had been said afterward. As Rosa was created by health care professionals, they also valued the information they received as being medically correct. Rosa was referred to as being better than Google because it provided specific and reliable answers to their questions. The findings were summed up in 3 concepts: "Anytime, anywhere"; "In addition, not instead"; and "Trustworthy and true." All participants (16/16) denied increased worry after reading about genetic testing and hereditary breast and ovarian cancer in Rosa. CONCLUSIONS Our results indicate that a genetic information chatbot has the potential to contribute to easy access to uniform information for patients at risk of hereditary breast and ovarian cancer, regardless of geographical location. The 24/7 availability of quality-assured information, tailored to the specific situation, had a reassuring effect on our participants. It was consistent across concepts that Rosa was a tool for preparation and repetition; however, none of the participants (0/16) felt that Rosa could replace genetic counseling if hereditary cancer was confirmed. This indicates that a chatbot can be a well-suited digital companion to genetic counseling.
Affiliation(s)
- Elen Siglen: Western Norway Familial Cancer Center, Department of Medical Genetics, Haukeland University Hospital, Bergen, Norway; Faculty of Health Studies, VID Specialized University, Bergen, Norway
- Hildegunn Høberg Vetti: Western Norway Familial Cancer Center, Department of Medical Genetics, Haukeland University Hospital, Bergen, Norway; Faculty of Health Studies, VID Specialized University, Bergen, Norway
- Mirjam Augestad: Faculty of Health Studies, VID Specialized University, Bergen, Norway
- Vidar M Steen: Western Norway Familial Cancer Center, Department of Medical Genetics, Haukeland University Hospital, Bergen, Norway; Department of Clinical Science, University of Bergen, Bergen, Norway
- Åshild Lunde: Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
- Cathrine Bjorvatn: Western Norway Familial Cancer Center, Department of Medical Genetics, Haukeland University Hospital, Bergen, Norway; Faculty of Health Studies, VID Specialized University, Bergen, Norway
13
Delir Haghighi P, Burstein F. Advances in E-Health and Mobile Health Monitoring. Sensors (Basel) 2022; 22:8621. [PMID: 36433218 PMCID: PMC9697701 DOI: 10.3390/s22228621]
Abstract
E-health, as a new industrial phenomenon and a field of research, integrates medical informatics, public health, and healthcare business, aiming to facilitate the provision of more accessible healthcare services, such as remote health monitoring, while reducing healthcare costs and enhancing patient experience [...].