1. Antel R, Whitelaw S, Gore G, Ingelmo P. Moving towards the use of artificial intelligence in pain management. Eur J Pain 2024. [PMID: 39523657] [DOI: 10.1002/ejp.4748]
Abstract
BACKGROUND AND OBJECTIVE: While the development of artificial intelligence (AI) technologies in medicine has been significant, their application to acute and chronic pain management has not been well characterized. This systematic review aims to provide an overview of the current state of AI in acute and chronic pain management.
DATABASES AND DATA TREATMENT: This review was registered with PROSPERO (ID# CRD42022307017), the international registry for systematic reviews. The search strategy was prepared by a librarian and run in four electronic databases (Embase, Medline, CENTRAL, and Web of Science). Collected articles were screened by two reviewers. Included studies described the use of AI for acute and chronic pain management.
RESULTS: From the 17,601 records identified in the initial search, 197 were included in this review. Identified applications of AI were described for both treatment planning and treatment delivery. Described uses include prediction of pain, forecasting of individualized responses to treatment, tailoring of treatment regimens, image guidance for procedural interventions, and self-management tools. Multiple domains of AI were used, including machine learning, computer vision, fuzzy logic, natural language processing, and expert systems.
CONCLUSION: There is a growing literature on applications of AI for pain management, and their clinical use holds potential for improving patient outcomes. However, multiple barriers to their clinical integration remain, including a lack of validation of such applications in diverse patient populations, missing infrastructure to support these tools, and limited provider understanding of AI.
SIGNIFICANCE: This review characterizes current applications of AI for pain management and discusses barriers to their clinical integration. Our findings support continuing efforts directed towards establishing comprehensive systems that integrate AI throughout the patient care continuum.
Affiliation(s)
- Ryan Antel
- Department of Anesthesia, McGill University, Montreal, Quebec, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada
- Sera Whitelaw
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada
- Genevieve Gore
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montreal, Quebec, Canada
- Pablo Ingelmo
- Department of Anesthesia, McGill University, Montreal, Quebec, Canada
- Edwards Family Interdisciplinary Center for Complex Pain, Montreal Children's Hospital, McGill University Health Center, Montreal, Quebec, Canada
- Alan Edwards Center for Research in Pain, Montreal, Quebec, Canada
- Research Institute, McGill University Health Center, Montreal, Quebec, Canada
2. Singh S, Gambill JL, Attalla M, Fatima R, Gill AR, Siddiqui HF. Evaluating the Clinical Validity and Reliability of Artificial Intelligence-Enabled Diagnostic Tools in Neuropsychiatric Disorders. Cureus 2024; 16:e71651. [PMID: 39553014] [PMCID: PMC11567685] [DOI: 10.7759/cureus.71651]
Abstract
Neuropsychiatric disorders (NPDs) pose a substantial burden on the healthcare system. The major challenge in diagnosing NPDs is the subjective assessment by the physician, which can lead to inaccurate and delayed diagnosis. Recent studies have suggested that the integration of artificial intelligence (AI) in neuropsychiatry could potentially revolutionize the field by precisely diagnosing complex neurological and mental health disorders in a timely fashion and providing individualized management strategies. In this narrative review, the authors examine the current status of AI tools in assessing neuropsychiatric disorders and evaluate their validity and reliability in the existing literature. The article explores the machine learning analysis of various data sources, including MRI scans, EEG, facial expressions, social media posts, texts, and laboratory samples, for the accurate diagnosis of neuropsychiatric conditions. Recent trials and tribulations across neuropsychiatric disorders that encourage future application of AI are discussed. Overall, machine learning has proved feasible and applicable in the field of neuropsychiatry, and it is time for this research to translate to clinical settings for favorable patient outcomes. Future trials should focus on presenting higher-quality evidence for superior adaptability and on establishing guidelines for healthcare providers to maintain standards.
Affiliation(s)
- Satneet Singh
- Psychiatry, Hampshire and Isle of Wight Healthcare NHS Foundation Trust, Southampton, GBR
- Mary Attalla
- Medicine, Saba University School of Medicine, The Bottom, NLD
- Rida Fatima
- Mental Health, Cwm Taf Morgannwg University Health Board, Pontyclun, GBR
- Amna R Gill
- Psychiatry, HSE (Health Service Executive) Ireland, Dublin, IRL
- Humza F Siddiqui
- Internal Medicine, Jinnah Postgraduate Medical Centre, Karachi, PAK
3. Lang C. Dreaming big with little therapy devices: automated therapy from India. Anthropol Med 2024; 31:232-249. [PMID: 39435587] [DOI: 10.1080/13648470.2024.2378727]
Abstract
This paper examines the aspirations, imaginaries, and utopias of the designers of an AI-based mental health app in India. Looking at automated therapy as both a technological fix and a sociotechnical object, I ask: what can we learn from engaging with psy technologists' imaginaries and practices of health care futures? What assumptions do they encode in the app? How does automated therapy reconfigure the geographies and temporalities of care? While automated therapy as instantiated by Wysa provides, I argue, a modest mental health intervention, the scalar aspirations of its designers are anything but small. The paper proceeds in three steps. First, it turns to designers' imaginaries of what it means to care for current mental health needs in digitally saturated lifeworlds and how they inscribe these imaginaries into the app. It identifies nonjudgmental listening, anonymity, acceptance, reframing, and agency as key ideas encoded in Wysa's sociotechnical algorithms, along with a congruence between entrepreneurial and encoded ethics of care. Second, it situates automated therapy within anthropological scholarship on 'little' technical devices in global health to argue that automated therapy devices such as Wysa articulate dreams of minimalist interventions with macro effects. Finally, it explores the new geographies and temporalities of care that automated therapy spurs, tracing the ways the app bridges various spatial and temporal gaps and obstacles of human therapy and upends common global health pathways. This paper contributes to recent scholarship on aspirations, dreams, and utopias and on digitization and datafication in global health.
Affiliation(s)
- Claudia Lang
- Institute of Anthropology, University of Leipzig, Leipzig, Germany
4. Mazzolenis ME, Bulat E, Schatman ME, Gumb C, Gilligan CJ, Yong RJ. The Ethical Stewardship of Artificial Intelligence in Chronic Pain and Headache: A Narrative Review. Curr Pain Headache Rep 2024; 28:785-792. [PMID: 38809404] [DOI: 10.1007/s11916-024-01272-0]
Abstract
PURPOSE OF REVIEW: As artificial intelligence (AI) and machine learning (ML) become more pervasive in medicine, understanding the ethical considerations of their use in chronic pain and headache management is crucial for optimizing their safety.
RECENT FINDINGS: We reviewed thirty-eight editorial and original research articles published between 2018 and 2023, focusing on the application of AI and ML to chronic pain or headache. The core medical principles of beneficence, non-maleficence, autonomy, and justice constituted the evaluation framework. The AI applications addressed topics such as pain intensity prediction, diagnostic aids, risk assessment for medication misuse, empowering patients to self-manage their conditions, and optimizing access to care. Virtually all AI applications aligned both positively and negatively with specific medical ethics principles.
This review highlights the potential of AI to enhance patient outcomes and physicians' experiences in managing chronic pain and headache. We emphasize the importance of carefully considering the advantages, disadvantages, and unintended consequences of utilizing AI tools in chronic pain and headache, and propose the four core principles of medical ethics as an evaluation framework.
Affiliation(s)
- Maria Emilia Mazzolenis
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Evgeny Bulat
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA
- Michael E Schatman
- Department of Anesthesiology, Perioperative Care, and Pain Medicine, Department of Population Health - Division of Medical Ethics, New York University Grossman School of Medicine, New York, NY, USA
- Chris Gumb
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Christopher J Gilligan
- Department of Anesthesiology, Robert Wood Johnson University Hospital, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Robert J Yong
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA
5. Neumann I, Andreatta M, Pauli P, Käthner I. Social support of virtual characters reduces pain perception. Eur J Pain 2024; 28:806-820. [PMID: 38088523] [DOI: 10.1002/ejp.2220]
Abstract
BACKGROUND: Psychosocial factors, such as social support, can reduce pain. Virtual reality (VR) is a powerful tool for decreasing pain, but social factors in VR-based pain analgesia have rarely been studied. Specifically, it is unclear whether social support by virtual characters can reduce pain and whether the perceived control behind virtual characters (agency) and varying degrees of social cues affect pain perception.
METHODS: Healthy participants (N = 97) received heat pain stimulation while undergoing four within-subject conditions in immersive VR: (1) a virtual character with a low number of social cues (virtual figure) provided verbal support, (2) a virtual character with a high number of social cues (virtual human) provided verbal support, (3) no social support (hearing neutral words), and (4) no social support. Perceived agency of the virtual characters served as a between-subjects factor. Participants in the avatar group were led to believe that another participant controlled the virtual characters; participants in the agent group were told they interacted with a computer. In both conditions, however, the virtual characters were computer-controlled. Pain ratings, psychophysiological measurements, and presence ratings were recorded.
RESULTS: Virtual social support decreased pain intensity and pain unpleasantness ratings but had no impact on electrodermal activity or heart rate. A virtual character with a high number of social cues led to lower pain unpleasantness and higher feelings of presence. Agency had no significant impact.
CONCLUSIONS: Virtual characters providing social support can reduce pain independent of perceived agency, and a more human visual appearance can have beneficial effects on social pain modulation by virtual characters.
SIGNIFICANCE: Social influences are important factors in pain modulation. The current study demonstrated analgesic effects of verbal support provided by virtual characters and investigated modulating factors. A more human appearance of a virtual character resulted in a greater reduction of pain unpleasantness, while agency of the virtual characters had no impact. Given the increasing use of digital health interventions, these findings suggest a positive role for virtual characters in digital pain treatments.
Affiliation(s)
- I Neumann
- Department of Biological Psychology, Clinical Psychology, and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany
- M Andreatta
- Department of Biological Psychology, Clinical Psychology, and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany
- Department of Psychiatry and Psychotherapy, University Hospital Tübingen, Tübingen, Germany
- P Pauli
- Department of Biological Psychology, Clinical Psychology, and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany
- Center of Mental Health, Medical Faculty, University of Würzburg, Würzburg, Germany
- I Käthner
- Department of Biological Psychology, Clinical Psychology, and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany
- Department of Physiological Psychology, University of Bamberg, Bamberg, Germany
6. Aghakhani S, Carre N, Mostovoy K, Shafer R, Baeza-Hernandez K, Entenberg G, Testerman A, Bunge EL. Qualitative analysis of mental health conversational agents messages about autism spectrum disorder: a call for action. Front Digit Health 2023; 5:1251016. [PMID: 38116099] [PMCID: PMC10728644] [DOI: 10.3389/fdgth.2023.1251016]
Abstract
Background: Conversational agents (CAs) have shown promise in increasing accessibility to mental health resources. This study aimed to identify common themes of messages sent to a mental health CA (Wysa) related to ASD by general users and by users who identify as having ASD.
Methods: This study utilized retrospective data. Two thematic analyses were conducted: one focusing on user messages including the keywords (e.g., ASD, autism, Asperger), and a second on messages from users who self-identified as having ASD.
Results: For the sample of general users, the most frequent themes were "others having ASD," "ASD diagnosis," and "seeking help." For the users who self-identified as having ASD (n = 277), the most frequent themes were "ASD diagnosis or symptoms," "negative reaction from others," and "positive comments." There were 3,725 emotion words mentioned by users who self-identified as having ASD; the majority had negative valence (80.3%), and few were positive (14.8%) or ambivalent (4.9%).
Conclusion: Users shared their experiences and emotions surrounding ASD with a mental health CA. Users asked about the ASD diagnosis, sought help, and reported negative reactions from others. CAs have the potential to become a source of support for those interested in ASD and/or who identify as having ASD.
Affiliation(s)
- S. Aghakhani
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- N. Carre
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- K. Mostovoy
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- R. Shafer
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- K. Baeza-Hernandez
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- A. Testerman
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
- E. L. Bunge
- Department of Psychology, Palo Alto University, Palo Alto, CA, United States
7. Cho YM, Rai S, Ungar L, Sedoc J, Guntuku SC. An Integrative Survey on Mental Health Conversational Agents to Bridge Computer Science and Medical Perspectives. Proceedings of the Conference on Empirical Methods in Natural Language Processing 2023:11346-11369. [PMID: 38618627] [PMCID: PMC11010238] [DOI: 10.18653/v1/2023.emnlp-main.698]
Abstract
Mental health conversational agents (a.k.a. chatbots) are widely studied for their potential to offer accessible support to those experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between the two domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, covering 534 papers published in both computer science and medicine. Our systematic review reveals 136 key papers on building mental health-related conversational agents with diverse characteristics of modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and evaluate response quality using automated metrics, with little attention to the application, while medical papers use rule-based conversational agents and outcome metrics to measure the health outcomes of participants. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide recommendations to help bridge the disciplinary divide and enable the cross-disciplinary development of mental health conversational agents.
8. Andrews NE, Ireland D, Vijayakumar P, Burvill L, Hay E, Westerman D, Rose T, Schlumpf M, Strong J, Claus A. Acceptability of a Pain History Assessment and Education Chatbot (Dolores) Across Age Groups in Populations With Chronic Pain: Development and Pilot Testing. JMIR Form Res 2023; 7:e47267. [PMID: 37801342] [PMCID: PMC10589833] [DOI: 10.2196/47267]
Abstract
BACKGROUND: The delivery of education on pain neuroscience and the evidence for different treatment approaches has become a key component of contemporary persistent pain management. Chatbots, or more formally conversational agents, are increasingly being used in health care settings due to their versatility in providing interactive and individualized approaches to both capture and deliver information. Research focused on the acceptability of diverse chatbot formats can assist in developing a better understanding of the educational needs of target populations.
OBJECTIVE: This study aims to detail the development and initial pilot testing of a multimodality pain education chatbot (Dolores) that can be used across different age groups, and to investigate whether acceptability and feedback were comparable across age groups following pilot testing.
METHODS: Following an initial design phase involving software engineers (n=2) and expert clinicians (n=6), a total of 60 individuals with chronic pain who attended an outpatient clinic at 1 of 2 pain centers in Australia were recruited for pilot testing: 20 (33%) adolescents (aged 10-18 years), 20 (33%) young adults (aged 19-35 years), and 20 (33%) adults (aged >35 years) with persistent pain. Participants spent 20 to 30 minutes completing interactive chatbot activities that enabled the Dolores app to gather a pain history and provide education about pain and pain treatments. After the chatbot activities, participants completed a custom-made feedback questionnaire measuring acceptability constructs pertaining to health education chatbots. To determine the effect of age group on the acceptability ratings and feedback provided, a series of binomial logistic regression models and cumulative odds ordinal logistic regression models with proportional odds were generated.
RESULTS: Overall, acceptability was high for the following constructs: engagement, perceived value, usability, accuracy, responsiveness, adoption intention, esthetics, and overall quality. The effect of age group on all acceptability ratings was small and not statistically significant. An analysis of open-ended question responses revealed that major frustrations with the app were related to Dolores' speech, which was explored further through a comparative analysis. With respect to providing negative feedback about Dolores' speech, a logistic regression model showed that the effect of age group was statistically significant (χ²(2)=11.7; P=.003) and explained 27.1% of the variance (Nagelkerke R²). Adults and young adults were less likely to comment on Dolores' speech than adolescent participants (odds ratio 0.20, 95% CI 0.05-0.84 and odds ratio 0.05, 95% CI 0.01-0.43, respectively). Comments related to both speech rate (too slow) and quality (unpleasant and robotic).
CONCLUSIONS: This study provides support for the acceptability of pain history and education chatbots across different age groups. Chatbot acceptability for adolescent cohorts may be improved by enabling self-selection of speech characteristics such as rate and personable tone.
Affiliation(s)
- Nicole Emma Andrews
- RECOVER Injury Research Centre, The University of Queensland, Herston, Australia
- Tess Cramond Pain and Research Centre, The Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service, Herston, Australia
- The Occupational Therapy Department, The Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service, Herston, Australia
- Surgical Treatment and Rehabilitation Service (STARS) Education and Research Alliance, The University of Queensland and Metro North Health, Herston, Australia
- David Ireland
- Australian eHealth Research Centre, The Commonwealth Scientific and Industrial Research Organisation, Herston, Australia
- Pranavie Vijayakumar
- Australian eHealth Research Centre, The Commonwealth Scientific and Industrial Research Organisation, Herston, Australia
- The Walter and Eliza Hall Institute of Medical Research, Melbourne, Victoria, Australia
- Lyza Burvill
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Australia
- Elizabeth Hay
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Australia
- Daria Westerman
- Queensland Interdisciplinary Paediatric Persistent Pain Service, Queensland Children's Hospital, South Brisbane, Australia
- Tanya Rose
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Australia
- Mikaela Schlumpf
- Queensland Interdisciplinary Paediatric Persistent Pain Service, Queensland Children's Hospital, South Brisbane, Australia
- Jenny Strong
- Tess Cramond Pain and Research Centre, The Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service, Herston, Australia
- The Occupational Therapy Department, The Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service, Herston, Australia
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Australia
- Andrew Claus
- Tess Cramond Pain and Research Centre, The Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service, Herston, Australia
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, Australia
9. Iglesias M, Sinha C, Vempati R, Grace SE, Roy M, Chapman WC, Rinaldi ML. Evaluating a Digital Mental Health Intervention (Wysa) for Workers' Compensation Claimants: Pilot Feasibility Study. J Occup Environ Med 2023; 65:e93-e99. [PMID: 36459701] [PMCID: PMC9897276] [DOI: 10.1097/jom.0000000000002762]
Abstract
OBJECTIVE: This study examines the feasibility and acceptability of an AI-led digital mental health intervention, Wysa for Return to Work, in a Workers' Compensation (WC) program.
METHODS: Self-reported demographic data and responses to psychosocial screening questions were analyzed alongside participants' app usage, through which four key outcomes were measured: recruitment rate, onboarding rate, retention, and engagement.
RESULTS: The data demonstrated a high need for psychosocial interventions among injured workers, especially women, young adults, and those with high-severity injuries. Those with more psychosocial risk factors had higher rates of onboarding, retention, and engagement, and those with severe injuries had higher retention.
CONCLUSIONS: Wysa for Return to Work, an AI-led digital mental health intervention that delivers a recovery program through a digital conversational agent, is feasible and acceptable for a return-to-work population.
10. Vázquez A, López Zorrilla A, Olaso JM, Torres MI. Dialogue Management and Language Generation for a Robust Conversational Virtual Coach: Validation and User Study. Sensors (Basel) 2023; 23:1423. [PMID: 36772464] [PMCID: PMC9919213] [DOI: 10.3390/s23031423]
Abstract
Designing human-machine interactive systems requires cooperation between different disciplines. In this work, we present a Dialogue Manager and a Language Generator that are the core modules of a voice-based Spoken Dialogue System (SDS) capable of carrying out challenging, long, and complex coaching conversations. We also develop an efficient integration procedure for the whole system, which acts as an intelligent and robust Virtual Coach. The coaching task differs significantly from classical SDS applications, resulting in a much higher degree of complexity and difficulty. The Virtual Coach has been successfully tested and validated in a user study with independent elderly participants in three countries with different languages and cultures: Spain, France, and Norway.
Affiliation(s)
- María Inés Torres
- Speech Interactive Research Group, Universidad del País Vasco UPV/EHU, 48940 Leioa, Spain
11. Mavragani A, Meheli S, Kadaba M. Understanding Digital Mental Health Needs and Usage With an Artificial Intelligence-Led Mental Health App (Wysa) During the COVID-19 Pandemic: Retrospective Analysis. JMIR Form Res 2023; 7:e41913. [PMID: 36540052] [PMCID: PMC9885755] [DOI: 10.2196/41913]
Abstract
BACKGROUND: There has been a surge in mental health concerns during the COVID-19 pandemic, which has prompted increased use of digital platforms. However, little is known about the mental health needs and behaviors of the global population during the pandemic. This study aims to fill this knowledge gap through the analysis of real-world data collected from users of a digital mental health app (Wysa) regarding their engagement patterns and behaviors, as shown by their usage of the service.
OBJECTIVE: This study aims to (1) examine the relationship between mental health distress, digital health uptake, and COVID-19 case numbers; (2) evaluate engagement patterns with the app during the study period; and (3) examine the efficacy of the app in improving mental health outcomes for its users during the pandemic.
METHODS: This study used a retrospective observational design. During the COVID-19 pandemic, the app's installations and emotional utterances were measured from March 2020 to October 2021 for the United Kingdom, the United States, and India and were mapped against COVID-19 case numbers and their peaks. The engagement of users from this period (N=4541) with the Wysa app was compared to that of equivalent samples of users from a pre-COVID-19 period (1000 iterations). Efficacy was assessed for users who completed pre-post assessments for symptoms of depression (n=2061) and anxiety (n=1995) on the Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder-7 (GAD-7) measures, respectively.
RESULTS: Our findings demonstrate a significant positive correlation between the increase in installs of the Wysa mental health app and the peaks of COVID-19 case numbers in the United Kingdom (P=.02) and India (P<.001). Users during the COVID-19 period (N=4541) had significantly higher engagement than the samples from the pre-COVID-19 period, with a medium to large effect size for 80% of these 1000 iterative samples, as observed on the Mann-Whitney test. The PHQ-9 and GAD-7 pre-post assessments indicated statistically significant improvement with a medium effect size (PHQ-9: P=.57; GAD-7: P=.56).
CONCLUSIONS: This study demonstrates that emotional distress increased substantially during the pandemic, prompting increased uptake of an artificial intelligence-led mental health app (Wysa), and offers evidence that the Wysa app could support its users, with usage resulting in a significant reduction in symptoms of anxiety and depression. The findings also highlight the importance of contextualizing interventions and suggest that digital health interventions can provide large populations with scalable and evidence-based support for mental health care.
Affiliation(s)
- Saha Meheli
- Department of Clinical Psychology, National Institute of Mental Health and Neurosciences, Bengaluru, India