1
Lawrence HR, Schneider RA, Rubin SB, Matarić MJ, McDuff DJ, Jones Bell M. The Opportunities and Risks of Large Language Models in Mental Health. JMIR Ment Health 2024;11:e59479. PMID: 39105570. DOI: 10.2196/59479.
Abstract
Unlabelled Global rates of mental health concerns are rising, and there is increasing realization that existing models of mental health care will not adequately expand to meet the demand. With the emergence of large language models (LLMs) has come great optimism regarding their promise to create novel, large-scale solutions to support mental health. Despite their nascence, LLMs have already been applied to mental health-related tasks. In this paper, we summarize the extant literature on efforts to use LLMs to provide mental health education, assessment, and intervention and highlight key opportunities for positive impact in each area. We then highlight risks associated with LLMs' application to mental health and encourage the adoption of strategies to mitigate these risks. The urgent need for mental health support must be balanced with responsible development, testing, and deployment of mental health LLMs. It is especially critical to ensure that mental health LLMs are fine-tuned for mental health, enhance mental health equity, and adhere to ethical standards and that people, including those with lived experience with mental health concerns, are involved in all stages from development through deployment. Prioritizing these efforts will minimize potential harms to mental health and maximize the likelihood that LLMs will positively impact mental health globally.
Affiliation(s)
- Maja J Matarić
- Google LLC, Mountain View, CA, 90291, United States
- Daniel J McDuff
- Google LLC, Mountain View, CA, 90291, United States
- Megan Jones Bell
- Google LLC, Mountain View, CA, 90291, United States
2
Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024;16:315-344. PMID: 39022380. PMCID: PMC11250714. DOI: 10.1007/s41649-024-00292-7.
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec Canada
3
Hornstein S, Scharfenberger J, Lueken U, Wundrack R, Hilbert K. Predicting recurrent chat contact in a psychological intervention for the youth using natural language processing. NPJ Digit Med 2024;7:132. PMID: 38762694. PMCID: PMC11102489. DOI: 10.1038/s41746-024-01121-9.
Abstract
Chat-based counseling hotlines have emerged as a promising low-threshold intervention for youth mental health. However, despite the resulting availability of large text corpora, little work has investigated Natural Language Processing (NLP) applications within this setting. Therefore, this preregistered approach (OSF: XA4PN) utilizes a sample of approximately 19,000 children and young adults who received a chat consultation from a 24/7 crisis service in Germany. Around 800,000 messages were used to predict whether chatters would contact the service again, as this would allow the provision of, or redirection to, additional treatment. We trained an XGBoost classifier on the words of the anonymized conversations, using repeated cross-validation and Bayesian optimization for the hyperparameter search. The best model achieved an AUROC score of 0.68 (p < 0.01) on the previously unseen 3942 newest consultations. A Shapley-based explainability approach revealed that words indicating younger age or female gender and terms related to self-harm and suicidal thoughts were associated with a higher chance of recontacting. We conclude that NLP-based predictions of recurrent contact are a promising path toward personalized care at chat hotlines.
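As a rough illustration of the pipeline this abstract describes, the sketch below trains an XGBoost classifier on TF-IDF word features and evaluates it with repeated stratified cross-validation scored by AUROC. The data is placeholder text, and scikit-learn's grid search stands in for the authors' Bayesian hyperparameter optimization; nothing here is taken from the study's code.

```python
# Illustrative sketch only: predict recontact from chat text with TF-IDF
# features and XGBoost, scored by AUROC under repeated stratified CV.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

# Placeholder corpus: one string per anonymized consultation, label 1 = recontacted.
texts = ["i feel so alone and keep hurting myself"] * 20
texts += ["thanks, the exam stress is much better now"] * 20
labels = [1] * 20 + [0] * 20

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000)),  # word-level features
    ("xgb", XGBClassifier(eval_metric="logloss")),  # gradient-boosted trees
])
search = GridSearchCV(
    model,
    param_grid={"xgb__max_depth": [3, 6], "xgb__n_estimators": [50, 100]},
    scoring="roc_auc",  # AUROC, the metric reported in the paper
    cv=RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0),
)
search.fit(texts, labels)
print(f"best cross-validated AUROC: {search.best_score_:.2f}")
```

On real data, the newest consultations would be held out as a test set, as the authors did, and per-word attributions could then be inspected with SHAP values.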
Affiliation(s)
- Silvan Hornstein
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany.
- Ulrike Lueken
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- German Center for Mental Health (DZPG), partner site Berlin/Potsdam, Potsdam, Germany
- Richard Wundrack
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Kevin Hilbert
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
4
Wimbarti S, Kairupan BHR, Tallei TE. Critical review of self-diagnosis of mental health conditions using artificial intelligence. Int J Ment Health Nurs 2024;33:344-358. PMID: 38345132. DOI: 10.1111/inm.13303.
Abstract
The advent of artificial intelligence (AI) has revolutionised various aspects of our lives, including mental health nursing. AI-driven tools and applications have provided a convenient and accessible means for individuals to assess their mental well-being within the confines of their homes. Nonetheless, the widespread trend of self-diagnosing mental health conditions through AI poses considerable risks. This review article examines the perils associated with relying on AI for self-diagnosis in mental health, highlighting the constraints and possible adverse outcomes that can arise from such practices. It delves into the ethical, psychological, and social implications, underscoring the vital role of mental health professionals, including psychologists, psychiatrists, and nursing specialists, in providing professional assistance and guidance. This article aims to highlight the importance of seeking professional assistance and guidance in addressing mental health concerns, especially in the era of AI-driven self-diagnosis.
Affiliation(s)
- Supra Wimbarti
- Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta, Indonesia
- B H Ralph Kairupan
- Department of Psychiatry, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Trina Ekawati Tallei
- Department of Biology, Faculty of Mathematics and Natural Sciences, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Department of Biology, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
5
Ridout B, Forsyth R, Amon KL, Navarro P, Campbell AJ. The Urgent Need for an Evidence-Based Digital Mental Health Practice Model of Care for Youth. JMIR Ment Health 2024;11:e48441. PMID: 38534006. PMCID: PMC11004617. DOI: 10.2196/48441.
Abstract
Australian providers of mental health services and support for young people include private and public allied health providers, government initiatives (eg, headspace), nongovernment organizations (eg, Kids Helpline), general practitioners (GPs), and the hospital system. Over 20 years of research has established that many young people prefer to seek mental health support online; however, clear client pathways within and between online and offline mental health services are currently lacking. The authors propose a Digital Mental Health Practice model of care for youth to assist with digital mental health service mapping. The proposed model offers accessible pathways for a client to engage with digital mental health services, provides clear navigation to access support for individual needs, and facilitates a seamless connection with offline mental health services using a transferable electronic health records system. This future-looking model also includes emerging technologies, such as artificial intelligence and the metaverse, which must be accounted for as potential tools to be leveraged for digital therapies and support systems. The urgent need for a user-centered Digital Mental Health Practice model of care for youth in Australia is discussed, highlighting the shortcomings of traditional and existing online triage models evident during the COVID-19 pandemic, and the complex challenges that must be overcome, such as the integration of diverse mental health care providers and establishment of a robust electronic health records system. Potential benefits of such a model include reduced pressure on emergency rooms, improved identification of immediate needs, enhanced referral practices, and the establishment of a cost-efficient national digital mental health care model with global applicability. The authors conclude by stressing the consequences of inaction, warning that delays may lead to more complex challenges as new technologies emerge and exacerbate the long-term negative consequences of poor mental health management on the economic and biopsychosocial well-being of young Australians.
Affiliation(s)
- Brad Ridout
- Cyberpsychology Research Group, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Rowena Forsyth
- Cyberpsychology Research Group, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Krestina L Amon
- Cyberpsychology Research Group, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Andrew J Campbell
- Cyberpsychology Research Group, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
6
Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health 2024;6:1280235. PMID: 38562663. PMCID: PMC10982476. DOI: 10.3389/fdgth.2024.1280235.
Abstract
The paper reviews the spectrum of Artificial Intelligence (AI) applications in mental health and its positive role therein. AI holds considerable promise for mental health care, and this paper examines multiple facets of that promise. The paper first defines AI and its scope in the area of mental health. It then outlines key AI approaches, including machine learning in its supervised and unsupervised forms. The role of AI in various psychiatric disorders, such as neurodegenerative disorders, intellectual disability, and seizures, is discussed, along with its role in awareness, diagnosis, and intervention in mental health disorders. The role of AI in positive emotional regulation and its impact in schizophrenia, autism spectrum disorders, and mood disorders is also highlighted. The article also discusses the limitations of AI-based approaches and the need for approaches that are culturally aware, built on structured yet flexible algorithms, and alert to the biases that can arise in AI. The ethical issues that may arise with the use of AI in mental health are also examined.
7
Zafar F, Fakhare Alam L, Vivas RR, Wang J, Whei SJ, Mehmood S, Sadeghzadegan A, Lakkimsetti M, Nazir Z. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus 2024;16:e56472. PMID: 38638735. PMCID: PMC11025697. DOI: 10.7759/cureus.56472.
Abstract
This narrative literature review undertakes a comprehensive examination of the burgeoning field, tracing the development of artificial intelligence (AI)-powered tools for depression and anxiety detection from intricate algorithms to practical applications. Delivering essential mental health care services is now a significant public health priority. In recent years, AI has become a game-changer in the early identification of, and intervention in, these pervasive mental health disorders. AI tools can potentially empower behavioral health care services by helping psychiatrists collect objective data on patients' progress and tasks. This study summarizes the current understanding of AI, the different types of AI, its current use in multiple mental health disorders, its advantages and disadvantages, and its future potential. As technology develops and the digitalization of the modern era increases, the application of artificial intelligence in psychiatry will rise; a comprehensive understanding will therefore be needed. We searched PubMed, Google Scholar, and Science Direct using relevant keywords. In a recent review of studies using electronic health records (EHR) with AI and machine learning techniques for diagnosing all clinical conditions, roughly 99 publications were found. Of these, 35 studies addressed mental health disorders in all age groups, and among them, six studies utilized EHR data sources. By critically analyzing prominent scholarly works, we aim to illuminate the current state of this technology, exploring its successes, limitations, and future directions. In doing so, we hope to contribute to a nuanced understanding of AI's potential to revolutionize mental health diagnostics and pave the way for further research and development in this critically important domain.
Affiliation(s)
- Fabeha Zafar
- Internal Medicine, Dow University of Health Sciences (DUHS), Karachi, PAK
- Rafael R Vivas
- Nutrition, Food and Exercise Sciences, Florida State University College of Human Sciences, Tallahassee, USA
- Jada Wang
- Medicine, St. George's University, Brooklyn, USA
- See Jia Whei
- Internal Medicine, Sriwijaya University, Palembang, IDN
- Zahra Nazir
- Internal Medicine, Combined Military Hospital, Quetta, PAK
8
Chiauzzi E, Williams A, Mariano TY, Pajarito S, Robinson A, Kirvin-Quamme A, Forman-Hoffman V. Demographic and clinical characteristics associated with anxiety and depressive symptom outcomes in users of a digital mental health intervention incorporating a relational agent. BMC Psychiatry 2024;24:79. PMID: 38291369. PMCID: PMC10826101. DOI: 10.1186/s12888-024-05532-6.
Abstract
BACKGROUND: Digital mental health interventions (DMHIs) may reduce treatment access issues for those experiencing depressive and/or anxiety symptoms. DMHIs that incorporate relational agents may offer unique ways to engage and respond to users and to potentially help reduce provider burden. This study tested Woebot for Mood & Anxiety (W-MA-02), a DMHI that employs Woebot, a relational agent that incorporates elements of several evidence-based psychotherapies, among those with baseline clinical levels of depressive or anxiety symptoms. Changes in self-reported depressive and anxiety symptoms over 8 weeks were measured, along with the association between each of these outcomes and demographic and clinical characteristics.
METHODS: This exploratory, single-arm, 8-week study of 256 adults yielded non-mutually exclusive subsamples with either clinical levels of depressive or anxiety symptoms at baseline. Week 8 Patient Health Questionnaire-8 (PHQ-8) changes were measured in the depressive subsample (PHQ-8 ≥ 10). Week 8 Generalized Anxiety Disorder-7 (GAD-7) changes were measured in the anxiety subsample (GAD-7 ≥ 10). Demographic and clinical characteristics were examined in association with symptom changes via bivariate and multiple regression models adjusted for W-MA-02 utilization. Characteristics included age, sex at birth, race/ethnicity, marital status, education, sexual orientation, employment status, health insurance, baseline levels of depressive and anxiety symptoms, and concurrent psychotherapeutic or psychotropic medication treatments during the study.
RESULTS: Both the depressive and anxiety subsamples were predominantly female, educated, non-Hispanic white, and averaged 38 and 37 years of age, respectively. The depressive subsample had significant reductions in depressive symptoms at Week 8 (mean change = -7.28, SD = 5.91, Cohen's d = -1.23, p < 0.01); the anxiety subsample had significant reductions in anxiety symptoms at Week 8 (mean change = -7.45, SD = 5.99, Cohen's d = -1.24, p < 0.01). No significant associations were found between sex at birth, age, employment status, educational background, and Week 8 symptom changes. Significant associations between depressive and anxiety symptom outcomes and sexual orientation, marital status, concurrent mental health treatment, and baseline symptom severity were found.
CONCLUSIONS: The present study suggests early promise for W-MA-02 as an intervention for depression and/or anxiety symptoms. Although exploratory in nature, this study revealed potential user characteristics associated with outcomes that can be investigated in future studies.
TRIAL REGISTRATION: This study was retrospectively registered on ClinicalTrials.gov (#NCT05672745) on January 5th, 2023.
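The reported effect sizes are consistent with Cohen's d computed as the mean change score divided by the standard deviation of the change scores. A quick check of that arithmetic (the exact formula the authors used is an assumption):

```python
# Assumed formula: within-group effect size = mean change / SD of change scores.
def cohens_d(mean_change: float, sd_change: float) -> float:
    return mean_change / sd_change

print(round(cohens_d(-7.28, 5.91), 2))  # -1.23, depressive subsample (PHQ-8)
print(round(cohens_d(-7.45, 5.99), 2))  # -1.24, anxiety subsample (GAD-7)
```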
Affiliation(s)
- Emil Chiauzzi
- Woebot Health, 535 Mission Street, 14th Floor, San Francisco, CA, 94105, USA
- Andre Williams
- Woebot Health, 535 Mission Street, 14th Floor, San Francisco, CA, 94105, USA
- Timothy Y Mariano
- Woebot Health, 535 Mission Street, 14th Floor, San Francisco, CA, 94105, USA
- RR&D Center for Neurorestoration and Neurotechnology, VA Providence Healthcare System, Providence, RI, USA
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Sarah Pajarito
- Woebot Health, 535 Mission Street, 14th Floor, San Francisco, CA, 94105, USA
- Athena Robinson
- Woebot Health, 535 Mission Street, 14th Floor, San Francisco, CA, 94105, USA
9
Johnson EA, Dudding KM, Carrington JM. When to err is inhuman: An examination of the influence of artificial intelligence-driven nursing care on patient safety. Nurs Inq 2024;31:e12583. PMID: 37459179. DOI: 10.1111/nin.12583.
Abstract
Artificial intelligence, as a nonhuman entity, is increasingly used to inform, direct, or supplant nursing care and clinical decision-making. The boundaries between human- and nonhuman-driven nursing care are blurred with the advent of sensors, wearables, camera devices, and humanoid robots at such an accelerated pace that the critical evaluation of its influence on patient safety has not been fully assessed. Since the pivotal release of To Err is Human, patient safety is being challenged by the dynamic healthcare environment like never before, with nursing at a critical juncture to steer the course of artificial intelligence integration in clinical decision-making. This paper presents an overview of artificial intelligence and its application in healthcare and highlights the implications which affect nursing as a profession, including perspectives on nursing education and training recommendations. The legal and policy challenges which emerge when artificial intelligence influences the risk of clinical errors and safety issues are discussed.
Affiliation(s)
- Elizabeth A Johnson
- Mark & Robyn Jones College of Nursing, Montana State University, Bozeman, Montana, USA
- Katherine M Dudding
- Department of Family, Community, and Health Systems, UAB School of Nursing, The University of Alabama at Birmingham, Birmingham, Alabama, USA
- Jane M Carrington
- Department of Family, Community and Health System Science, University of Florida College of Nursing, Gainesville, Florida, USA
10
Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health 2023;5:1278186. PMID: 38026836. PMCID: PMC10663264. DOI: 10.3389/fdgth.2023.1278186.
Abstract
Artificial intelligence (AI)-powered chatbots have the potential to substantially increase access to affordable and effective mental health services by supplementing the work of clinicians. Their 24/7 availability and accessibility through a mobile phone allow individuals to obtain help whenever and wherever needed, overcoming financial and logistical barriers. Although psychological AI chatbots have the ability to make significant improvements in providing mental health care services, they do not come without ethical and technical challenges. Some major concerns include providing inadequate or harmful support, exploiting vulnerable populations, and potentially producing discriminatory advice due to algorithmic bias. Moreover, users do not always fully understand the nature of the relationship they have with chatbots. There can be significant misunderstandings about the exact purpose of the chatbot, particularly in terms of care expectations, its ability to adapt to the particularities of users, and its responsiveness to the needs and resources/treatments that can be offered. Hence, it is imperative that users are aware of the limited therapeutic relationship they can enjoy when interacting with mental health chatbots. Ignorance or misunderstanding of such limitations, or of the role of psychological AI chatbots, may lead to a therapeutic misconception (TM), whereby the user underestimates the restrictions of such technologies and overestimates their ability to provide actual therapeutic support and guidance. TM raises major ethical concerns that can worsen one's mental health, contributing to the global mental health crisis. This paper explores the various ways in which TM can occur, particularly through inaccurate marketing of these chatbots, formation of a digital therapeutic alliance with them, harmful advice due to bias in design and algorithms, and the chatbots' inability to foster autonomy in patients.
11
Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023;44:1020-1034. PMID: 37850937. DOI: 10.1080/01612840.2023.2263579.
Abstract
This narrative review explores the transformative impact of Artificial Intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improving treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions is also discussed. These technological advancements reshape nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses the ethical and responsible use of AI in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging AI's potential in mental health nursing.
Affiliation(s)
- Abdulqadir J Nashwan
- Nursing Department, Hamad Medical Corporation, Doha, Qatar
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
- Suzan Gharib
- Nursing Department, Al-Khaldi Hospital, Amman, Jordan
- Majdi Alhadidi
- Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
- Shaimaa Dawood
- Faculty of Nursing, Alexandria University, Alexandria, Egypt
12
Garcia Valencia OA, Suppadungsuk S, Thongprayoon C, Miao J, Tangpanithandee S, Craici IM, Cheungpasitporn W. Ethical Implications of Chatbot Utilization in Nephrology. J Pers Med 2023;13:1363. PMID: 37763131. PMCID: PMC10532744. DOI: 10.3390/jpm13091363.
Abstract
This comprehensive review critically examines the ethical implications associated with integrating chatbots into nephrology, aiming to identify concerns, propose policies, and offer potential solutions. Acknowledging the transformative potential of chatbots in healthcare, responsible implementation guided by ethical considerations is of the utmost importance. The review underscores the significance of establishing robust guidelines for data collection, storage, and sharing to safeguard privacy and ensure data security. Future research should prioritize defining appropriate levels of data access, exploring anonymization techniques, and implementing encryption methods. Transparent data usage practices and obtaining informed consent are fundamental ethical considerations. Effective security measures, including encryption technologies and secure data transmission protocols, are indispensable for maintaining the confidentiality and integrity of patient data. To address potential biases and discrimination, the review suggests regular algorithm reviews, diversity strategies, and ongoing monitoring. Enhancing the clarity of chatbot capabilities, developing user-friendly interfaces, and establishing explicit consent procedures are essential for informed consent. Striking a balance between automation and human intervention is vital to preserve the doctor-patient relationship. Cultural sensitivity and multilingual support should be considered through chatbot training. To ensure ethical chatbot utilization in nephrology, it is imperative to prioritize the development of comprehensive ethical frameworks encompassing data handling, security, bias mitigation, informed consent, and collaboration. Continuous research and innovation in this field are crucial for maximizing the potential of chatbot technology and ultimately improving patient outcomes.
Affiliation(s)
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawit Tangpanithandee
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Iasmina M. Craici
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
13
Minerva F, Giubilini A. Is AI the Future of Mental Healthcare? Topoi 2023:1-9. PMID: 37361723. PMCID: PMC10230127. DOI: 10.1007/s11245-023-09932-3.
Affiliation(s)
- Alberto Giubilini
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, England
14
Entenberg GA, Dosovitsky G, Aghakhani S, Mostovoy K, Carre N, Marshall Z, Benfica D, Mizrahi S, Testerman A, Rousseau A, Lin G, Bunge EL. User experience with a parenting chatbot micro intervention. Front Digit Health 2023;4:989022. PMID: 36714612. PMCID: PMC9874295. DOI: 10.3389/fdgth.2022.989022.
Abstract
Background: The use of chatbots to address mental health conditions has become increasingly popular in recent years. However, few studies have aimed to teach parenting skills through chatbots, and there are no reports on parental user experience.
Aim: This study aimed to assess the user experience of a parenting chatbot micro intervention designed to teach parents how to praise their children, in a Spanish-speaking country.
Methods: A sample of 89 parents was assigned to the chatbot micro intervention as part of a randomized controlled trial. Completion rates, engagement, satisfaction, net promoter score, and acceptability were analyzed.
Results: 66.3% of the participants completed the intervention. Participants exchanged an average of 49.8 messages (SD = 1.53), provided an average satisfaction score of 4.19 (SD = .79), and reported that they would recommend the chatbot to other parents (net promoter score = 4.63/5; SD = .66). Acceptability was high (ease of use = 4.66 [SD = .73]; comfortability = 4.76 [SD = .46]; lack of technical problems = 4.69 [SD = .59]; interactivity = 4.51 [SD = .77]; usefulness for everyday life = 4.75 [SD = .54]).
Conclusions: Overall, users completed the intervention at a high rate, engaged with the chatbot, were satisfied, would recommend it to others, and reported a high level of acceptability. Chatbots have the potential to teach parenting skills; however, research on the efficacy of parenting chatbot interventions is needed.
Affiliation(s)
- G. A. Entenberg
- Research Department, Fundación ETCI, Buenos Aires, Argentina
- G. Dosovitsky
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- S. Aghakhani
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- K. Mostovoy
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- N. Carre
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- Z. Marshall
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- D. Benfica
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- S. Mizrahi
- Research Department, Fundación ETCI, Buenos Aires, Argentina
- A. Testerman
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- A. Rousseau
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- G. Lin
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- E. L. Bunge
- Children and Adolescents Psychotherapy and Technology Lab (CAPT), Palo Alto University, Palo Alto, CA, United States
- Department of Psychology, International Institute for Internet Interventions i4Health, Palo Alto, CA, United States
15
Ahmed A, Aziz S, Khalifa M, Shah U, Hassan A, Abd-Alrazaq A, Househ M. Thematic Analysis on User Reviews for Depression and Anxiety Chatbot Apps: Machine Learning Approach. JMIR Form Res 2022;6:e27654. PMID: 35275069. PMCID: PMC8956988. DOI: 10.2196/27654.
Abstract
BACKGROUND: Anxiety and depression are among the most prevalent mental health disorders worldwide. Chatbot apps can play an important role in relieving anxiety and depression. Users' reviews of chatbot apps are considered an important source of data for exploring users' opinions and satisfaction.
OBJECTIVE: This study aims to explore users' opinions, satisfaction, and attitudes toward anxiety and depression chatbot apps by conducting a thematic analysis of users' reviews of 11 anxiety and depression chatbot apps collected from the Google Play Store and Apple App Store. In addition, we propose a workflow to provide a methodological approach for future analysis of app review comments.
METHODS: We analyzed 205,581 user review comments from chatbots designed for users with anxiety and depression symptoms. Using the Google Play Scraper and App Store Scraper Python libraries, we extracted the text and metadata. The reviews were divided into positive and negative meta-themes based on users' rating per review. We analyzed the reviews using word frequencies of bigrams (words in pairs). A topic modeling technique, latent Dirichlet allocation (LDA), was applied to identify topics in the reviews, which were then analyzed to detect themes and subthemes.
RESULTS: Thematic analysis was conducted on 5 topics for each sentiment set. Reviews were categorized as positive or negative. For positive reviews, the main themes were confidence and affirmation building; adequate analysis and consultation; caring as a friend; and ease of use. For negative reviews, the results revealed the following themes: usability issues, update issues, privacy, and noncreative conversations.
CONCLUSIONS: Using a machine learning approach, we were able to analyze ≥200,000 comments and categorize them into themes, allowing us to observe users' expectations effectively despite some negative factors. A methodological workflow is provided for the future analysis of review comments.
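A minimal sketch of the review-mining steps the abstract outlines, with placeholder reviews and scikit-learn standing in for the authors' exact tooling (the paper names the scraper libraries and latent Dirichlet allocation, not this code):

```python
# Illustrative only: split reviews into positive/negative sets by star rating,
# then fit an LDA topic model per set over unigram/bigram counts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [  # placeholder (review text, star rating) pairs
    ("feels like talking to a caring friend", 5),
    ("easy to use and genuinely reassuring", 5),
    ("keeps crashing after the latest update", 1),
    ("conversations feel scripted and repetitive", 2),
]
positive = [text for text, stars in reviews if stars >= 4]
negative = [text for text, stars in reviews if stars <= 2]

def top_terms(texts, n_topics=2, top_n=3):
    """Return the highest-weighted terms of each LDA topic."""
    vec = CountVectorizer(ngram_range=(1, 2), stop_words="english")
    counts = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    terms = vec.get_feature_names_out()
    return [[terms[i] for i in topic.argsort()[-top_n:]] for topic in lda.components_]

print("positive topics:", top_terms(positive))
print("negative topics:", top_terms(negative))
```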
Affiliation(s)
- Arfan Ahmed
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Sarah Aziz
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Mohamed Khalifa
- Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Uzair Shah
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Asma Hassan
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Alaa Abd-Alrazaq
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Mowafa Househ
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
16
Abstract
Human-computer interaction (HCI) has contributed to the design and development of efficient, user-friendly, cost-effective, and adaptable digital mental health solutions. However, HCI has not been well integrated into technological developments, resulting in quality and safety concerns. Digital platforms and artificial intelligence (AI) have strong potential to improve prediction, identification, coordination, and treatment by mental health care and suicide prevention services. AI drives web-based and smartphone apps; mostly it is used for self-help and guided cognitive behavioral therapy (CBT) for anxiety and depression. Interactive AI may help real-time screening and treatment in outdated, strained, or lacking mental health care systems. The barriers to using AI in mental health care include accessibility, efficacy, reliability, usability, safety, security, ethics, suitable education and training, and sociocultural adaptability. Apps, real-time machine learning algorithms, immersive technologies, and digital phenotyping are notable prospects. Generally, there is a need for faster and better human factors in combination with machine interaction and automation, higher levels of effectiveness evaluation, and the application of blended, hybrid, or stepped care in an adjunct approach. HCI modeling may assist in designing and developing usable applications, help to effectively recognize, acknowledge, and address the inequities of mental health care and suicide prevention, and assist in the digital therapeutic alliance.