1
Stavropoulos A, Crone DL, Grossmann I. Shadows of wisdom: Classifying meta-cognitive and morally grounded narrative content via large language models. Behav Res Methods 2024; 56:7632-7646. [PMID: 38811519; DOI: 10.3758/s13428-024-02441-0]
Abstract
We investigated large language models' (LLMs) efficacy in classifying complex psychological constructs like intellectual humility, perspective-taking, open-mindedness, and search for a compromise in narratives of 347 Canadian and American adults reflecting on a workplace conflict. Using state-of-the-art models like GPT-4 across few-shot and zero-shot paradigms and RoB-ELoC (RoBERTa-fine-tuned-on-Emotion-with-Logistic-Regression-Classifier), we compared their performance with expert human coders. Results showed robust classification by LLMs, with over 80% agreement and F1 scores above 0.85, and high human-model reliability (median Cohen's κ across top models = .80). RoB-ELoC and few-shot GPT-4 were standout classifiers, although somewhat less effective in categorizing intellectual humility. We offer example workflows for easy integration into research. Our proof-of-concept findings indicate the viability of both open-source and commercial LLMs in automating the coding of complex constructs, potentially transforming social science research.
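The agreement statistics reported in this abstract (percent agreement, F1, Cohen's κ) can be computed with a short script. The sketch below is illustrative only, not taken from the paper's workflows: the label vectors are hypothetical, and the metrics are implemented from scratch for a single binary construct, treating the expert human codes as the reference standard.

```python
# Illustrative sketch (hypothetical labels): human-model agreement metrics
# for one binary construct, e.g. presence/absence of intellectual humility.

def agreement_metrics(human, model):
    """Return (percent agreement, Cohen's kappa, F1) for binary 0/1 labels."""
    n = len(human)
    # Observed agreement: fraction of items where coder and model match
    po = sum(h == m for h, m in zip(human, model)) / n
    # Chance agreement from the marginal label frequencies
    p_h1 = sum(human) / n
    p_m1 = sum(model) / n
    pe = p_h1 * p_m1 + (1 - p_h1) * (1 - p_m1)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    # F1 with human codes as ground truth
    tp = sum(h == 1 and m == 1 for h, m in zip(human, model))
    fp = sum(h == 0 and m == 1 for h, m in zip(human, model))
    fn = sum(h == 1 and m == 0 for h, m in zip(human, model))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return po, kappa, f1

human = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical expert codes
model = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # hypothetical LLM codes
po, kappa, f1 = agreement_metrics(human, model)
print(f"agreement={po:.2f} kappa={kappa:.2f} F1={f1:.2f}")
# → agreement=0.80 kappa=0.58 F1=0.83
```

In practice one would compute these per construct and per model (as the paper does for GPT-4 and RoB-ELoC) rather than for a single toy vector.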
Affiliation(s)
- Igor Grossmann
- Department of Psychology, University of Waterloo, Waterloo, N2L 3G1, Canada.
2
Wang N. The role of psychotherapy apps during teaching solo vocals: The specifics of students' psychological preparation for performing in front of an audience. Acta Psychol (Amst) 2024; 249:104417. [PMID: 39121613; DOI: 10.1016/j.actpsy.2024.104417]
Abstract
This study aimed to determine the effectiveness of a self-help application in reducing performance-related anxiety in solo vocal students at higher education institutions. Participants (n = 219) used the mobile application for 6 weeks. A statistically significant intervention effect was found for the Negative Cognitions, Psychological Vulnerability, and Anxiety Perception constructs. The study also examined the influence of sociodemographic and personal characteristics on anxiety: gender, graduate status, and self-efficacy were statistically significant variables in the use of the psychological self-help application, whereas no significant effect of performance experience was found. Psychological self-help applications can thus serve as a low-threshold intervention in vocal/music education to reduce anxiety symptoms. The findings add new data to approaches for treating anxiety and expand the understanding of the characteristic features of singer training.
Affiliation(s)
- Ning Wang
- College of Music and Dance, Henan Normal University, No. 46, Jianshe East Road, Xinxiang 453007, Henan Province, China.
3
Sriharan A, Sekercioglu N, Mitchell C, Senkaiahliyan S, Hertelendy A, Porter T, Banaszak-Holl J. Leadership for AI Transformation in Health Care Organization: Scoping Review. J Med Internet Res 2024; 26:e54556. [PMID: 39009038; PMCID: PMC11358667; DOI: 10.2196/54556]
Abstract
BACKGROUND The leaders of health care organizations are grappling with rising expenses and surging demand for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality. OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations. METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), covering articles published from 2015 to June 2023 that discuss AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within health care organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic health care landscape shaped by complex regulatory, technological, and organizational factors. CONCLUSIONS Leading AI transformation in health care requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains. Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the health care environment.
Affiliation(s)
- Abi Sriharan
- Krembil Centre for Health Management and Leadership, Schulich School of Business, York University, Toronto, ON, Canada
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Nigar Sekercioglu
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Cheryl Mitchell
- Gustavson School of Business, University of Victoria, Victoria, BC, Canada
- Senthujan Senkaiahliyan
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Attila Hertelendy
- College of Business, Florida International University, FL, United States
- Tracy Porter
- Department of Management, Cleveland State University, Cleveland, OH, United States
- Jane Banaszak-Holl
- Department of Health Services Administration, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, United States
4
Mei Z, Jin S, Li W, Zhang S, Cheng X, Li Y, Wang M, Song Y, Tu W, Yin H, Wang Q, Bai Y, Xu G. Ethical risks in robot health education: A qualitative study. Nurs Ethics 2024:9697330241270829. [PMID: 39138639; DOI: 10.1177/09697330241270829]
Abstract
BACKGROUND As health education robots may become a significant support force in nursing practice in the future, it is imperative to adhere to the European Union's concept of "Responsible Research and Innovation" (RRI) and to reflect deeply on the ethical risks hidden in the process of intelligent robotic health education. AIM This study explores the perceptions of nursing professionals regarding the potential ethical risks associated with the clinical practice of intelligent robotic health education. RESEARCH DESIGN This study adopts a descriptive phenomenological approach, employing Colaizzi's seven-step method for data analysis. PARTICIPANTS AND RESEARCH CONTEXT We conducted semi-structured interviews with 17 nursing professionals from tertiary comprehensive hospitals in China. ETHICAL CONSIDERATIONS This study was approved by the Ethics Committee of the Second Affiliated Hospital of Nanjing University of Chinese Medicine, Jiangsu Provincial Second Chinese Medicine Hospital. FINDINGS Nursing personnel, adhering to the principles of RRI and the concept of "person-centered" care, critically reflected on the potential ethical risks inherent in robotic health education. This reflection identified six themes: (a) threats to human dignity, (b) concerns about patient safety, (c) apprehensions about privacy disclosure, (d) worries about implicit burdens, (e) concerns about responsibility attribution, and (f) expectations for social support. CONCLUSIONS This study focuses on health education robots, which are perceived to have minimal ethical risks, and provides rich and detailed insights into the ethical risks associated with robotic health education. Even seemingly safe health education robots elicit significant concerns among professionals regarding their safety and ethics in clinical practice. Moving forward, it is essential to remain attentive to the potential negative impacts of robots and to actively address them.
Affiliation(s)
- ZiQi Mei
- Nanjing University of Chinese Medicine
- SuJu Zhang
- The Second Affiliated Hospital of Nanjing University of Chinese Medicine
- XiRong Cheng
- The Second Affiliated Hospital of Nanjing University of Chinese Medicine
- YiTing Li
- Nanjing University of Chinese Medicine
- Meng Wang
- Nanjing University of Chinese Medicine
- Qing Wang
- Nanjing University of Chinese Medicine
- YaMei Bai
- Nanjing University of Chinese Medicine
- GuiHua Xu
- Nanjing University of Chinese Medicine
5
Zhang K, Zhou HY, Baptista-Hon DT, Gao Y, Liu X, Oermann E, Xu S, Jin S, Zhang J, Sun Z, Yin Y, Razmi RM, Loupy A, Beck S, Qu J, Wu J. Concepts and applications of digital twins in healthcare and medicine. Patterns (N Y) 2024; 5:101028. [PMID: 39233690; PMCID: PMC11368703; DOI: 10.1016/j.patter.2024.101028]
Abstract
The digital twin (DT) is a concept widely used in industry to create digital replicas of physical objects or systems. A dynamic, bi-directional link between the physical entity and its digital counterpart enables real-time updates of the digital entity and prediction of perturbations affecting the physical object's function. Applications of DTs in healthcare and medicine are attractive prospects with the potential to revolutionize patient diagnosis and treatment. However, challenges including technical obstacles, biological heterogeneity, and ethical considerations make this goal difficult to achieve. Advances in multi-modal deep learning methods, embodied AI agents, and the metaverse may mitigate some of these difficulties. Here, we discuss the basic concepts underlying DTs, the requirements for implementing DTs in medicine, and their current and potential healthcare uses. We also offer our perspective on five hallmarks of a healthcare DT system to advance research in this field.
Affiliation(s)
- Kang Zhang
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
- Hong-Yu Zhou
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02138, USA
- Daniel T. Baptista-Hon
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- School of Medicine, University of Dundee, DD1 9SY Dundee, UK
- Yuanxu Gao
- Department of Big Data and Biomedical AI, College of Future Technology, Peking University, Beijing 100000, China
- Xiaohong Liu
- Cancer Institute, University College London, WC1E 6BT London, UK
- Eric Oermann
- NYU Langone Medical Center, New York University, New York, NY 10016, USA
- Sheng Xu
- Department of Chemical Engineering and Nanoengineering, University of California San Diego, San Diego, CA 92093, USA
- Shengwei Jin
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Jian Zhang
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Zhuo Sun
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
- Yun Yin
- Faculty of Business and Health Science Institute, City University of Macau, Macau 999078, China
- Alexandre Loupy
- Université Paris Cité, INSERM U970 PARCC, Paris Institute for Transplantation and Organ Regeneration, 75015 Paris, France
- Stephan Beck
- Cancer Institute, University College London, WC1E 6BT London, UK
- Jia Qu
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Joseph Wu
- Cardiovascular Research Institute, Stanford University, Stanford, CA 94305, USA
- International Consortium of Digital Twins in Medicine
- National Clinical Eye Research Center, Eye Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Clinical Data Science, Wenzhou Medical University, Wenzhou 325000, China
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau 999078, China
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02138, USA
- Department of Big Data and Biomedical AI, College of Future Technology, Peking University, Beijing 100000, China
- Cancer Institute, University College London, WC1E 6BT London, UK
- NYU Langone Medical Center, New York University, New York, NY 10016, USA
- Department of Chemical Engineering and Nanoengineering, University of California San Diego, San Diego, CA 92093, USA
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children’s Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou 325000, China
- Faculty of Business and Health Science Institute, City University of Macau, Macau 999078, China
- Zoi Capital, New York, NY 10013, USA
- Université Paris Cité, INSERM U970 PARCC, Paris Institute for Transplantation and Organ Regeneration, 75015 Paris, France
- Cardiovascular Research Institute, Stanford University, Stanford, CA 94305, USA
- School of Medicine, University of Dundee, DD1 9SY Dundee, UK
6
Bhugra D, Liebrenz M, Ventriglio A, Ng R, Javed A, Kar A, Chumakov E, Moura H, Tolentino E, Gupta S, Ruiz R, Okasha T, Chisolm MS, Castaldelli-Maia J, Torales J, Smith A. World Psychiatric Association-Asian Journal of Psychiatry Commission on Public Mental Health. Asian J Psychiatr 2024; 98:104105. [PMID: 38861790; DOI: 10.1016/j.ajp.2024.104105]
Abstract
Although there is considerable evidence showing that the prevention of mental illnesses and adverse outcomes and mental health promotion can help people lead better and more functional lives, public mental health remains overlooked in the broader contexts of psychiatry and public health. Likewise, in undergraduate and postgraduate medical curricula, prevention and mental health promotion have often been ignored. However, there has been a recent increase in interest in public mental health, including an emphasis on the prevention of psychiatric disorders and improving individual and community wellbeing to support life trajectories, from childhood through to adulthood and into older age. These lifespan approaches have significant potential to reduce the onset of mental illnesses and the related burdens for the individual and communities, as well as mitigating social, economic, and political costs. Informed by principles of social justice and respect for human rights, this may be especially important for addressing salient problems in communities with distinct vulnerabilities, where prominent disadvantages and barriers for care delivery exist. Therefore, this Commission aims to address these topics, providing a narrative overview of relevant literature and suggesting ways forward. Additionally, proposals for improving mental health and preventing mental illnesses and adverse outcomes are presented, particularly amongst at-risk populations.
Affiliation(s)
- Dinesh Bhugra
- Institute of Psychiatry, Psychology and Neurosciences, Kings College, London SE5 8AF, United Kingdom.
- Michael Liebrenz
- Department of Forensic Psychiatry, University of Bern, Bern, Switzerland
- Roger Ng
- World Psychiatric Association, Geneva, Switzerland
- Anindya Kar
- Advanced Neuropsychiatry Institute, Kolkata, India
- Egor Chumakov
- Department of Psychiatry & Addiction, St Petersburg State University, St Petersburg, Russia
- Susham Gupta
- East London NHS Foundation Trust, London, United Kingdom
- Roxanna Ruiz
- University of Francisco Marroquin, Guatemala City, Guatemala
- Alexander Smith
- Department of Forensic Psychiatry, University of Bern, Bern, Switzerland
7
Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J Med Internet Res 2024; 26:e56930. [PMID: 39042446; PMCID: PMC11303905; DOI: 10.2196/56930]
Abstract
BACKGROUND Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. OBJECTIVE This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. METHODS A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. RESULTS The review categorized chatbot roles into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion; and provision of administrative assistance to health care providers. User groups spanned patients with chronic conditions and patients with cancer; individuals focused on lifestyle improvement; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were likewise classified into 2 themes: improvement of health care quality and efficiency, and cost-effectiveness in health care delivery. The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. CONCLUSIONS Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Affiliation(s)
- Moustafa Laymouna
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Yuanchao Ma
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Department of Biomedical Engineering, Polytechnique Montréal, Montreal, QC, Canada
- David Lessard
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Tibor Schuster
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Kim Engler
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
- Bertrand Lebouché
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Infectious Diseases and Immunity in Global Health Program, Research Institute of McGill University Health Centre, Montreal, QC, Canada
- Chronic and Viral Illness Service, Division of Infectious Disease, Department of Medicine, McGill University Health Centre, Montreal, QC, Canada
8
Hoek S, Metselaar S, Ploem C, Bak M. Promising for patients or deeply disturbing? The ethical and legal aspects of deepfake therapy. J Med Ethics 2024:jme-2024-109985. [PMID: 38981659; DOI: 10.1136/jme-2024-109985]
Abstract
Deepfakes are hyper-realistic but fabricated videos created with the use of artificial intelligence. In the context of psychotherapy, the first studies on using deepfake technology are emerging, with potential applications including grief counselling and treatment for sexual violence-related trauma. This paper explores these applications from the perspective of medical ethics and health law. First, we question whether deepfake therapy can truly constitute good care. Important risks include exposing the patient to dangerous situations or 'triggers' during data collection for the creation of a deepfake and, once deepfake therapy has started, risks of overattachment and blurring of reality, which can complicate the grieving process or alter perceptions of perpetrators. Therapists must mitigate these risks, but more research is needed to evaluate deepfake therapy's efficacy before it can be used at all. Second, we address the implications for the person depicted in the deepfake. We describe how privacy and portrait law apply and argue that the legitimate interests of those receiving therapy should outweigh the interests of the depicted, as long as the therapy is an effective, 'last resort' treatment option overseen by a therapist and the deepfakes are handled carefully. We suggest specific preventative measures that can be taken to protect the depicted person's privacy. Finally, we call for qualitative research with patients and therapists to explore dependencies and other unintended consequences. In conclusion, while deepfake therapy holds promise, the competing interests and ethicolegal complexities demand careful consideration and further investigation alongside the development and implementation of this technology.
Affiliation(s)
- Saar Hoek
- Law Centre for Health and Life, Faculty of Law, University of Amsterdam, Amsterdam, Netherlands
- Suzanne Metselaar
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
- Corrette Ploem
- Law Centre for Health and Life, Faculty of Law, University of Amsterdam, Amsterdam, Netherlands
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
- Marieke Bak
- Department of Ethics, Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
- Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany
9
Abid A, Baxter SL. Breaking Barriers in Behavioral Change: The Potential of Artificial Intelligence-Driven Motivational Interviewing. J Glaucoma 2024; 33:473-477. [PMID: 38595151; DOI: 10.1097/ijg.0000000000002382]
Abstract
Patient outcomes in ophthalmology are greatly influenced by adherence and patient participation, which can be particularly challenging in diseases like glaucoma, where medication regimens can be complex. A well-studied and evidence-based intervention for behavioral change is motivational interviewing (MI), a collaborative and patient-centered counseling approach that has been shown to improve medication adherence in glaucoma patients. However, there are many barriers to clinicians being able to provide motivational interviewing in-office, including short visit durations within high-volume ophthalmology clinics and inadequate billing structures for counseling. Recently, Large Language Models (LLMs), a type of artificial intelligence, have advanced to the point that they can follow instructions and carry on coherent conversations, offering novel solutions to a wide range of clinical problems. In this paper, we discuss the potential of LLMs to provide chatbot-driven MI to improve adherence in glaucoma patients and provide an example conversation as a proof of concept. We discuss the advantages of AI-driven MI, such as demonstrated effectiveness, scalability, and accessibility. We also explore the risks and limitations, including issues of safety and privacy, as well as the factual inaccuracies and hallucinations to which LLMs are susceptible. Domain-specific training may be needed to ensure the accuracy and completeness of information provided in subspecialty areas such as glaucoma. Despite the current limitations, AI-driven motivational interviewing has the potential to offer significant improvements in adherence and should be further explored to maximally leverage the potential of artificial intelligence for our patients.
Affiliation(s)
- Areeba Abid
- Emory University School of Medicine, Atlanta, GA
- Sally L Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA
10
Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. [PMID: 39022380; PMCID: PMC11250714; DOI: 10.1007/s41649-024-00292-7]
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare" and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) held by the various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
11
Franco D'Souza R, Mathew M, Amanullah S, Edward Thornton J, Mishra V, E M, Louis Palatty P, Surapaneni KM. Navigating merits and limits on the current perspectives and ethical challenges in the utilization of artificial intelligence in psychiatry - An exploratory mixed methods study. Asian J Psychiatr 2024; 97:104067. [PMID: 38718518 DOI: 10.1016/j.ajp.2024.104067] [Received: 04/25/2024] [Accepted: 04/29/2024] [Indexed: 06/16/2024]
Abstract
BACKGROUND The integration of Artificial Intelligence (AI) in psychiatry presents opportunities for enhancing patient care but raises significant ethical concerns and challenges in clinical application. Addressing these challenges necessitates an informed and ethically aware psychiatric workforce capable of integrating AI into practice responsibly. METHODS A mixed-methods study was conducted to assess the outcomes of the "CONNECT with AI" - (Collaborative Opportunity to Navigate and Negotiate Ethical Challenges and Trials with Artificial Intelligence) workshop, aimed at exploring AI's ethical implications and applications in psychiatry. This workshop featured presentations, discussions, and scenario analyses focusing on AI's role in mental health care. Pre- and post-workshop questionnaires and focus group discussions evaluated participants' perspectives, and ethical understanding regarding AI in psychiatry. RESULTS Participants exhibited a cautious optimism towards AI, recognizing its potential to augment mental health care while expressing concerns over ethical usage, patient-doctor relationships, and AI's practical application in patient care. The workshop significantly improved participants' ethical understanding, highlighting a substantial knowledge gap and the need for further education in AI among psychiatrists. CONCLUSION The study underscores the necessity of continuous education and ethical guideline development for psychiatrists in the era of AI, emphasizing collaborative efforts in AI system design to ensure they meet clinical needs ethically and effectively. Future initiatives should aim to broaden psychiatrists' exposure to AI, fostering a deeper understanding and integration of AI technologies in psychiatric practice.
Affiliation(s)
- Russell Franco D'Souza
- Department of Education, UNESCO Chair in Bioethics, Melbourne, Australia; Department of Organizational Psychological Medicine, International Institute of Organisational Psychological Medicine, 71 Cleeland Street, Dandenong, Melbourne, Victoria 3175, Australia
- Mary Mathew
- Department of Pathology, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Tiger Circle Road, Madhav Nagar, Manipal, Karnataka 576104, India
- Shabbir Amanullah
- Division of Geriatric Psychiatry, Queen's University, Providence Care Hospital, 752 King Street West, Postal Bag 603, Kingston, ON K7L 7X3, Canada
- Joseph Edward Thornton
- Department of Psychiatry, University of Florida College of Medicine, Gainesville, FL, USA
- Vedprakash Mishra
- School of Higher Education & Research, Datta Meghe Institute of Higher Education and Research (Deemed to be University), Nagpur, Maharashtra, India
- Mohandas E
- Department of Psychiatry, Sun Medical and Research Centre, Thrissur, Kerala 680 001, India
- Princy Louis Palatty
- Department of Pharmacology, Amrita Institute of Medical Sciences, Amrita Vishwa Vidyapeetham, Elamakkara P.O., Kochi, Kerala 682 041, India
- Krishna Mohan Surapaneni
- Department of Biochemistry, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu 600 123, India; Department of Medical Education, Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai, Tamil Nadu 600 123, India
12
Lee GC, Platow MJ, Cruwys T. Listening quality leads to greater working alliance and well-being: Testing a social identity model of working alliance. Br J Clin Psychol 2024. [PMID: 38946045 DOI: 10.1111/bjc.12489] [Received: 05/04/2024] [Accepted: 06/19/2024] [Indexed: 07/02/2024]
Abstract
OBJECTIVES Characterization of psychotherapy as the "talking cure" de-emphasizes the importance of an active listener on the curative effect of talking. We test whether the working alliance and its benefits emerge from expression of voice, per se, or whether active listening is needed. We examine the role of listening in a social identity model of working alliance. METHODS University student participants in a laboratory experiment spoke about stress management to another person (a confederate student) who either did or did not engage in active listening. Participants reported their perceptions of alliance, key social-psychological variables, and well-being. RESULTS Active listening led to significantly higher ratings of alliance, procedural justice, social identification, and identity leadership, compared to no active listening. Active listening also led to greater positive affect and satisfaction. Ultimately, an explanatory path model was supported in which active listening predicted working alliance through social identification, identity leadership, and procedural justice. CONCLUSIONS Listening quality enhances alliance and well-being in a manner consistent with a social identity model of working alliance, and is a strategy for facilitating alliance in therapy.
Affiliation(s)
- Georgina C Lee
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Michael J Platow
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Tegan Cruwys
- School of Medicine and Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
13
Omar M, Soffer S, Charney AW, Landi I, Nadkarni GN, Klang E. Applications of large language models in psychiatry: a systematic review. Front Psychiatry 2024; 15:1422807. [PMID: 38979501 PMCID: PMC11228775 DOI: 10.3389/fpsyt.2024.1422807] [Received: 04/24/2024] [Accepted: 06/05/2024] [Indexed: 07/10/2024]
Abstract
Background With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores the current application of LLMs, such as ChatGPT, in the field of psychiatry. Methods We followed PRISMA guidelines and searched through PubMed, Embase, Web of Science, and Scopus, up until March 2024. Results From 771 retrieved articles, we included 16 that directly examine LLMs' use in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also points out their limitations, such as difficulties with complex cases and potential underestimation of suicide risks. Conclusion Early research in psychiatry reveals LLMs' versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.
Affiliation(s)
- Mahmud Omar
- Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Shelly Soffer
- Internal Medicine B, Assuta Medical Center, Ashdod, Israel
- Ben-Gurion University of the Negev, Be'er Sheva, Israel
- Isotta Landi
- Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Girish N Nadkarni
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Eyal Klang
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
14
Khosravi M, Alzahrani AA, Muhammed TM, Hjazi A, Abbas HH, AbdRabou MA, Mohmmed KH, Ghildiyal P, Yumashev A, Elawady A, Sarabandi S. Management of Refractory Functional Gastrointestinal Disorders: What Role Should Psychiatrists Have? Pharmacopsychiatry 2024. [PMID: 38897220 DOI: 10.1055/a-2331-7684] [Indexed: 06/21/2024]
Abstract
Psychiatric and psychological problems are increasingly recognized as paramount aspects of the clinical modulation and manifestation of both the central nervous and digestive systems, and addressing them may help restore balance between the two. The present narrative review aims to provide an elaborate description of the bio-psycho-social facets of refractory functional gastrointestinal disorders, psychiatrists' role, specific psychiatric approaches, and the latest psychiatric and psychological perspectives on practical therapeutic management. In this respect, the keywords "psyche," "psychiatry," "psychology," "psychiatrist," "psychotropic," and "refractory functional gastrointestinal disorders" were searched in relevant English publications from January 1, 1950, to March 1, 2024, in the PubMed, Web of Science, Scopus, EMBASE, Cochrane Library, and Google Scholar databases. A narrative synthesis was then used to integrate the material into a cohesive account. The current literature recognizes brain-gut axis modulation as a therapeutic target for refractory functional gastrointestinal disorders and the bio-psycho-social model as an integrated framework to explain disease pathogenesis. The results also reveal some evidence affirming the benefits of psychotropic medications and psychological therapies in refractory functional gastrointestinal disorders, even when psychiatric symptoms are absent. Psychiatrists therefore need to pay closer attention to both the assessment and treatment of patients with refractory functional gastrointestinal disorders, alongside educating and training the practitioners who care for these patients.
Affiliation(s)
- Mohsen Khosravi
- Department of Psychiatry, School of Medicine, Zahedan University of Medical Sciences, Zahedan, Iran
- Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Thikra M Muhammed
- Department of Biotechnology, College of Applied Sciences, University of Fallujah, Al-Anbar, Iraq
- Ahmed Hjazi
- Department of Medical Laboratory, College of Applied Medical Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Huda H Abbas
- National University of Science and Technology, Dhi Qar, Iraq
- Mervat A AbdRabou
- Department of Biology, College of Science, Jouf University, Sakaka, Saudi Arabia
- Pallavi Ghildiyal
- Uttaranchal Institute of Pharmaceutical Sciences, Uttaranchal University, Dehradun, India
- Alexey Yumashev
- Department of Prosthetic Dentistry, Sechenov First Moscow State Medical University, Moscow, Russia
- Ahmed Elawady
- College of Technical Engineering, The Islamic University, Najaf, Iraq
- College of Technical Engineering, The Islamic University of Al Diwaniyah, Al Diwaniyah, Iraq
- College of Technical Engineering, The Islamic University of Babylon, Babylon, Iraq
- Sahel Sarabandi
- Department of Clinical Biochemistry, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
15
Palmier C, Rigaud AS, Ogawa T, Wieching R, Dacunha S, Barbarossa F, Stara V, Bevilacqua R, Pino M. Identification of Ethical Issues and Practice Recommendations Regarding the Use of Robotic Coaching Solutions for Older Adults: Narrative Review. J Med Internet Res 2024; 26:e48126. [PMID: 38888953 PMCID: PMC11220435 DOI: 10.2196/48126] [Received: 04/12/2023] [Revised: 12/22/2023] [Accepted: 03/12/2024] [Indexed: 06/20/2024]
Abstract
BACKGROUND Technological advances in robotics, artificial intelligence, cognitive algorithms, and internet-based coaches have contributed to the development of devices capable of responding to some of the challenges resulting from demographic aging. Numerous studies have explored the use of robotic coaching solutions (RCSs) for supporting healthy behaviors in older adults and have shown their benefits regarding the quality of life and functional independence of older adults at home. However, the use of RCSs by individuals who are potentially vulnerable raises many ethical questions. Establishing an ethical framework to guide the development, use, and evaluation practices regarding RCSs for older adults seems highly pertinent. OBJECTIVE The objective of this paper was to highlight the ethical issues related to the use of RCSs for health care purposes among older adults and draft recommendations for researchers and health care professionals interested in using RCSs for older adults. METHODS We conducted a narrative review of the literature to identify publications including an analysis of the ethical dimension and recommendations regarding the use of RCSs for older adults. We used a qualitative analysis methodology inspired by a Health Technology Assessment model. We included all article types such as theoretical papers, research studies, and reviews dealing with ethical issues or recommendations for the implementation of these RCSs in a general population, particularly among older adults, in the health care sector and published after 2011 in either English or French. The review was performed between August and December 2021 using the PubMed, CINAHL, Embase, Scopus, Web of Science, IEEE Explore, SpringerLink, and PsycINFO databases. Selected publications were analyzed using the European Network of Health Technology Assessment Core Model (version 3.0) around 5 ethical topics: benefit-harm balance, autonomy, privacy, justice and equity, and legislation. 
RESULTS In the 25 publications analyzed, the most cited ethical concerns were the risk of accidents, lack of reliability, loss of control, risk of deception, risk of social isolation, data confidentiality, and liability in case of safety problems. Recommendations included collecting the opinion of target users, collecting their consent, and training professionals in the use of RCSs. Proper data management, anonymization, and encryption appeared to be essential to protect RCS users' personal data. CONCLUSIONS Our analysis supports the interest in using RCSs for older adults because of their potential contribution to individuals' quality of life and well-being. This analysis highlights many ethical issues linked to the use of RCSs for health-related goals. Future studies should consider the organizational consequences of the implementation of RCSs and the influence of cultural and socioeconomic specificities of the context of experimentation. We suggest implementing a scalable ethical and regulatory framework to accompany the development and implementation of RCSs for various aspects related to the technology, individual, or legal aspects.
Affiliation(s)
- Cécilia Palmier
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Anne-Sophie Rigaud
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Toshimi Ogawa
- Smart-Aging Research Center, Tohoku University, Sendai, Japan
- Rainer Wieching
- Institute for New Media & Information Systems, University of Siegen, Siegen, Germany
- Sébastien Dacunha
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
- Federico Barbarossa
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Vera Stara
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Roberta Bevilacqua
- Scientific Direction, Istituto Nazionale di Ricovero e Cura per Anziani, Ancona, Italy
- Maribel Pino
- Maladie d'Alzheimer, Université de Paris, Paris, France
- Service de Gériatrie 1 & 2, Hôpital Broca, Assistance Publique - Hôpitaux de Paris, Paris, France
16
Ghadiri P, Yaffe MJ, Adams AM, Abbasgholizadeh-Rahimi S. Primary care physicians' perceptions of artificial intelligence systems in the care of adolescents' mental health. BMC Prim Care 2024; 25:215. [PMID: 38872128 PMCID: PMC11170885 DOI: 10.1186/s12875-024-02417-1] [Received: 03/15/2023] [Accepted: 05/06/2024] [Indexed: 06/15/2024]
Abstract
BACKGROUND Given that mental health problems in adolescence may have lifelong impacts, the role of primary care physicians (PCPs) in identifying and managing these issues is important. Artificial Intelligence (AI) may offer solutions to the current challenges involved in mental health care. We therefore explored PCPs' challenges in addressing adolescents' mental health, along with their attitudes towards using AI to assist them in their tasks. METHODS We used purposeful sampling to recruit PCPs for a virtual Focus Group (FG). The virtual FG lasted 75 minutes and was moderated by two facilitators. A live transcription was produced by the online meeting software. Transcribed data were cleaned, followed by a priori and inductive coding and thematic analysis. RESULTS We reached out to 35 potential participants via email. Seven agreed to participate, and ultimately four took part in the FG. PCPs perceived that AI systems have the potential to be cost-effective, credible, and useful in collecting large amounts of patient data. They envisioned AI assisting with tasks such as diagnoses and establishing treatment plans. However, they feared that reliance on AI might result in a loss of clinical competency. PCPs wanted AI systems to be user-friendly, and they were willing to assist in achieving this goal if it was within their scope of practice and they were compensated for their contribution. They stressed a need for regulatory bodies to deal with medicolegal and ethical aspects of AI, and for clear guidelines to reduce or eliminate the potential of patient harm. CONCLUSION This study provides the groundwork for assessing PCPs' perceptions of AI systems' features and characteristics, potential applications, possible negative aspects, and requirements for using them. A future study of adolescents' perspectives on integrating AI into mental healthcare might contribute a fuller understanding of the potential of AI for this population.
Affiliation(s)
- Pooria Ghadiri
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Mila-Quebec AI Institute, Montréal, QC, Canada
- Mark J Yaffe
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- St. Mary's Hospital Center of the Integrated University Centre for Health and Social Services of West Island of Montreal, Montréal, QC, Canada
- Alayne Mary Adams
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Mila-Quebec AI Institute, Montréal, QC, Canada
- Lady Davis Institute for Medical Research (LDI), Jewish General Hospital, Montréal, QC, Canada
17
Liu K. Artificial Intelligence and Ethical Frameworks in Pediatrics. JAMA Pediatr 2024; 178:626-627. [PMID: 38587862 DOI: 10.1001/jamapediatrics.2024.0510] [Indexed: 04/09/2024]
Affiliation(s)
- Kai Liu
- Comprehensive Pediatrics & Pulmonary and Critical Care Medicine, Kunming Children's Hospital, Kunming, China
18
Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy. JMIR Ment Health 2024; 11:e54781. [PMID: 38787297 PMCID: PMC11137430 DOI: 10.2196/54781] [Received: 11/22/2023] [Revised: 03/24/2024] [Accepted: 04/18/2024] [Indexed: 05/25/2024]
Abstract
This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.
Affiliation(s)
- Yuval Haber
- The PhD Program of Hermeneutics and Cultural Studies, Interdisciplinary Studies Unit, Bar-Ilan University, Ramat Gan, Israel
- Dorit Hadar-Shoval
- Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Zohar Elyoseph
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- The Center for Psychobiological Research, Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
19
Jebreen K, Radwan E, Kammoun-Rebai W, Alattar E, Radwan A, Safi W, Radwan W, Alajez M. Perceptions of undergraduate medical students on artificial intelligence in medicine: mixed-methods survey study from Palestine. BMC Med Educ 2024; 24:507. [PMID: 38714993 PMCID: PMC11077786 DOI: 10.1186/s12909-024-05465-4] [Received: 08/01/2023] [Accepted: 04/24/2024] [Indexed: 05/12/2024]
Abstract
BACKGROUND The current applications of artificial intelligence (AI) in medicine continue to attract the attention of medical students. This study aimed to identify undergraduate medical students' attitudes toward AI in medicine, explore present AI-related training opportunities, investigate the need for AI inclusion in medical curricula, and determine preferred methods for teaching AI curricula. METHODS This study uses a mixed-method cross-sectional design, including a quantitative study and a qualitative study, targeting Palestinian undergraduate medical students in the academic year 2022-2023. In the quantitative part, we recruited a convenience sample of undergraduate medical students from universities in Palestine from June 15, 2022, to May 30, 2023. We collected data by using an online, well-structured, and self-administered questionnaire with 49 items. In the qualitative part, 15 undergraduate medical students were interviewed by trained researchers. Descriptive statistics and an inductive content analysis approach were used to analyze quantitative and qualitative data, respectively. RESULTS From a total of 371 invitations sent, 362 responses were received (response rate = 97.5%), and 349 were included in the analysis. The mean age of participants was 20.38 ± 1.97, with 40.11% (140) in their second year of medical school. Most participants (268, 76.79%) did not receive formal education on AI before or during medical study. About two-thirds of students strongly agreed or agreed that AI would become common in the future (67.9%, 237) and would revolutionize medical fields (68.7%, 240). Participants stated that they had not previously acquired training in the use of AI in medicine during formal medical education (260, 74.5%), confirming a dire need to include AI training in medical curricula (247, 70.8%). Most participants (264, 75.7%) think that learning opportunities for AI in medicine have not been adequate; therefore, it is very important to study more about employing AI in medicine (228, 65.3%). Male students (3.15 ± 0.87) had higher perception scores than female students (2.81 ± 0.86) (p < 0.001). The main themes that resulted from the qualitative analysis of the interview questions were an absence of AI learning opportunities, the necessity of including AI in medical curricula, optimism towards the future of AI in medicine, and expected challenges related to AI in medical fields. CONCLUSION Medical students lack access to educational opportunities for AI in medicine; therefore, AI should be included in formal medical curricula in Palestine.
Affiliation(s)
- Kamel Jebreen
- Department of Mathematics, Palestine Technical University - Kadoorie, Hebron, Palestine
- Department of Mathematics, An-Najah National University, Nablus, Palestine
- Unité de Recherche Clinique Saint-Louis Fernand-Widal Lariboisière, APHP, Paris, France
- Eqbal Radwan
- Department of Biology, Faculty of Science, Islamic University of Gaza, Gaza, Palestine
- Etimad Alattar
- Department of Biology, Faculty of Science, Islamic University of Gaza, Gaza, Palestine
- Afnan Radwan
- Faculty of Education, Islamic University of Gaza, Gaza, Palestine
- Walaa Safi
- Department of Biotechnology, Faculty of Science, Islamic University of Gaza, Gaza, Palestine
- Walaa Radwan
- University College of Applied Sciences - Gaza, Gaza, Palestine
20
Pratt N, Madhavan R, Weleff J. Digital Dialogue-How Youth Are Interacting With Chatbots. JAMA Pediatr 2024; 178:429-430. [PMID: 38497982 DOI: 10.1001/jamapediatrics.2024.0084] [Indexed: 03/19/2024]
Abstract
This Viewpoint describes the use of large language model chatbots in social, educational, and therapeutic settings and the need to assess when children are developmentally ready to engage with them.
Affiliation(s)
- Nicholas Pratt
- Department of Psychiatry, Yale University School of Medicine, New Haven, Connecticut
- Ricky Madhavan
- Department of Psychiatry, Yale University School of Medicine, New Haven, Connecticut
- Jeremy Weleff
- Department of Psychiatry, Yale University School of Medicine, New Haven, Connecticut
- Department of Psychiatry and Psychology, Center for Behavioral Health, Neurological Institute, Cleveland Clinic, Cleveland, Ohio
21
Maccaro A, Stokes K, Statham L, He L, Williams A, Pecchia L, Piaggio D. Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices. J Pers Med 2024; 14:443. [PMID: 38793025 PMCID: PMC11121798 DOI: 10.3390/jpm14050443] [Received: 03/20/2024] [Revised: 04/12/2024] [Accepted: 04/16/2024] [Indexed: 05/26/2024]
Abstract
The use of AI in healthcare has sparked much debate among philosophers, ethicists, regulators and policymakers who raised concerns about the implications of such technologies. The presented scoping review captures the progression of the ethical and legal debate and the proposed ethical frameworks available concerning the use of AI-based medical technologies, capturing key themes across a wide range of medical contexts. The ethical dimensions are synthesised in order to produce a coherent ethical framework for AI-based medical technologies, highlighting how transparency, accountability, confidentiality, autonomy, trust and fairness are the top six recurrent ethical issues. The literature also highlighted how it is essential to increase ethical awareness through interdisciplinary research, such that researchers, AI developers and regulators have the necessary education/competence or networks and tools to ensure proper consideration of ethical matters in the conception and design of new AI technologies and their norms. Interdisciplinarity throughout research, regulation and implementation will help ensure AI-based medical devices are ethical, clinically effective and safe. Achieving these goals will facilitate successful translation of AI into healthcare systems, which currently is lagging behind other sectors, to ensure timely achievement of health benefits to patients and the public.
Affiliation(s)
- Alessia Maccaro
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Katy Stokes
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Laura Statham
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- Lucas He
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Faculty of Engineering, Imperial College, London SW7 1AY, UK
- Arthur Williams
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Leandro Pecchia
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
- Intelligent Technologies for Health and Well-Being: Sustainable Design, Management and Evaluation, Faculty of Engineering, Università Campus Bio-Medico Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Davide Piaggio
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK
Collapse
|
22
|
van Houtum LAEM, Baaré WFC, Beckmann CF, Castro-Fornieles J, Cecil CAM, Dittrich J, Ebdrup BH, Fegert JM, Havdahl A, Hillegers MHJ, Kalisch R, Kushner SA, Mansuy IM, Mežinska S, Moreno C, Muetzel RL, Neumann A, Nordentoft M, Pingault JB, Preisig M, Raballo A, Saunders J, Sprooten E, Sugranyes G, Tiemeier H, van Woerden GM, Vandeleur CL, van Haren NEM. Running in the FAMILY: understanding and predicting the intergenerational transmission of mental illness. Eur Child Adolesc Psychiatry 2024:10.1007/s00787-024-02423-9. [PMID: 38613677 DOI: 10.1007/s00787-024-02423-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Accepted: 03/15/2024] [Indexed: 04/15/2024]
Abstract
Over 50% of children with a parent with severe mental illness will develop mental illness by early adulthood. However, the intergenerational transmission of risk for mental illness is insufficiently considered in clinical practice, nor is it sufficiently integrated into diagnostics and care for children of ill parents. This leads to delays in diagnosing young offspring and missed opportunities for protective action and resilience strengthening. Prior twin, family, and adoption studies suggest that the aetiology of mental illness is governed by a complex interplay of genetic and environmental factors, potentially mediated by changes in epigenetic programming and brain development. However, how these factors ultimately materialise into mental disorders remains unclear. Here, we present the FAMILY consortium, an interdisciplinary, multimodal (e.g., (epi)genetics, neuroimaging, environment, behaviour), multilevel (e.g., individual-level, family-level), and multisite study funded by a European Union Horizon-Staying-Healthy-2021 grant. FAMILY focuses on understanding and predicting the intergenerational transmission of mental illness, using genetically informed causal inference, multimodal normative prediction, and animal modelling. Moreover, FAMILY applies methods from the social sciences to map the social and ethical consequences of risk prediction, preparing clinical practice for future implementation. FAMILY aims to deliver: (i) new discoveries clarifying the aetiology of mental illness and the process of resilience, thereby providing new targets for prevention and intervention studies; (ii) a risk prediction model within a normative modelling framework to predict who is at risk of developing mental illness; and (iii) insight into the social and ethical issues related to risk prediction, to inform clinical guidelines.
Collapse
Affiliation(s)
- Lisanne A E M van Houtum
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - William F C Baaré
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital-Amager and Hvidovre, Copenhagen, Denmark
| | - Christian F Beckmann
- Centre for Functional MRI of the Brain, Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Josefina Castro-Fornieles
- Department of Child and Adolescent Psychiatry and Psychology, 2021SGR01319, Institut Clinic de Neurociències, Hospital Clínic de Barcelona, FCRB-IDIBAPS, Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Department of Medicine, Institute of Neuroscience, University of Barcelona, Barcelona, Spain
| | - Charlotte A M Cecil
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Epidemiology, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, the Netherlands
| | | | - Bjørn H Ebdrup
- Center for Neuropsychiatric Schizophrenia Research and Centre for Clinical Intervention and Neuropsychiatric Schizophrenia Research, Mental Health Centre Glostrup, University of Copenhagen, Glostrup, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Jörg M Fegert
- President European Society for Child and Adolescent Psychiatry (ESCAP), Brussels, Belgium
- Department of Child and Adolescent Psychiatry/Psychotherapy, University Hospital Ulm, Ulm, Germany
| | - Alexandra Havdahl
- PsychGen Centre for Genetic Epidemiology and Mental Health, Norwegian Institute of Public Health, Oslo, Norway
- PROMENTA Research Centre, Department of Psychology, University of Oslo, Oslo, Norway
- Nic Waals Institute, Lovisenberg Diaconal Hospital, Oslo, Norway
| | - Manon H J Hillegers
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - Raffael Kalisch
- Leibniz Institute for Resilience Research, Mainz, Germany
- Neuroimaging Center (NIC), Focus Program Translational Neuroscience (FTN), Johannes Gutenberg University Medical Center, Mainz, Germany
| | - Steven A Kushner
- Department of Psychiatry, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, The Netherlands
| | - Isabelle M Mansuy
- Laboratory of Neuroepigenetics, Medical Faculty, Brain Research Institute, Department of Health Science and Technology of ETH, University of Zurich and Institute for Neuroscience, Zurich, Switzerland
- Zurich Neuroscience Centre, ETH and University of Zurich, Zurich, Switzerland
| | - Signe Mežinska
- Institute of Clinical and Preventive Medicine, University of Latvia, Riga, Latvia
| | - Carmen Moreno
- Department of Child and Adolescent Psychiatry, Institute of Psychiatry and Mental Health, Hospital General Universitario Gregorio Marañón, IiSGM, CIBERSAM, ISCIII, School of Medicine, Universidad Complutense, Madrid, Spain
| | - Ryan L Muetzel
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
| | - Alexander Neumann
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
| | - Merete Nordentoft
- The Lundbeck Foundation Initiative for Integrative Psychiatric Research, Aarhus, Denmark
- Copenhagen Research Centre for Mental Health, Mental Health Centre Copenhagen, Copenhagen University Hospital, Copenhagen, Denmark
| | - Jean-Baptiste Pingault
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Department of Clinical, Educational and Health Psychology, University College London, London, UK
| | - Martin Preisig
- Psychiatric Epidemiology and Psychopathology Research Centre, Department of Psychiatry, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Andrea Raballo
- Public Health Division, Department of Health and Social Care, Cantonal Socio-Psychiatric Organization, Repubblica e Cantone Ticino, Mendrisio, Switzerland
- Chair of Psychiatry, Faculty of Biomedical Sciences, Università Della Svizzera Italiana, Lugano, Switzerland
| | - John Saunders
- Executive Director European Federation of Associations of Families of People with Mental Illness (EUFAMI), Louvain, Belgium
| | - Emma Sprooten
- Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department of Human Genetics, Radboud University Medical Centre, Nijmegen, The Netherlands
| | - Gisela Sugranyes
- Department of Child and Adolescent Psychiatry and Psychology, 2021SGR01319, Institut Clinic de Neurociències, Hospital Clínic de Barcelona, FCRB-IDIBAPS, Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Department of Medicine, Institute of Neuroscience, University of Barcelona, Barcelona, Spain
| | - Henning Tiemeier
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands
- Department of Social and Behavioural Sciences, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| | - Geeske M van Woerden
- Department of Neuroscience, Erasmus University Medical Centre, Rotterdam, The Netherlands
- ENCORE Expertise Center for Neurodevelopmental Disorders, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Department of Clinical Genetics, Erasmus University Medical Centre, Rotterdam, The Netherlands
| | - Caroline L Vandeleur
- Psychiatric Epidemiology and Psychopathology Research Centre, Department of Psychiatry, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Neeltje E M van Haren
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Centre-Sophia, Rotterdam, The Netherlands.
| |
Collapse
|
23
|
Zou X, Na Y, Lai K, Liu G. Unpacking public resistance to health Chatbots: a parallel mediation analysis. Front Psychol 2024; 15:1276968. [PMID: 38659671 PMCID: PMC11041026 DOI: 10.3389/fpsyg.2024.1276968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2023] [Accepted: 04/01/2024] [Indexed: 04/26/2024] Open
Abstract
Introduction Despite the numerous potential benefits of health chatbots for personal health management, a substantial proportion of people oppose the use of such software applications. Building on innovation resistance theory (IRT) and the prototype willingness model (PWM), this study investigated the functional barriers, psychological barriers, and negative prototype perception antecedents of individuals' resistance to health chatbots, as well as the rational and irrational psychological mechanisms underlying their linkages. Methods Data from 398 participants were used to construct a partial least squares structural equation model (PLS-SEM). Results Resistance intention mediated the relationships of functional barriers and psychological barriers, respectively, with resistance behavioral tendency. The relationship between negative prototype perceptions and resistance behavioral tendency was mediated by both resistance intention and resistance willingness. Moreover, negative prototype perceptions were a more effective predictor of resistance behavioral tendency through resistance willingness than functional and psychological barriers. Discussion By investigating the role of irrational factors in health chatbot resistance, this study expands the scope of IRT to explain the psychological mechanisms underlying individuals' resistance to health chatbots. Interventions to address people's resistance to health chatbots are discussed.
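The mediation results above rest on estimating indirect effects. As an illustrative sketch only — the data below are synthetic, the variable names are hypothetical, and the authors used PLS-SEM rather than the simple OLS regressions shown here — a percentile-bootstrap test of a single indirect (a × b) effect can be set up as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the survey measures (hypothetical names):
# barrier -> resistance intention -> resistance behavioural tendency.
n = 398
barrier = rng.normal(size=n)
intention = 0.5 * barrier + rng.normal(size=n)                    # a-path
tendency = 0.4 * intention + 0.1 * barrier + rng.normal(size=n)   # b-path + direct effect

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits (simple mediation)."""
    a = np.polyfit(x, m, 1)[0]  # slope of m ~ x
    # coefficient of m in y ~ m + x + intercept
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
    return a * b

# Percentile-bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(barrier[idx], intention[idx], tendency[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual bootstrap criterion for mediation; here the true indirect effect is 0.5 × 0.4 = 0.2 by construction.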
Collapse
Affiliation(s)
- Xiqian Zou
- School of Journalism and Communication, Tsinghua University, Beijing, China
| | - Yuxiang Na
- School of Journalism and Communication, Jinan University, Guangzhou, Guangdong, China
| | - Kaisheng Lai
- School of Journalism and Communication, Jinan University, Guangzhou, Guangdong, China
| | - Guan Liu
- Center for Computational Communication Studies, Jinan University, Guangzhou, Guangdong, China
| |
Collapse
|
24
|
Naqvi WM, Naqvi IW, Mishra GV, Vardhan VD. The future of telerehabilitation: embracing virtual reality and augmented reality innovations. Pan Afr Med J 2024; 47:157. [PMID: 38974699 PMCID: PMC11226757 DOI: 10.11604/pamj.2024.47.157.42956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2024] [Accepted: 03/22/2024] [Indexed: 07/09/2024] Open
Abstract
The integration of virtual reality (VR) and augmented reality (AR) into telerehabilitation marks a major change in healthcare practice, particularly in neurological and orthopedic rehabilitation. This essay examines the potential of VR and AR to create immersive, interactive environments that facilitate recovery. Recent developments have illustrated their ability to enhance patient engagement and outcomes, especially in addressing complex motor and cognitive rehabilitation needs. Combining artificial intelligence (AI) with VR and AR will advance rehabilitation further by enabling adaptive, responsive treatment programs driven by real-time feedback and predictive analytics. Nevertheless, issues such as availability, cost, and the digital divide, among others, present major obstacles to widespread adoption. This essay provides a thorough review of the current state of VR and AR in rehabilitation and examines their potential gains, drawbacks, and future directions.
Collapse
Affiliation(s)
- Waqar Mohsin Naqvi
- Department of Physiotherapy, College of Health Sciences, Gulf Medical University, Ajman, United Arab Emirates
- Faculty of Health Professions Education, Datta Meghe Institute of Higher Education and Research, Wardha, India
| | - Ifat Waqar Naqvi
- Ravi Nair Physiotherapy College, Datta Meghe Institute of Higher Education and Research, Wardha, India
| | - Gaurav Vedprakash Mishra
- Faculty of Health Professions Education, Datta Meghe Institute of Higher Education and Research, Wardha, India
| | - Vishnu Diwakar Vardhan
- Ravi Nair Physiotherapy College, Datta Meghe Institute of Higher Education and Research, Wardha, India
| |
Collapse
|
25
|
Hunt A, Merola GP, Carpenter T, Jaeggi AV. Evolutionary perspectives on substance and behavioural addictions: Distinct and shared pathways to understanding, prediction and prevention. Neurosci Biobehav Rev 2024; 159:105603. [PMID: 38402919 DOI: 10.1016/j.neubiorev.2024.105603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Revised: 01/31/2024] [Accepted: 02/21/2024] [Indexed: 02/27/2024]
Abstract
Addiction poses significant social, health, and criminal issues. Its moderate heritability and early-life impact on reproductive success pose an evolutionary paradox: why are humans predisposed to addictive behaviours? This paper reviews the biological and psychological mechanisms of substance and behavioural addictions, exploring evolutionary explanations for the origin and function of the relevant systems. Ancestrally, addiction-related systems promoted fitness through reward-seeking and possibly self-medication. Today, psychoactive substances disrupt these systems, leading individuals to neglect essential life goals for immediate satisfaction. Behavioural addictions (e.g. video games, social media) often emulate ancestrally beneficial behaviours, making them appealing yet often irrelevant to contemporary success. Evolutionary insights have implications for how addiction is criminalised and stigmatised, propose novel avenues for intervention, and anticipate new sources of addiction from emerging technologies such as AI. The emerging potential of glucagon-like peptide 1 (GLP-1) agonists for obesity suggests that the satiation system may be a natural counter to overactivation of the reward system.
Collapse
Affiliation(s)
- Adam Hunt
- Institute of Evolutionary Medicine, University of Zürich, Zürich, Switzerland.
| | | | - Tom Carpenter
- College of Medical, Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
| | - Adrian V Jaeggi
- Institute of Evolutionary Medicine, University of Zürich, Zürich, Switzerland
| |
Collapse
|
26
|
Lin GSS, Tan WW, Hashim H. Students' perceptions towards the ethical considerations of using artificial intelligence algorithms in clinical decision-making. Br Dent J 2024:10.1038/s41415-024-7184-3. [PMID: 38491204 DOI: 10.1038/s41415-024-7184-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2023] [Accepted: 11/01/2023] [Indexed: 03/18/2024]
Abstract
Aim The present study aimed to explore dental students' perceptions of the ethical considerations associated with the use of artificial intelligence (AI) algorithms in clinical decision-making. Methods All undergraduate clinical-year dental students were invited to take part in the study. A validated online questionnaire consisting of 21 closed-ended questions (five-point Likert scales) was distributed to the students to evaluate their perceptions of the topic. Mean perception scores of students from different years were analysed using a one-way ANOVA test, while independent t-tests were used to compare scores between sexes. Results In total, 165 students participated in the study. The mean age of respondents was 23.3 (± 1.38) years, and the majority were female Chinese students. Respondents showed positive perceptions across all three domains. Perceptions were uniform and comparable across academic years and sexes, with female respondents expressing stronger agreement regarding patient consent and the prioritisation of privacy. Conclusion Undergraduate clinical dental students generally showed positive perceptions of the ethical considerations associated with the integration of AI algorithms in clinical decision-making. It is essential to address these considerations to ensure that AI benefits patient outcomes while upholding fundamental ethical principles and patient-centred care.
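The group comparisons described in this abstract (one-way ANOVA across academic years, independent t-tests between sexes) can be sketched with SciPy. The scores, group sizes, and means below are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(1)

# Hypothetical five-point Likert perception scores for three clinical years
year3 = rng.normal(4.0, 0.5, 55)
year4 = rng.normal(4.1, 0.5, 55)
year5 = rng.normal(4.0, 0.5, 55)

# One-way ANOVA: do mean scores differ across academic years?
f_stat, p_year = f_oneway(year3, year4, year5)
print(f"ANOVA across years: F = {f_stat:.2f}, p = {p_year:.3f}")

# Independent t-test: do mean scores differ between sexes? (hypothetical groups)
female = rng.normal(4.2, 0.5, 100)
male = rng.normal(4.0, 0.5, 65)
t_stat, p_sex = ttest_ind(female, male)
print(f"t-test between sexes: t = {t_stat:.2f}, p = {p_sex:.3f}")
```

With Likert-scale data the ANOVA/t-test approach assumes approximately normal group means; nonparametric alternatives (Kruskal-Wallis, Mann-Whitney) are a common fallback when that assumption is doubtful.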
Collapse
Affiliation(s)
- Galvin Sim Siang Lin
- Department of Restorative Dentistry, Kulliyyah of Dentistry, International Islamic University Malaysia, 25200, Pahang, Malaysia.
| | - Wen Wu Tan
- Department of Dental Public Health, Faculty of Dentistry, AIMST University, 08100, Kedah, Malaysia
| | - Hasnah Hashim
- Department of Dental Public Health, Faculty of Dentistry, AIMST University, 08100, Kedah, Malaysia
| |
Collapse
|
27
|
Nedbal C, Naik N, Castellani D, Gauhar V, Geraghty R, Somani BK. ChatGPT in urology practice: revolutionizing efficiency and patient care with generative artificial intelligence. Curr Opin Urol 2024; 34:98-104. [PMID: 37962176 DOI: 10.1097/mou.0000000000001151] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
PURPOSE OF REVIEW ChatGPT has emerged as a potentially useful tool for healthcare. Its role in urology is in its infancy and holds much potential for research, clinical practice and patient assistance. With this narrative review, we aim to draw a picture of what is known about ChatGPT's integration into urology, alongside its future promise and challenges. RECENT FINDINGS The use of ChatGPT can ease administrative work, helping urologists with note-taking and clinical documentation such as discharge summaries and clinical notes. It can improve patient engagement by increasing awareness and facilitating communication, as has especially been investigated for uro-oncological diseases. Its ability to understand human emotions makes ChatGPT an empathic and thoughtful interactive tool and information source for urological patients and their relatives. Currently, its role in clinical diagnosis and treatment decisions is uncertain, as concerns have been raised about misinterpretation, hallucination and out-of-date information. Moreover, a mandatory regulatory process for ChatGPT in urology is yet to be established. SUMMARY ChatGPT has the potential to contribute to precision medicine and tailored practice through its quick, structured responses. However, this will depend on how well information can be obtained by seeking appropriate responses and asking the pertinent questions. The key lies in being able to validate the responses, regulate the information shared and avoid misuse, so as to protect the data and patient privacy. Its successful integration into mainstream urology requires educational bodies to provide guidelines or best-practice recommendations.
Collapse
Affiliation(s)
- Carlotta Nedbal
- Department of Urology, University Hospitals Southampton, NHS Trust, Southampton, UK
- Urology Unit, Azienda Ospedaliero-Universitaria delle Marche, Polytechnic University of Marche, Ancona, Italy
| | - Nitesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
| | - Daniele Castellani
- Urology Unit, Azienda Ospedaliero-Universitaria delle Marche, Polytechnic University of Marche, Ancona, Italy
| | - Vineet Gauhar
- Department of Urology, Ng Teng Fong General Hospital, NUHS, Singapore
| | - Robert Geraghty
- Department of Urology, Freeman Hospital, Newcastle-upon-Tyne, UK
| | - Bhaskar Kumar Somani
- Department of Urology, University Hospitals Southampton, NHS Trust, Southampton, UK
| |
Collapse
|
28
|
Zafar F, Fakhare Alam L, Vivas RR, Wang J, Whei SJ, Mehmood S, Sadeghzadegan A, Lakkimsetti M, Nazir Z. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus 2024; 16:e56472. [PMID: 38638735 PMCID: PMC11025697 DOI: 10.7759/cureus.56472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/18/2024] [Indexed: 04/20/2024] Open
Abstract
This narrative literature review undertakes a comprehensive examination of the burgeoning field, tracing the development of artificial intelligence (AI)-powered tools for depression and anxiety detection from intricate algorithms to practical applications. Delivering essential mental health care services is now a significant public health priority. In recent years, AI has become a game-changer in the early identification of, and intervention in, these pervasive mental health disorders. AI tools can potentially empower behavioral healthcare services by helping psychiatrists collect objective data on patients' progress and tasks. This study covers the current understanding of AI, the different types of AI, its current use in multiple mental health disorders, its advantages and disadvantages, and its future potential. As technology develops and digitalization increases, the application of artificial intelligence in psychiatry will grow; a comprehensive understanding will therefore be needed. We searched PubMed, Google Scholar, and Science Direct using relevant keywords. In a recent review of studies applying AI and machine learning techniques to electronic health records (EHR) for diagnosing all clinical conditions, roughly 99 publications were found. Of these, 35 studies addressed mental health disorders across all age groups, and six of them utilized EHR data sources. By critically analyzing prominent scholarly works, we aim to illuminate the current state of this technology, exploring its successes, limitations, and future directions. In doing so, we hope to contribute to a nuanced understanding of AI's potential to revolutionize mental health diagnostics and pave the way for further research and development in this critically important domain.
Collapse
Affiliation(s)
- Fabeha Zafar
- Internal Medicine, Dow University of Health Sciences (DUHS), Karachi, PAK
| | | | - Rafael R Vivas
- Nutrition, Food and Exercise Sciences, Florida State University College of Human Sciences, Tallahassee, USA
| | - Jada Wang
- Medicine, St. George's University, Brooklyn, USA
| | - See Jia Whei
- Internal Medicine, Sriwijaya University, Palembang, IDN
| | | | | | | | - Zahra Nazir
- Internal Medicine, Combined Military Hospital, Quetta, Quetta, PAK
| |
Collapse
|
29
|
Haley LC, Boyd AK, Hebballi NB, Reynolds EW, Smith KG, Scully PT, Nguyen TL, Bernstam EV, Li LT. Attitudes on Artificial Intelligence use in Pediatric Care From Parents of Hospitalized Children. J Surg Res 2024; 295:158-167. [PMID: 38016269 DOI: 10.1016/j.jss.2023.10.027] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 09/27/2023] [Accepted: 10/27/2023] [Indexed: 11/30/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) may benefit pediatric healthcare, but it also raises ethical and pragmatic questions. Parental support is important for the advancement of AI in pediatric medicine. However, there is little literature describing parental attitudes toward AI in pediatric healthcare, and existing studies do not represent parents of hospitalized children well. METHODS We administered the Attitudes toward Artificial Intelligence in Pediatric Healthcare, a validated survey, to parents of hospitalized children in a single tertiary children's hospital. Surveys were administered by trained study personnel (11/2/2021-5/1/2022). Demographic data were collected. An Attitudes toward Artificial Intelligence in Pediatric Healthcare score, assessing openness toward AI-assisted medicine, was calculated for seven areas of concern. Subgroup analyses were conducted using Mann-Whitney U tests to assess the effects of race, gender, education, insurance, length of stay, and intensive care unit (ICU) admission on attitudes toward AI use. RESULTS We approached 90 parents and conducted 76 surveys, for a response rate of 84%. Overall, parents were open to the use of AI in pediatric medicine. Social justice, convenience, privacy, and shared decision-making were important concerns. Parents of children admitted to an ICU expressed the most significantly different attitudes compared to parents of children not admitted to an ICU. CONCLUSIONS Parents were overall supportive of AI-assisted healthcare decision-making. In particular, parents of children admitted to an ICU have significantly different attitudes, and further study is needed to characterize these differences. Parents value transparency, and disclosure pathways should be developed to support this expectation.
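The subgroup comparisons in this abstract use the Mann-Whitney U test, a rank-based test that is robust for ordinal survey scores. A minimal SciPy sketch follows; the scores and group sizes are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Hypothetical openness-to-AI scores for ICU vs. non-ICU parent subgroups
icu = rng.normal(3.2, 0.8, 30)
non_icu = rng.normal(3.8, 0.8, 46)

# Two-sided Mann-Whitney U test: do the two subgroups' score
# distributions differ in location?
u_stat, p_value = mannwhitneyu(icu, non_icu, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```

Because it compares ranks rather than means, the test makes no normality assumption, which suits Likert-style attitude scores.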
Collapse
Affiliation(s)
- Lauren C Haley
- Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Alexandra K Boyd
- Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Nutan B Hebballi
- Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Eric W Reynolds
- Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Keely G Smith
- Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Peter T Scully
- Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Thao L Nguyen
- Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
| | - Elmer V Bernstam
- Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas; School of Biomedical Informatics, University of Texas at Houston, Houston, Texas
| | - Linda T Li
- Division of Pediatric Surgery, Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, New York.
| |
Collapse
|
30
|
Elyoseph Z, Refoua E, Asraf K, Lvovsky M, Shimoni Y, Hadar-Shoval D. Capacity of Generative AI to Interpret Human Emotions From Visual and Textual Data: Pilot Evaluation Study. JMIR Ment Health 2024; 11:e54369. [PMID: 38319707 PMCID: PMC10879976 DOI: 10.2196/54369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Revised: 12/09/2023] [Accepted: 12/25/2023] [Indexed: 02/07/2024] Open
Abstract
BACKGROUND Mentalization, which is integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models in mental health applications, questions persist about their aptitude in emotional comprehension. The prior iteration of the large language model from OpenAI, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions from textual data, surpassing human benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Google Bard's existing visual functionalities, a rigorous assessment of their proficiency in visual mentalizing is warranted. OBJECTIVE The aim of the research was to critically evaluate the capabilities of ChatGPT-4 and Google Bard with regard to their competence in discerning visual mentalizing indicators as contrasted with their textual-based mentalizing abilities. METHODS The Reading the Mind in the Eyes Test developed by Baron-Cohen and colleagues was used to assess the models' proficiency in interpreting visual emotional indicators. Simultaneously, the Levels of Emotional Awareness Scale was used to evaluate the large language models' aptitude in textual mentalizing. Collating data from both tests provided a holistic view of the mentalizing capabilities of ChatGPT-4 and Bard. RESULTS ChatGPT-4, displaying a pronounced ability in emotion recognition, secured scores of 26 and 27 in 2 distinct evaluations, significantly deviating from a random response paradigm (P<.001). These scores align with established benchmarks from the broader human demographic. Notably, ChatGPT-4 exhibited consistent responses, with no discernible biases pertaining to the sex of the model or the nature of the emotion. 
In contrast, Google Bard's performance aligned with random response patterns, securing scores of 10 and 12 and rendering further detailed analysis redundant. In the domain of textual analysis, both ChatGPT and Bard surpassed established benchmarks from the general population, with their performances being remarkably congruent. CONCLUSIONS ChatGPT-4 proved its efficacy in the domain of visual mentalizing, aligning closely with human performance standards. Although both models displayed commendable acumen in textual emotion interpretation, Bard's capabilities in visual emotion interpretation necessitate further scrutiny and potential refinement. This study stresses the criticality of ethical AI development for emotional recognition, highlighting the need for inclusive data, collaboration with patients and mental health experts, and stringent governmental oversight to ensure transparency and protect patient privacy.
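The chance-level comparison reported above can be sanity-checked with a short calculation: on the Reading the Mind in the Eyes Test (36 items with 4 response options, so chance accuracy is 0.25), a score of 26 lies far in the upper tail of random responding. A minimal stdlib sketch (the item and option counts come from the standard RMET, not from this abstract):

```python
from math import comb

def binom_sf(k, n, p):
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# RMET: 36 items, 4 options -> chance accuracy p = 0.25 (expected score: 9)
p_value = binom_sf(26, 36, 0.25)
print(p_value)  # far below .001, consistent with the reported P<.001
```

The same tail probability for Bard's scores of 10 and 12 is large, which is why those results are described as indistinguishable from random responding.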
Affiliation(s)
- Zohar Elyoseph: Department of Educational Psychology, The Center for Psychobiological Research, The Max Stern Yezreel Valley College, Emek Yezreel, Israel; Imperial College London, London, United Kingdom
- Elad Refoua: Department of Psychology, Bar-Ilan University, Ramat Gan, Israel
- Kfir Asraf: Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky: Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Yoav Shimoni: Boston Children's Hospital, Boston, MA, United States
- Dorit Hadar-Shoval: Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
|
31
|
Rogan J, Bucci S, Firth J. Health Care Professionals' Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health 2024; 11:e49577. [PMID: 38261403 PMCID: PMC10848143 DOI: 10.2196/49577]
Abstract
BACKGROUND Mental health difficulties are highly prevalent worldwide. Passive sensing technologies and applied artificial intelligence (AI) methods can provide an innovative means of supporting the management of mental health problems and enhancing the quality of care. However, the views of stakeholders are important in understanding the potential barriers to and facilitators of their implementation. OBJECTIVE This study aims to review, critically appraise, and synthesize qualitative findings relating to the views of mental health care professionals on the use of passive sensing and AI in mental health care. METHODS A systematic search of qualitative studies was performed using 4 databases. A meta-synthesis approach was used, whereby studies were analyzed using an inductive thematic analysis approach within a critical realist epistemological framework. RESULTS Overall, 10 studies met the eligibility criteria. The 3 main themes were uses of passive sensing and AI in clinical practice, barriers to and facilitators of use in practice, and consequences for service users. A total of 5 subthemes were identified: barriers, facilitators, empowerment, risk to well-being, and data privacy and protection issues. CONCLUSIONS Although clinicians are open-minded about the use of passive sensing and AI in mental health care, important factors to consider are service user well-being, clinician workloads, and therapeutic relationships. Service users and clinicians must be involved in the development of digital technologies and systems to ensure ease of use. The development of, and training in, clear policies and guidelines on the use of passive sensing and AI in mental health care, including risk management and data security procedures, will also be key to facilitating clinician engagement. The means for clinicians and service users to provide feedback on how the use of passive sensing and AI in practice is being received should also be considered. 
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42022331698; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=331698.
Affiliation(s)
- Jessica Rogan: Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom; Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Sandra Bucci: Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom; Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Joseph Firth: Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
|
32
|
Siddiqui F, Aslam D, Tanveer K, Soudy M. The Role of Artificial Intelligence and Machine Learning in Autoimmune Disorders. STUDIES IN COMPUTATIONAL INTELLIGENCE 2024:61-75. [DOI: 10.1007/978-981-99-9029-0_3]
|
33
|
Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical Practice Guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry 2024; 66:S414-S419. [PMID: 38445270 PMCID: PMC10911327 DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_926_23]
Affiliation(s)
- Vipul Singh: Department of Psychiatry, Government Medical College, Kannauj, Uttar Pradesh, India
- Sharmila Sarkar: Department of Psychiatry, Calcutta National Medical College, Kolkata, West Bengal, India
- Vikas Gaur: Department of Psychiatry, Jaipur National University Institute for Medical Sciences and Research Centre, Jaipur, Rajasthan, India
- Sandeep Grover: Department of Psychiatry, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Om Prakash Singh: Department of Psychiatry, Midnapore Medical College, Midnapore, West Bengal, India
|
34
|
Irmak-Yazicioglu MB, Arslan A. Navigating the Intersection of Technology and Depression Precision Medicine. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1456:401-426. [PMID: 39261440 DOI: 10.1007/978-981-97-4402-2_20]
Abstract
This chapter primarily focuses on the progress in depression precision medicine with specific emphasis on the integrative approaches that include artificial intelligence and other data, tools, and technologies. After the description of the concept of precision medicine and a comparative introduction to depression precision medicine with cancer and epilepsy, new avenues of depression precision medicine derived from integrated artificial intelligence and other sources will be presented. Additionally, less advanced areas, such as comorbidity between depression and cancer, will be examined.
Affiliation(s)
- Ayla Arslan: Department of Molecular Biology and Genetics, Üsküdar University, İstanbul, Türkiye
|
35
|
Siwek J, Żywica P, Siwek P, Wójcik A, Woch W, Pierzyński K, Dyczkowski K. Implementation of an Artificially Empathetic Robot Swarm. SENSORS (BASEL, SWITZERLAND) 2023; 24:242. [PMID: 38203107 PMCID: PMC10781239 DOI: 10.3390/s24010242]
Abstract
This paper presents a novel framework for integrating artificial empathy into robot swarms to improve communication and cooperation. The proposed model uses fuzzy state vectors to represent the knowledge and environment of individual agents, accommodating uncertainties in the real world. By utilizing similarity measures, the model compares states, enabling empathetic reasoning for synchronized swarm behavior. The paper presents a practical application example that demonstrates the efficacy of the model in a robot swarm working toward a common goal. The evaluation methodology involves the open-source physical-based experimentation platform (OPEP), which emphasizes empirical validation in real-world scenarios. The paper proposes a transitional environment that enables automated and repeatable execution of experiments on a swarm of robots using physical devices.
Affiliation(s)
- Joanna Siwek: Department of Artificial Intelligence, Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
- Patryk Żywica: Department of Artificial Intelligence, Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
- Przemysław Siwek: Institute of Robotics and Machine Intelligence, Faculty of Automatic Control, Robotics and Electrical Engineering, Poznan University of Technology, Piotrowo 3A, 60-965 Poznań, Poland
- Adrian Wójcik: Institute of Robotics and Machine Intelligence, Faculty of Automatic Control, Robotics and Electrical Engineering, Poznan University of Technology, Piotrowo 3A, 60-965 Poznań, Poland
- Witold Woch: Department of Artificial Intelligence, Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
- Konrad Pierzyński: Department of Artificial Intelligence, Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
- Krzysztof Dyczkowski: Department of Artificial Intelligence, Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
|
36
|
Zhang M, Scandiffio J, Younus S, Jeyakumar T, Karsan I, Charow R, Salhia M, Wiljer D. The Adoption of AI in Mental Health Care-Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form Res 2023; 7:e47847. [PMID: 38060307 PMCID: PMC10739240 DOI: 10.2196/47847]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming the mental health care environment. AI tools are increasingly accessed by clients and service users. Mental health professionals must be prepared not only to use AI but also to have conversations about it when delivering care. Despite the potential for AI to enable more efficient and reliable and higher-quality care delivery, there is a persistent gap among mental health professionals in the adoption of AI. OBJECTIVE A needs assessment was conducted among mental health professionals to (1) understand the learning needs of the workforce and their attitudes toward AI and (2) inform the development of AI education curricula and knowledge translation products. METHODS A qualitative descriptive approach was taken to explore the needs of mental health professionals regarding their adoption of AI through semistructured interviews. To reach maximum variation sampling, mental health professionals (eg, psychiatrists, mental health nurses, educators, scientists, and social workers) in various settings across Ontario (eg, urban and rural, public and private sector, and clinical and research) were recruited. RESULTS A total of 20 individuals were recruited. Participants included practitioners (9/20, 45% social workers and 1/20, 5% mental health nurses), educator scientists (5/20, 25% with dual roles as professors/lecturers and researchers), and practitioner scientists (3/20, 15% with dual roles as researchers and psychiatrists and 2/20, 10% with dual roles as researchers and mental health nurses). Four major themes emerged: (1) fostering practice change and building self-efficacy to integrate AI into patient care; (2) promoting system-level change to accelerate the adoption of AI in mental health; (3) addressing the importance of organizational readiness as a catalyst for AI adoption; and (4) ensuring that mental health professionals have the education, knowledge, and skills to harness AI in optimizing patient care. 
CONCLUSIONS AI technologies are starting to emerge in mental health care. Although many digital tools, web-based services, and mobile apps are designed using AI algorithms, mental health professionals have generally been slower in the adoption of AI. As indicated by this study's findings, the implications are 3-fold. At the individual level, digital professionals must see the value in digitally compassionate tools that retain a humanistic approach to care. For mental health professionals, resistance toward AI adoption must be acknowledged through educational initiatives to raise awareness about the relevance, practicality, and benefits of AI. At the organizational level, digital professionals and leaders must collaborate on governance and funding structures to promote employee buy-in. At the societal level, digital and mental health professionals should collaborate in the creation of formal AI training programs specific to mental health to address knowledge gaps. This study promotes the design of relevant and sustainable education programs to support the adoption of AI within the mental health care sphere.
Affiliation(s)
- Tharshini Jeyakumar: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Rebecca Charow: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Mohammad Salhia: Rotman School of Management, University of Toronto, Toronto, ON, Canada
- David Wiljer: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada; Department of Medicine, University of Toronto, Toronto, ON, Canada
|
37
|
Chopra H, Annu, Shin DK, Munjal K, Priyanka, Dhama K, Emran TB. Revolutionizing clinical trials: the role of AI in accelerating medical breakthroughs. Int J Surg 2023; 109:4211-4220. [PMID: 38259001 PMCID: PMC10720846 DOI: 10.1097/js9.0000000000000705]
Abstract
Clinical trials are the essential assessment for safe, reliable, and effective drug development. Data-related limitations, extensive manual efforts, remote patient monitoring, and the complexity of traditional clinical trials on patients drive the application of Artificial Intelligence (AI) in medical and healthcare organisations. For expeditious and streamlined clinical trials, a personalised AI solution is the best utilisation. AI provides broad utility options through structured, standardised, and digitally driven elements in medical research. The clinical trials are a time-consuming process with patient recruitment, enrolment, frequent monitoring, and medical adherence and retention. With an AI-powered tool, the automated data can be generated and managed for the trial lifecycle with all the records of the medical history of the patient as patient-centric AI. AI can intelligently interpret the data, feed downstream systems, and automatically fill out the required analysis report. This article explains how AI has revolutionised innovative ways of collecting data, biosimulation, and early disease diagnosis for clinical trials and overcomes the challenges more precisely through cost and time reduction, improved efficiency, and improved drug development research with less need for rework. The future implications of AI to accelerate clinical trials are important in medical research because of its fast output and overall utility.
Affiliation(s)
- Hitesh Chopra: Department of Biosciences, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai - 602105, Tamil Nadu, India
- Annu: Thin Film and Materials Laboratory, School of Mechanical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Dong K. Shin: Thin Film and Materials Laboratory, School of Mechanical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Kavita Munjal: Department of Pharmacy, Amity Institute of Pharmacy, Amity University, Noida, Uttar Pradesh 201303, India
- Priyanka: Department of Veterinary Microbiology, College of Veterinary Science, Guru Angad Dev Veterinary and Animal Sciences University (GADVASU), Rampura Phul, Bathinda, Punjab
- Kuldeep Dhama: Indian Veterinary Research Institute (IVRI), Izatnagar, Bareilly, Uttar Pradesh
- Talha B. Emran: Department of Pharmacy, BGC Trust University Bangladesh, Chittagong; Department of Pharmacy, Faculty of Allied Health Sciences, Daffodil International University, Dhaka, Bangladesh
|
38
|
Twala B, Molloy E. On effectively predicting autism spectrum disorder therapy using an ensemble of classifiers. Sci Rep 2023; 13:19957. [PMID: 37968315 PMCID: PMC10651853 DOI: 10.1038/s41598-023-46379-3]
Abstract
An ensemble of classifiers combines several single classifiers to deliver a final prediction or classification decision. An increasingly provoking question is whether such an ensemble can outperform the single best classifier. If so, what form of ensemble learning system (also known as multiple classifier learning systems) yields the most significant benefits in the size or diversity of the ensemble? In this paper, the ability of ensemble learning to predict and identify factors that influence or contribute to autism spectrum disorder therapy (ASDT) for intervention purposes is investigated. Given that most interventions are typically short-term in nature, henceforth, developing a robotic system that will provide the best outcome and measurement of ASDT therapy has never been so critical. In this paper, the performance of five single classifiers against several multiple classifier learning systems in exploring and predicting ASDT is investigated using a dataset of behavioural data and robot-enhanced therapy against standard human treatment based on 3000 sessions and 300 h, recorded from 61 autistic children. Experimental results show statistically significant differences in performance among the single classifiers for ASDT prediction with decision trees as the more accurate classifier. The results further show multiple classifier learning systems (MCLS) achieving better performance for ASDT prediction (especially those ensembles with three core classifiers). Additionally, the results show bagging and boosting ensemble learning as robust when predicting ASDT with multi-stage design as the most dominant architecture. It also appears that eye contact and social interaction are the most critical contributing factors to the ASDT problem among children.
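The core comparison in this abstract, a single classifier versus a multiple classifier learning system, reduces to combining individual predictions into one decision, most simply by majority vote. A minimal stdlib sketch (the threshold "classifiers" below are invented toy stand-ins, not the paper's trained models):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Ensemble prediction: each classifier votes, the modal label wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy single classifiers: threshold rules on one feature
clf_a = lambda x: int(x > 0.3)
clf_b = lambda x: int(x > 0.5)
clf_c = lambda x: int(x > 0.7)

ensemble = [clf_a, clf_b, clf_c]
print(majority_vote(ensemble, 0.6))  # two of three vote 1 -> prints 1
```

Bagging and boosting, which the paper reports as the most robust ensembles, replace fixed rules with classifiers trained on resampled or reweighted data, but the vote-combination step is the same in spirit.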
Affiliation(s)
- Bhekisipho Twala: Office of the Deputy Vice-Chancellor (Digital Transformation), Tshwane University of Technology, Private Bag x680, Pretoria, 001, South Africa
- Eamon Molloy: Waterford Institute of Technology, School of Science & Computing, Waterford, Ireland
|
39
|
Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health 2023; 5:1278186. [PMID: 38026836 PMCID: PMC10663264 DOI: 10.3389/fdgth.2023.1278186]
Abstract
Artificial intelligence (AI)-powered chatbots have the potential to substantially increase access to affordable and effective mental health services by supplementing the work of clinicians. Their 24/7 availability and accessibility through a mobile phone allow individuals to obtain help whenever and wherever needed, overcoming financial and logistical barriers. Although psychological AI chatbots have the ability to make significant improvements in providing mental health care services, they do not come without ethical and technical challenges. Some major concerns include providing inadequate or harmful support, exploiting vulnerable populations, and potentially producing discriminatory advice due to algorithmic bias. However, it is not always obvious for users to fully understand the nature of the relationship they have with chatbots. There can be significant misunderstandings about the exact purpose of the chatbot, particularly in terms of care expectations, ability to adapt to the particularities of users and responsiveness in terms of the needs and resources/treatments that can be offered. Hence, it is imperative that users are aware of the limited therapeutic relationship they can enjoy when interacting with mental health chatbots. Ignorance or misunderstanding of such limitations or of the role of psychological AI chatbots may lead to a therapeutic misconception (TM) where the user would underestimate the restrictions of such technologies and overestimate their ability to provide actual therapeutic support and guidance. TM raises major ethical concerns that can exacerbate one's mental health contributing to the global mental health crisis. This paper will explore the various ways in which TM can occur particularly through inaccurate marketing of these chatbots, forming a digital therapeutic alliance with them, receiving harmful advice due to bias in the design and algorithm, and the chatbots' inability to foster autonomy in patients.
|
40
|
Wilhelmy S, Giupponi G, Groß D, Eisendle K, Conca A. A shift in psychiatry through AI? Ethical challenges. Ann Gen Psychiatry 2023; 22:43. [PMID: 37919759 PMCID: PMC10623776 DOI: 10.1186/s12991-023-00476-9]
Abstract
The digital transformation has made its way into many areas of society, including medicine. While AI-based systems are widespread in medical disciplines, their use in psychiatry is progressing more slowly. However, they promise to revolutionize psychiatric practice in terms of prevention options, diagnostics, or even therapy. Psychiatry is in the midst of this digital transformation, so the question is no longer "whether" to use technology, but "how" we can use it to achieve goals of progress or improvement. The aim of this article is to argue that this revolution brings not only new opportunities but also new ethical challenges for psychiatry, especially with regard to safety, responsibility, autonomy, or transparency. As an example, the relationship between doctor and patient in psychiatry will be addressed, in which digitization is also leading to ethically relevant changes. Ethical reflection on the use of AI systems offers the opportunity to accompany these changes carefully in order to take advantage of the benefits that this change brings. The focus should therefore always be on balancing what is technically possible with what is ethically necessary.
Affiliation(s)
- Saskia Wilhelmy: Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 5074, Aachen, Germany
- Giancarlo Giupponi: Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
- Dominik Groß: Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 5074, Aachen, Germany
- Klaus Eisendle: Institute of General Practice and Public Health, Provincial College for Health Professions Claudiana, Lorenz-Böhler-Straße 13, 39100, Bolzano, Italy
- Andreas Conca: Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
|
41
|
Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. [PMID: 37884177 DOI: 10.1016/j.jbi.2023.104531]
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language, and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of strategic plan, Lack of Cost-effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li: Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam: McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
|
42
|
Zhang X, Gu Y, Yin J, Zhang Y, Jin C, Wang W, Li AM, Wang Y, Su L, Xu H, Ge X, Ye C, Tang L, Shen B, Fang J, Wang D, Feng R. Development, Reliability, and Structural Validity of the Scale for Knowledge, Attitude, and Practice in Ethics Implementation Among AI Researchers: Cross-Sectional Study. JMIR Form Res 2023; 7:e42202. [PMID: 37883175 PMCID: PMC10636617 DOI: 10.2196/42202]
Abstract
BACKGROUND Medical artificial intelligence (AI) has significantly contributed to decision support for disease screening, diagnosis, and management. With the growing number of medical AI developments and applications, incorporating ethics is considered essential to avoiding harm and ensuring broad benefits in the lifecycle of medical AI. One of the premises for effectively implementing ethics in Medical AI research necessitates researchers' comprehensive knowledge, enthusiastic attitude, and practical experience. However, there is currently a lack of an available instrument to measure these aspects. OBJECTIVE The aim of this study was to develop a comprehensive scale for measuring the knowledge, attitude, and practice of ethics implementation among medical AI researchers, and to evaluate its measurement properties. METHODS The construct of the Knowledge-Attitude-Practice in Ethics Implementation (KAP-EI) scale was based on the Knowledge-Attitude-Practice (KAP) model, and the evaluation of its measurement properties was in compliance with the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) reporting guidelines for studies on measurement instruments. The study was conducted in 2 phases. The first phase involved scale development through a systematic literature review, qualitative interviews, and item analysis based on a cross-sectional survey. The second phase involved evaluation of structural validity and reliability through another cross-sectional study. RESULTS The KAP-EI scale had 3 dimensions including knowledge (10 items), attitude (6 items), and practice (7 items). The Cronbach α for the whole scale reached .934. Confirmatory factor analysis showed that the goodness-of-fit indices of the scale were satisfactory (χ2/df ratio=2.338, comparative fit index=0.949, Tucker Lewis index=0.941, root-mean-square error of approximation=0.064, and standardized root-mean-square residual=0.052). 
CONCLUSIONS The results show that the scale has good reliability and structural validity; hence, it could be considered an effective instrument. This is the first instrument developed for this purpose.
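The internal-consistency statistic reported above (Cronbach α = .934) can be computed directly from raw item scores. A minimal sketch in Python; the function name and toy data are illustrative, not taken from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly correlated items yield the maximum alpha of 1.0
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(scores), 3))  # 1.0
```

Alpha rises with the number of items and their average inter-item covariance, which is why a 23-item scale like the KAP-EI can reach values above .9.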
Affiliation(s)
- Xiaobo Zhang
- Children's Hospital of Fudan University, Shanghai, China
| | - Ying Gu
- Children's Hospital of Fudan University, Shanghai, China
| | - Jie Yin
- School of Philosophy, Fudan University, Shanghai, China
| | - Yuejie Zhang
- School of Computer Science, Fudan University, Shanghai, China
| | - Cheng Jin
- School of Computer Science, Fudan University, Shanghai, China
| | - Weibing Wang
- School of Public Health, Fudan University, Shanghai, China
| | - Albert Martin Li
- Department of Paediatrics, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Yingwen Wang
- Children's Hospital of Fudan University, Shanghai, China
| | - Ling Su
- Children's Hospital of Fudan University, Shanghai, China
| | - Hong Xu
- Children's Hospital of Fudan University, Shanghai, China
| | - Xiaoling Ge
- Children's Hospital of Fudan University, Shanghai, China
| | - Chengjie Ye
- Children's Hospital of Fudan University, Shanghai, China
| | - Liangfeng Tang
- Children's Hospital of Fudan University, Shanghai, China
| | - Bing Shen
- Shanghai Hospital Development Center, Shanghai, China
| | - Jinwu Fang
- School of Public Health, Fudan University, Shanghai, China
| | - Daoyang Wang
- School of Public Health, Fudan University, Shanghai, China
| | - Rui Feng
- School of Computer Science, Fudan University, Shanghai, China
| |
|
43
|
Altaf Dar M, Maqbool M, Ara I, Zehravi M. The intersection of technology and mental health: enhancing access and care. Int J Adolesc Med Health 2023; 35:423-428. [PMID: 37602724 DOI: 10.1515/ijamh-2023-0113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Accepted: 08/13/2023] [Indexed: 08/22/2023]
Abstract
In recent times, technology has increasingly become a central force in shaping the landscape of mental health care. The integration of various technological advancements, such as teletherapy, virtual care platforms, mental health apps, and wearable devices, holds great promise in improving access to mental health services and enhancing overall care. Technology's impact on mental health care is multi-faceted. Teletherapy and virtual care have brought about a revolution in service delivery, eliminating geographical barriers and offering individuals convenient and flexible access to therapy. Mobile mental health apps empower users to monitor their emotional well-being, practice mindfulness, and access self-help resources on the move. Furthermore, wearable devices equipped with biometric data can provide valuable insights into stress levels and sleep patterns, potentially serving as valuable indicators of mental health status. However, integrating technology into mental health care comes with several challenges and ethical considerations. Bridging the digital divide is a concern, as not everyone has equal access to technology or the necessary digital literacy. Ensuring privacy and data security is crucial to safeguard sensitive client information. The rapid proliferation of mental health apps calls for careful assessment and regulation to promote evidence-based practices and ensure the delivery of quality interventions. Looking ahead, it is vital to consider future implications and adopt relevant recommendations to fully harness technology's potential in mental health care. Continuous research is essential to evaluate the efficacy and safety of digital interventions, fostering collaboration between researchers, mental health professionals, and technology developers. Proper training on ethical technology utilization is necessary for mental health practitioners to maintain therapeutic boundaries while leveraging technological advancements responsibly.
Affiliation(s)
- Mohd Altaf Dar
- Department of Pharmacology, CT Institute of Pharmaceutical Sciences, PTU, Jalandhar Punjab, Baramulla, India
| | - Mudasir Maqbool
- Department of Pharmaceutical Sciences, University of Kashmir, Srinagar, India
| | - Irfat Ara
- Regional Research Institute of Unani Medicine, Srinagar, Jammu and Kashmir, India
| | - Mehrukh Zehravi
- Department of Clinical Pharmacy, Girls Section, Prince Sattam Bin Abdul Aziz University, Alkharj, Saudi Arabia
| |
|
44
|
Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023; 44:1020-1034. [PMID: 37850937 DOI: 10.1080/01612840.2023.2263579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/19/2023]
Abstract
This narrative review explores the transformative impact of Artificial Intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improving treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions is also discussed. These technological advancements reshape nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses the ethical and responsible use of AI in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging the potential of AI in mental health nursing.
Affiliation(s)
- Abdulqadir J Nashwan
- Nursing Department, Hamad Medical Corporation, Doha, Qatar
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
| | - Suzan Gharib
- Nursing Department, Al-Khaldi Hospital, Amman, Jordan
| | - Majdi Alhadidi
- Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
| | | | | | | | | | - Shaimaa Dawood
- Faculty of Nursing, Alexandria University, Alexandria, Egypt
| | | |
|
45
|
Sahoo JP, Narayan BN, Santi NS. The future of psychiatry with artificial intelligence: can the man-machine duo redefine the tenets? CONSORTIUM PSYCHIATRICUM 2023; 4:72-76. [PMID: 38249529 PMCID: PMC10795941 DOI: 10.17816/cp13626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Accepted: 09/15/2023] [Indexed: 01/23/2024] Open
Abstract
As one of the largest contributors to morbidity and mortality, psychiatric disorders are anticipated to triple in prevalence over the coming decade or so. Major obstacles to psychiatric care include stigma, funding constraints, and a dearth of resources and psychiatrists. The main thrust of our discussion is how machine learning and artificial intelligence could influence the way patients experience care. To better grasp issues of trust, privacy, and autonomy, their societal and ethical ramifications need to be probed. There is always the possibility that an artificial mind could malfunction or exhibit behavioral abnormalities. An in-depth philosophical understanding of these possibilities in both human and artificial intelligence could offer correlational insights into the robotic management of mental disorders in the future. This article examines the role of artificial intelligence, the challenges associated with it, and the perspectives in the management of mental illnesses such as depression, anxiety, and schizophrenia.
Affiliation(s)
| | | | - N Simple Santi
- Veer Surendra Sai Institute Of Medical Science And Research
| |
|
46
|
Bonny T, Al Nassan W, Obaideen K, Al Mallahi MN, Mohammad Y, El-damanhoury HM. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023; 12:1179. [PMID: 37942018 PMCID: PMC10630586 DOI: 10.12688/f1000research.140204.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 08/24/2023] [Indexed: 11/10/2023] Open
Abstract
Artificial Intelligence (AI) technologies play a significant role across sectors including healthcare, engineering, the sciences, and smart cities. AI has the potential to improve the quality of patient care and treatment outcomes while minimizing the risk of human error. AI is transforming the dental industry just as it is revolutionizing other sectors: it is used in dentistry to diagnose dental diseases and provide treatment recommendations, and dental professionals increasingly rely on it to assist in diagnosis, clinical decision-making, treatment planning, and prognosis prediction across ten dental specialties. One of the most significant advantages of AI in dentistry is its ability to analyze vast amounts of data quickly and accurately, providing dental professionals with valuable insights to enhance their decision-making processes. The purpose of this paper is to identify the artificial intelligence algorithms frequently used in dentistry and assess how well they perform in terms of diagnosis, clinical decision-making, treatment, and prognosis prediction in ten dental specialties: dental public health, endodontics, oral and maxillofacial surgery, oral medicine and pathology, oral and maxillofacial radiology, orthodontics and dentofacial orthopedics, pediatric dentistry, periodontics, prosthodontics, and digital dentistry in general. We also show the pros and cons of using AI in each dental specialty. Finally, we present the limitations of AI in dentistry, which make it incapable of replacing dental personnel; dentists should consider AI a complementary benefit, not a threat.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Khaled Obaideen
- Sustainable Energy and Power Systems Research Centre, RISE, University of Sharjah, Sharjah, 27272, United Arab Emirates
| | - Maryam Nooman Al Mallahi
- Department of Mechanical and Aerospace Engineering, United Arab Emirates University, Al Ain City, Abu Dhabi, 27272, United Arab Emirates
| | - Yara Mohammad
- College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
| | - Hatem M. El-damanhoury
- Department of Preventive and Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates
| |
|
47
|
Timmons AC, Duong JB, Fiallo NS, Lee T, Vo HPQ, Ahle MW, Comer JS, Brewer LC, Frazier SL, Chaspari T. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2023; 18:1062-1096. [PMID: 36490369 PMCID: PMC10250563 DOI: 10.1177/17456916221134490] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fair-aware AI in psychological science.
Affiliation(s)
- Adela C. Timmons
- University of Texas at Austin Institute for Mental Health Research
- Colliga Apps Corporation
| | | | | | | | | | | | | | - LaPrincess C. Brewer
- Department of Cardiovascular Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota, United States
- Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, Minnesota, United States
| | | | | |
|
48
|
Sun J, Dong QX, Wang SW, Zheng YB, Liu XX, Lu TS, Yuan K, Shi J, Hu B, Lu L, Han Y. Artificial intelligence in psychiatry research, diagnosis, and therapy. Asian J Psychiatr 2023; 87:103705. [PMID: 37506575 DOI: 10.1016/j.ajp.2023.103705] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 07/16/2023] [Accepted: 07/20/2023] [Indexed: 07/30/2023]
Abstract
Psychiatric disorders are now responsible for the largest proportion of the global burden of disease, and even more challenges have been seen during the COVID-19 pandemic. Artificial intelligence (AI) is commonly used to facilitate the early detection of disease, understand disease progression, and discover new treatments in the fields of both physical and mental health. The present review provides a broad overview of AI methodology and its applications in data acquisition and processing, feature extraction and characterization, psychiatric disorder classification, potential biomarker detection, real-time monitoring, and interventions in psychiatric disorders. We also comprehensively summarize AI applications with regard to the early warning, diagnosis, prognosis, and treatment of specific psychiatric disorders, including depression, schizophrenia, autism spectrum disorder, attention-deficit/hyperactivity disorder, addiction, sleep disorders, and Alzheimer's disease. The advantages and disadvantages of AI in psychiatry are clarified. We foresee a new wave of research opportunities to facilitate and improve AI technology and its long-term implications in psychiatry during and after the COVID-19 era.
Affiliation(s)
- Jie Sun
- Pain Medicine Center, Peking University Third Hospital, Beijing 100191, China; Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
| | - Qun-Xi Dong
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
| | - San-Wang Wang
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430060, China
| | - Yong-Bo Zheng
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Peking-Tsinghua Center for Life Sciences and PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
| | - Xiao-Xing Liu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
| | - Tang-Sheng Lu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
| | - Kai Yuan
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
| | - Jie Shi
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
| | - Bin Hu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China.
| | - Lin Lu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; Peking-Tsinghua Center for Life Sciences and PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China.
| | - Ying Han
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China.
| |
|
49
|
Espejo G, Reiner W, Wenzinger M. Exploring the Role of Artificial Intelligence in Mental Healthcare: Progress, Pitfalls, and Promises. Cureus 2023; 15:e44748. [PMID: 37809254 PMCID: PMC10556257 DOI: 10.7759/cureus.44748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/05/2023] [Indexed: 10/10/2023] Open
Abstract
The rise of artificial intelligence (AI) heralds a significant revolution in healthcare, particularly in mental health. AI's potential spans diagnostic algorithms, data analysis from diverse sources, and real-time patient monitoring. It is essential for clinicians to remain informed about AI's progress and limitations. The inherent complexity of mental disorders, limited objective data, and retrospective studies pose challenges to the application of AI. Privacy concerns, bias, and the risk of AI replacing human care also loom. Regulatory oversight and physician involvement are needed for equitable AI implementation. AI integration and use in psychotherapy and other services are on the horizon. Patient trust, feasibility, clinical efficacy, and clinician acceptance are prerequisites. In the future, governing bodies must decide on AI ownership, governance, and integration approaches. While AI can enhance clinical decision-making and efficiency, it might also exacerbate moral dilemmas, autonomy loss, and issues regarding the scope of practice. Striking a balance between AI's strengths and limitations involves utilizing AI as a validated clinical supplement under medical supervision, necessitating active clinician involvement in AI research, ethics, and regulation. AI's trajectory must align with optimizing mental health treatment and upholding compassionate care.
Affiliation(s)
- Gemma Espejo
- Psychiatry and Behavioral Sciences, University of California, Irvine School of Medicine, Irvine, USA
| | - Wade Reiner
- Psychiatry, University of Washington, Seattle, USA
| | | |
|
50
|
Hadar-Shoval D, Elyoseph Z, Lvovsky M. The plasticity of ChatGPT's mentalizing abilities: personalization for personality structures. Front Psychiatry 2023; 14:1234397. [PMID: 37720897 PMCID: PMC10503434 DOI: 10.3389/fpsyt.2023.1234397] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Accepted: 08/22/2023] [Indexed: 09/19/2023] Open
Abstract
This study evaluated the potential of ChatGPT, a large language model, to generate mentalizing-like abilities that are tailored to a specific personality structure and/or psychopathology. Mentalization is the ability to understand and interpret one's own and others' mental states, including thoughts, feelings, and intentions. Borderline Personality Disorder (BPD) and Schizoid Personality Disorder (SPD) are characterized by distinct patterns of emotional regulation: individuals with BPD tend to experience intense and unstable emotions, while individuals with SPD tend to experience flattened or detached emotions. We used ChatGPT's free version 23.3 and assessed the extent to which its responses akin to emotional awareness (EA) were customized to the distinctive personality structures characterized by BPD and SPD, employing the Levels of Emotional Awareness Scale (LEAS). ChatGPT accurately described the emotional reactions of individuals with BPD as more intense, complex, and rich than those of individuals with SPD. This finding suggests that ChatGPT can generate mentalizing-like responses consistent with a range of psychopathologies, in line with clinical and theoretical knowledge. However, the study also raises concerns that stigmas or biases related to mental diagnoses could impact the validity and usefulness of chatbot-based clinical interventions. We emphasize the need for responsible development and deployment of chatbot-based interventions in mental health that consider diverse theoretical frameworks.
Affiliation(s)
- Dorit Hadar-Shoval
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
| | - Zohar Elyoseph
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
| | - Maya Lvovsky
- Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
| |
|