1
Arjanto P, Senduk FFW, Nahdiyah U, Utami MS. AI and ethics in mental health: exploring the controversy over the use of ChatGPT. J Public Health (Oxf) 2024; 46:e340-e341. [PMID: 38031294 DOI: 10.1093/pubmed/fdad254]
Affiliation(s)
- Paul Arjanto: Faculty of Teacher Training and Education, University of Pattimura, Ambon, Indonesia
- Feibry F W Senduk: Faculty of Economic and Business, State University of Manado, Tondano, Indonesia
- Umi Nahdiyah: Faculty of Education, State University of Malang, Malang, Indonesia
- Mukti S Utami: Faculty of Education, State University of Malang, Malang, Indonesia
2
Zafar F, Fakhare Alam L, Vivas RR, Wang J, Whei SJ, Mehmood S, Sadeghzadegan A, Lakkimsetti M, Nazir Z. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus 2024; 16:e56472. [PMID: 38638735 PMCID: PMC11025697 DOI: 10.7759/cureus.56472]
Abstract
This narrative literature review undertakes a comprehensive examination of the burgeoning field, tracing the development of artificial intelligence (AI)-powered tools for depression and anxiety detection from the level of intricate algorithms to practical applications. Delivering essential mental health care services is now a significant public health priority. In recent years, AI has become a game-changer in the early identification and intervention of these pervasive mental health disorders. AI tools can potentially empower behavioral healthcare services by helping psychiatrists collect objective data on patients' progress and tasks. This study emphasizes the current understanding of AI, the different types of AI, its current use in multiple mental health disorders, its advantages and disadvantages, and its future potential. As technology develops and the digitalization of the modern era increases, the application of artificial intelligence in psychiatry will rise; therefore, a comprehensive understanding will be needed. We searched PubMed, Google Scholar, and ScienceDirect using relevant keywords. In a recent review of studies using electronic health records (EHR) with AI and machine learning techniques for diagnosing all clinical conditions, roughly 99 publications were found. Of these, 35 studies addressed mental health disorders in all age groups, and among them, six studies utilized EHR data sources. By critically analyzing prominent scholarly works, we aim to illuminate the current state of this technology, exploring its successes, limitations, and future directions. In doing so, we hope to contribute to a nuanced understanding of AI's potential to revolutionize mental health diagnostics and pave the way for further research and development in this critically important domain.
Affiliation(s)
- Fabeha Zafar: Internal Medicine, Dow University of Health Sciences (DUHS), Karachi, Pakistan
- Rafael R Vivas: Nutrition, Food and Exercise Sciences, Florida State University College of Human Sciences, Tallahassee, USA
- Jada Wang: Medicine, St. George's University, Brooklyn, USA
- See Jia Whei: Internal Medicine, Sriwijaya University, Palembang, Indonesia
- Zahra Nazir: Internal Medicine, Combined Military Hospital, Quetta, Pakistan
3
Monteith S, Glenn T, Geddes JR, Whybrow PC, Achtyes ED, Bauer M. Implications of Online Self-Diagnosis in Psychiatry. Pharmacopsychiatry 2024; 57:45-52. [PMID: 38471511 DOI: 10.1055/a-2268-5441]
Abstract
Online self-diagnosis of psychiatric disorders by the general public is increasing. The reasons for the increase include the expansion of Internet technologies and the use of social media, the rapid growth of direct-to-consumer e-commerce in healthcare, and the increased emphasis on patient involvement in decision making. The publicity given to artificial intelligence (AI) has also contributed to the increased use of online screening tools by the general public. This paper aims to review factors contributing to the expansion of online self-diagnosis by the general public, and discuss both the risks and benefits of online self-diagnosis of psychiatric disorders. A narrative review was performed with examples obtained from the scientific literature and commercial articles written for the general public. Online self-diagnosis of psychiatric disorders is growing rapidly. Some people with a positive result on a screening tool will seek professional help. However, there are many potential risks for patients who self-diagnose, including an incorrect or dangerous diagnosis, increased patient anxiety about the diagnosis, obtaining unfiltered advice on social media, using the self-diagnosis to self-treat, including online purchase of medications without a prescription, and technical issues including the loss of privacy. Physicians need to be aware of the increase in self-diagnosis by the general public and the potential risks, both medical and technical. Psychiatrists must recognize that the general public is often unaware of the challenging medical and technical issues involved in the diagnosis of a mental disorder, and be ready to treat patients who have already obtained an online self-diagnosis.
Affiliation(s)
- Scott Monteith: Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, Michigan, USA
- Tasha Glenn: ChronoRecord Association, Fullerton, California, USA
- John R Geddes: Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C Whybrow: Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, California, USA
- Eric D Achtyes: Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA
- Michael Bauer: Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
4
Monteith S, Glenn T, Geddes JR, Achtyes ED, Whybrow PC, Bauer M. Challenges and Ethical Considerations to Successfully Implement Artificial Intelligence in Clinical Medicine and Neuroscience: a Narrative Review. Pharmacopsychiatry 2023; 56:209-213. [PMID: 37643732 DOI: 10.1055/a-2142-9325]
Abstract
This narrative review discusses how the safe and effective use of clinical artificial intelligence (AI) prediction tools requires recognition of the importance of human intelligence. Human intelligence, creativity, situational awareness, and professional knowledge are required for successful implementation. The implementation of clinical AI prediction tools may change the workflow in medical practice, resulting in new challenges and safety implications. Human understanding of how a clinical AI prediction tool performs in routine and exceptional situations is fundamental to successful implementation. Physicians must be involved in all aspects of the selection, implementation, and ongoing product monitoring of clinical AI prediction tools.
Affiliation(s)
- Scott Monteith: Department of Psychiatry, Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI, USA
- Tasha Glenn: ChronoRecord Association, Fullerton, CA, USA
- John R Geddes: Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Eric D Achtyes: Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, MI, USA
- Peter C Whybrow: Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Michael Bauer: Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
5
Kelly S, Kaye SA, White KM, Oviedo-Trespalacios O. Clearing the way for participatory data stewardship in artificial intelligence development: a mixed methods approach. Ergonomics 2023; 66:1782-1799. [PMID: 38054452 DOI: 10.1080/00140139.2023.2289864]
Abstract
Participatory data stewardship (PDS) empowers individuals to shape and govern their data via responsible collection and use. As artificial intelligence (AI) requires massive amounts of data, research must assess what factors predict consumers' willingness to provide their data to AI. This mixed-methods study applied the extended Technology Acceptance Model (TAM) with additional predictors of trust and subjective norms. Participants' data donation profile was also measured to assess the influence of individuals' sense of social duty, understanding of the purpose, and guilt. Participants (N = 322) completed an experimental survey. Individuals were willing to provide data to AI via PDS when they believed it was their social duty, understood the purpose, and trusted AI. However, the TAM may not be a complete model for assessing user willingness. This study establishes that individuals value the importance of trusting and comprehending the broader societal impact of AI when providing their data to AI.
Practitioner summary: To build responsible and representative AI, individuals are needed to participate in data stewardship. The factors driving willingness to participate in such methods were studied via an online survey. Trust, social duty, and understanding the purpose significantly predicted willingness to provide data to AI via participatory data stewardship.
Affiliation(s)
- Sage Kelly: Centre for Accident Research and Road Safety - Queensland (CARRS-Q), School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
- Sherrie-Anne Kaye: Centre for Accident Research and Road Safety - Queensland (CARRS-Q), School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
- Katherine M White: Faculty of Health, School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
- Oscar Oviedo-Trespalacios: Faculty of Technology, Policy and Management, Delft University of Technology, Delft, the Netherlands
6
Oudin A, Maatoug R, Bourla A, Ferreri F, Bonnot O, Millet B, Schoeller F, Mouchabac S, Adrien V. Digital Phenotyping: Data-Driven Psychiatry to Redefine Mental Health. J Med Internet Res 2023; 25:e44502. [PMID: 37792430 PMCID: PMC10585447 DOI: 10.2196/44502]
Abstract
The term "digital phenotype" refers to the digital footprint left by patient-environment interactions. It has potential for both research and clinical applications but challenges our conception of health care by opposing 2 distinct approaches to medicine: one centered on illness with the aim of classifying and curing disease, and the other centered on patients, their personal distress, and their lived experiences. In the context of mental health and psychiatry, the potential benefits of digital phenotyping include creating new avenues for treatment and enabling patients to take control of their own well-being. However, this comes at the cost of sacrificing the fundamental human element of psychotherapy, which is crucial to addressing patients' distress. In this viewpoint paper, we discuss the advances rendered possible by digital phenotyping and highlight the risk that this technology may pose by partially excluding health care professionals from the diagnosis and therapeutic process, thereby foregoing an essential dimension of care. We conclude by setting out concrete recommendations on how to improve current digital phenotyping technology so that it can be harnessed to redefine mental health by empowering patients without alienating them.
Affiliation(s)
- Antoine Oudin: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Redwan Maatoug: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Alexis Bourla: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France; Medical Strategy and Innovation Department, Clariane, Paris, France; NeuroStim Psychiatry Practice, Paris, France
- Florian Ferreri: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
- Olivier Bonnot: Department of Child and Adolescent Psychiatry, Nantes University Hospital, Nantes, France; Pays de la Loire Psychology Laboratory, Nantes University, Nantes, France
- Bruno Millet: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Félix Schoeller: Institute for Advanced Consciousness Studies, Santa Monica, CA, United States; Media Lab, Massachusetts Institute of Technology, Cambridge, MA, United States
- Stéphane Mouchabac: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
- Vladimir Adrien: Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France; Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
7
McCradden M, Hui K, Buchman DZ. Evidence, ethics and the promise of artificial intelligence in psychiatry. J Med Ethics 2023; 49:573-579. [PMID: 36581457 PMCID: PMC10423547 DOI: 10.1136/jme-2022-108447]
Abstract
Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine's understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.
Affiliation(s)
- Melissa McCradden: Joint Centre for Bioethics, University of Toronto Dalla Lana School of Public Health, Toronto, Ontario, Canada; Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada; Genetics & Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Katrina Hui: Everyday Ethics Lab, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Daniel Z Buchman: Joint Centre for Bioethics, University of Toronto Dalla Lana School of Public Health, Toronto, Ontario, Canada; Everyday Ethics Lab, Centre for Addiction and Mental Health, Toronto, Ontario, Canada