1
Tai AMY, Kim JJ, Schmeckenbecher J, Kitchin V, Wang J, Kazemi A, Masoudi R, Fadakar H, Iorfino F, Krausz RM. Clinical decision support systems in addiction and concurrent disorders: A systematic review and meta-analysis. J Eval Clin Pract 2024; 30:1664-1683. [PMID: 38979849] [DOI: 10.1111/jep.14069]
Abstract
INTRODUCTION This review aims to synthesise the literature on the efficacy, evolution, and challenges of implementing Clinical Decision Support Systems (CDSS) in mental health, addiction, and concurrent disorders.
METHODS Following PRISMA guidelines, a systematic review and meta-analysis were performed. Searches of MEDLINE, Embase, CINAHL, PsycINFO, and Web of Science through 25 May 2023 yielded 27,344 records; after exclusions, 69 records were retained for detailed synthesis. Meta-analytic techniques were used to synthesise data from randomised controlled trials on patient outcomes, focusing on therapeutic efficacy, patient satisfaction, and treatment acceptance.
RESULTS A total of 69 studies were included, revealing a shift from knowledge-based models before 2017 to a rise in data-driven models after 2017. Most models were in Stage 2 or 4 of maturity. The meta-analysis showed an effect size of -0.11 for addiction-related outcomes and a stronger effect size of -0.50 for patient satisfaction and acceptance of CDSS.
DISCUSSION The results indicate a shift from knowledge-based to data-driven CDSS approaches, aligned with advances in machine learning and big data. Although the immediate impact on addiction outcomes is modest, higher patient satisfaction suggests promise for wider CDSS use. Identified challenges include alert fatigue and opaque AI models.
CONCLUSION CDSS shows promise in mental health and addiction treatment but requires a nuanced approach for effective and ethical implementation. The results emphasise the need for continued research to ensure optimised and equitable use in healthcare settings.
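The pooled effect sizes reported above come from standard meta-analytic synthesis of per-study effects. A minimal sketch of the common inverse-variance, random-effects (DerSimonian-Laird) approach, using illustrative effect sizes and variances rather than the review's actual data:

```python
import math

def pool_effects(effects, variances):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model (illustrative values, not the review's data)."""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights add tau^2 to each within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

est, ci = pool_effects([-0.30, -0.05, 0.02], [0.04, 0.02, 0.05])
print(est, ci)
```

With little heterogeneity (Q below its degrees of freedom), tau^2 truncates to zero and the random-effects estimate coincides with the fixed-effect one, here a small pooled effect near -0.10.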
Affiliation(s)
- Andy Man Yeung Tai
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Jane J Kim
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Jim Schmeckenbecher
- Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Vanessa Kitchin
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Johnston Wang
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Alireza Kazemi
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Raha Masoudi
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Hasti Fadakar
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Frank Iorfino
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
- Reinhard Michael Krausz
- Department of Psychiatry, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
2
Rony MKK, Numan SM, Akter K, Tushar H, Debnath M, Johra FT, Akter F, Mondal S, Das M, Uddin MJ, Begum J, Parvin MR. Nurses' perspectives on privacy and ethical concerns regarding artificial intelligence adoption in healthcare. Heliyon 2024; 10:e36702. [PMID: 39281626] [PMCID: PMC11400963] [DOI: 10.1016/j.heliyon.2024.e36702]
Abstract
Background With the increasing integration of artificial intelligence (AI) technologies into healthcare systems, there is a growing emphasis on privacy and ethical considerations. Nurses, as frontline healthcare professionals, are pivotal in patient care and offer valuable insights into the ethical implications of AI adoption.
Objectives This study aimed to explore nurses' perspectives on privacy and ethical concerns associated with the implementation of AI in healthcare settings.
Methods We employed Van Manen's hermeneutic phenomenology as the qualitative research approach. Data were collected through purposive sampling from December 7, 2023, to January 15, 2024, with interviews conducted in Bengali. Thematic analysis was used, supported by member checking and an audit trail.
Results Six themes emerged from the findings:
- Ethical dimensions of AI integration, highlighting complexities in incorporating AI ethically;
- Privacy challenges in healthcare AI, revealing concerns about data security and confidentiality;
- Balancing innovation and ethical practice, indicating a need to reconcile technological advancements with ethical considerations;
- Human touch vs. technological progress, underscoring tensions between automation and personalized care;
- Patient-centered care in the AI era, emphasizing the importance of maintaining focus on patients amidst technological advancements;
- Ethical preparedness and education, suggesting a need for enhanced training and education on ethical AI use in healthcare.
Conclusions The findings underscore the importance of addressing privacy and ethical concerns in AI healthcare development. Nurses advocate for patient-centered approaches and for collaboration with policymakers and technology developers to ensure responsible AI adoption. Further research is needed to mitigate ethical challenges and promote ethical AI use in healthcare practice.
Affiliation(s)
- Sharker Md Numan
- School of Science and Technology, Bangladesh Open University, Gazipur, Bangladesh
- Khadiza Akter
- Master of Public Health, Daffodil International University, Dhaka, Bangladesh
- Hasanuzzaman Tushar
- Department of Business Administration, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Mitun Debnath
- Master of Public Health, National Institute of Preventive and Social Medicine, Dhaka, Bangladesh
- Fateha Tuj Johra
- Masters in Disaster Management, University of Dhaka, Dhaka, Bangladesh
- Fazila Akter
- Dhaka Nursing College, Affiliated with the University of Dhaka, Bangladesh
- Sujit Mondal
- Master of Science in Nursing, National Institute of Advanced Nursing Education and Research Mugda, Dhaka, Bangladesh
- Mousumi Das
- Master of Public Health, Leading University, Sylhet, Bangladesh
- Muhammad Join Uddin
- Master of Public Health, RTM Al-Kabir Technical University, Sylhet, Bangladesh
- Jeni Begum
- Master of Public Health, Leading University, Sylhet, Bangladesh
- Mst Rina Parvin
- School of Medical Sciences, Shahjalal University of Science and Technology, Bangladesh
- Bangladesh Army (AFNS Officer), Combined Military Hospital, Dhaka, Bangladesh
3
Oudin A, Maatoug R, Bourla A, Ferreri F, Bonnot O, Millet B, Schoeller F, Mouchabac S, Adrien V. Digital Phenotyping: Data-Driven Psychiatry to Redefine Mental Health. J Med Internet Res 2023; 25:e44502. [PMID: 37792430] [PMCID: PMC10585447] [DOI: 10.2196/44502]
Abstract
The term "digital phenotype" refers to the digital footprint left by patient-environment interactions. It has potential for both research and clinical applications but challenges our conception of health care by opposing 2 distinct approaches to medicine: one centered on illness with the aim of classifying and curing disease, and the other centered on patients, their personal distress, and their lived experiences. In the context of mental health and psychiatry, the potential benefits of digital phenotyping include creating new avenues for treatment and enabling patients to take control of their own well-being. However, this comes at the cost of sacrificing the fundamental human element of psychotherapy, which is crucial to addressing patients' distress. In this viewpoint paper, we discuss the advances rendered possible by digital phenotyping and highlight the risk that this technology may pose by partially excluding health care professionals from the diagnosis and therapeutic process, thereby foregoing an essential dimension of care. We conclude by setting out concrete recommendations on how to improve current digital phenotyping technology so that it can be harnessed to redefine mental health by empowering patients without alienating them.
Affiliation(s)
- Antoine Oudin
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Redwan Maatoug
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Alexis Bourla
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
- Medical Strategy and Innovation Department, Clariane, Paris, France
- NeuroStim Psychiatry Practice, Paris, France
- Florian Ferreri
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
- Olivier Bonnot
- Department of Child and Adolescent Psychiatry, Nantes University Hospital, Nantes, France
- Pays de la Loire Psychology Laboratory, Nantes University, Nantes, France
- Bruno Millet
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Pitié-Salpêtrière Hospital, Public Hospitals of Sorbonne University, Paris, France
- Félix Schoeller
- Institute for Advanced Consciousness Studies, Santa Monica, CA, United States
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, United States
- Stéphane Mouchabac
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
- Vladimir Adrien
- Infrastructure for Clinical Research in Neurosciences, Paris Brain Institute, Sorbonne University - Institut national de la santé et de la recherche médicale - Centre national de la recherche scientifique, Paris, France
- Department of Psychiatry, Saint-Antoine Hospital, Public Hospitals of Sorbonne University, Paris, France
4
Kerr JI, Naegelin M, Benk M, V Wangenheim F, Meins E, Viganò E, Ferrario A. Investigating Employees' Concerns and Wishes for Digital Stress Management Interventions with Value Sensitive Design: Mixed Methods Study. J Med Internet Res 2023; 25:e44131. [PMID: 37052996] [PMCID: PMC10141316] [DOI: 10.2196/44131]
Abstract
BACKGROUND Work stress places a heavy economic and disease burden on society. Recent technological advances include digital health interventions for helping employees prevent and manage their stress at work effectively. Although such digital solutions come with an array of ethical risks, especially if they involve biomedical big data, the incorporation of employees' values in their design and deployment has been widely overlooked.
OBJECTIVE To bridge this gap, we used the value sensitive design (VSD) framework to identify relevant values concerning a digital stress management intervention (dSMI) at the workplace, assess how users comprehend these values, and derive specific requirements for an ethics-informed design of dSMIs. VSD is a theoretically grounded framework that front-loads ethics by accounting for values throughout the design process of a technology.
METHODS We conducted a literature search to identify relevant values of dSMIs at the workplace. To understand how potential users comprehend these values and to derive design requirements, we conducted a web-based study with employees of a Swiss company, containing closed and open questions and allowing both quantitative and qualitative analyses.
RESULTS The values health and well-being, privacy, autonomy, accountability, and identity were identified through our literature search. Statistical analysis of 170 responses from the web-based study revealed that the intention to use and perceived usefulness of a dSMI were moderate to high. Employees' moderate to high health and well-being concerns included worries that a dSMI would not be effective or would even amplify their stress levels. Privacy concerns were also rated on the higher end of the score range, whereas concerns regarding autonomy, accountability, and identity were rated lower. Moreover, a personalized dSMI with a monitoring system involving a machine learning-based analysis of data led to significantly higher privacy (P=.009) and accountability concerns (P=.04) than a dSMI without a monitoring system. In addition, integrability, user-friendliness, and digital independence emerged as novel values from the qualitative analysis of 85 text responses.
CONCLUSIONS Although most surveyed employees were willing to use a dSMI at the workplace, there were considerable health and well-being concerns with regard to effectiveness and problem perpetuation. For a minority of employees who value digital independence, a nondigital offer might be more suitable. In terms of the type of dSMI, privacy and accountability concerns must be particularly well addressed if a machine learning-based monitoring component is included. To help mitigate these concerns, we propose specific requirements to support the VSD of a dSMI at the workplace. The results of this work and our research protocol will inform future research on VSD-based interventions and further advance the integration of ethics in digital health.
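The between-variant differences reported above (P=.009, P=.04) come from the authors' own statistical analysis. As a generic illustration of how such a difference in concern ratings can be tested without distributional assumptions, here is a simple two-sided permutation test on synthetic 1-7 ratings; the data, group sizes, and test choice are all assumptions for the sketch, not the study's:

```python
import random
import statistics

def permutation_test(a, b, n_iter=5000, seed=0):
    """Two-sided permutation test for a difference in mean ratings
    between two groups (synthetic illustration, not the study's data)."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # relabel responses at random under the null
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_iter

# Hypothetical privacy-concern ratings: monitoring vs. no-monitoring variant
with_monitoring = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]
without_monitoring = [4, 3, 5, 4, 4, 3, 5, 4, 4, 3]
p = permutation_test(with_monitoring, without_monitoring)
print(p)
```

With clearly separated groups like these, almost no random relabelling reproduces the observed mean difference, so the estimated P value is near zero.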
Affiliation(s)
- Jasmine I Kerr
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Mara Naegelin
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Michaela Benk
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Florian V Wangenheim
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Erika Meins
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Eleonora Viganò
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Andrea Ferrario
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
5
Saheb T, Saheb T, Carpenter DO. Mapping research strands of ethics of artificial intelligence in healthcare: A bibliometric and content analysis. Comput Biol Med 2021; 135:104660. [PMID: 34346319] [DOI: 10.1016/j.compbiomed.2021.104660]
Abstract
The use of artificial intelligence (AI) to advance healthcare is growing rapidly. Notwithstanding its promise, however, AI in healthcare also raises ethical challenges. This research delineates the most influential elements of scientific research on AI ethics in healthcare through bibliometric analysis, social network analysis, and cluster-based content analysis of scientific articles. The bibliometric analysis not only identified the most influential authors, countries, institutions, sources, and documents, but also recognized four ethical concerns associated with 12 medical issues; these ethical categories comprise normative, meta-ethical, epistemological, and medical-practice concerns. The content analysis complemented this list and distinguished seven further categories: ethics of relationships, medico-legal concerns, ethics of robots, ethics of ambient intelligence, patients' rights, physicians' rights, and ethics of predictive analytics. The analysis likewise identified 40 general research gaps in the literature and plausible future research strands. It furthers conversations on the ethics of AI and associated emerging technologies, such as nanotechnology and biotechnology, in healthcare, and hence advances convergence research on the ethics of AI in healthcare. Practically, this research provides a map for policymakers, AI engineers, and scientists of which dimensions of AI-based medical interventions require stricter policies and guidelines and robust ethical design and development.
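Keyword co-occurrence counting is the basic building block of the kind of bibliometric and social network analysis described above: each article's keyword set contributes edges to a network whose heaviest links mark research strands. A minimal sketch with toy keywords (not the paper's corpus or its actual method):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(records):
    """Count pairwise keyword co-occurrences across article records,
    the edge-weighting step of a keyword co-occurrence network
    (toy keywords, not the paper's corpus)."""
    edges = Counter()
    for keywords in records:
        # deduplicate within an article, sort so each pair has one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

records = [
    ["ai ethics", "privacy", "healthcare"],
    ["ai ethics", "robots", "healthcare"],
    ["privacy", "healthcare", "predictive analytics"],
]
edges = cooccurrence_edges(records)
print(edges.most_common(2))  # the two heaviest edges in the toy network
```

In a full analysis the resulting weighted edges would be clustered (e.g. by community detection) to surface thematic groups such as the ethical categories the paper reports.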
Affiliation(s)
- Tahereh Saheb
- Management Studies Center, Tarbiat Modares University, Tehran, Iran
- Tayebeh Saheb
- Assistant Professor, Faculty of Law, Tarbiat Modares University, Tehran, Iran
- David O Carpenter
- Director of the Institute for Health and the Environment, School of Public Health, State University of New York, University at Albany, USA