1. Koker O, Sahin S, Yildiz M, Adrovic A, Kasapcopur O. The emerging paradigm in pediatric rheumatology: harnessing the power of artificial intelligence. Rheumatol Int 2024; 44:2315-2325. [PMID: 39012357; PMCID: PMC11424736; DOI: 10.1007/s00296-024-05661-x]
Abstract
Artificial intelligence algorithms, whose roots extend far into the past but which have seen a resurgence in recent years owing to their superiority over traditional methods and their contributions to human capabilities, have begun to make their presence felt in pediatric rheumatology. In this ever-evolving field, artificial intelligence has supported incremental advances in understanding and stratifying diseases, developing biomarkers, refining visual analyses, and facilitating individualized treatment approaches. However, as in many other domains, these strides have yet to gain clinical applicability and validation, and ethical issues remain unresolved. Furthermore, mastering the novel terminology involved appears challenging for clinicians. This review provides a comprehensive overview of the current literature, categorizing algorithms and their applications, and thus offers a fresh perspective on the nascent relationship between pediatric rheumatology and artificial intelligence, highlighting both its advancements and constraints.
Affiliation(s)
- Oya Koker: Department of Pediatric Rheumatology, Faculty of Medicine, Marmara University, Istanbul, Turkey
- Sezgin Sahin: Department of Pediatric Rheumatology, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Mehmet Yildiz: Department of Pediatric Rheumatology, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Amra Adrovic: Department of Pediatric Rheumatology, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Ozgur Kasapcopur: Department of Pediatric Rheumatology, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
2. Ning Y, Teixayavong S, Shang Y, Savulescu J, Nagaraj V, Miao D, Mertens M, Ting DSW, Ong JCL, Liu M, Cao J, Dunn M, Vaughan R, Ong MEH, Sung JJY, Topol EJ, Liu N. Generative artificial intelligence and ethical considerations in health care: a scoping review and ethics checklist. Lancet Digit Health 2024; 6:e848-e856. [PMID: 39294061; DOI: 10.1016/s2589-7500(24)00143-2]
Abstract
The widespread use of Chat Generative Pre-trained Transformer (known as ChatGPT) and other emerging technologies powered by generative artificial intelligence (GenAI) has drawn attention to the potential ethical issues these tools can cause, especially in high-stakes applications such as health care, but ethical discussions have not yet been translated into operationalisable solutions. Furthermore, ongoing ethical discussions often neglect other types of GenAI that have been used to synthesise data (eg, images) for research and practical purposes, which resolve some ethical issues and expose others. We did a scoping review of the ethical discussions on GenAI in health care to comprehensively analyse gaps in the research. To reduce these gaps, we developed a checklist for the comprehensive assessment and evaluation of ethical discussions in GenAI research. The checklist can be integrated into peer review and publication systems to enhance GenAI research and might be useful for ethics-related disclosures for GenAI-powered products and the health-care applications of such products and beyond.
Affiliation(s)
- Yilin Ning: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Yuqing Shang: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Julian Savulescu: Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- Di Miao: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Mayli Mertens: Centre for Ethics, Department of Philosophy, University of Antwerp, Antwerp, Belgium; Antwerp Center on Responsible AI, University of Antwerp, Antwerp, Belgium
- Daniel Shu Wei Ting: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; SingHealth AI Office, Singapore Health Services, Singapore
- Mingxuan Liu: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Jiuwen Cao: Machine Learning and I-Health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, Zhejiang, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, China
- Michael Dunn: Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Roger Vaughan: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Marcus Eng Hock Ong: Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Department of Emergency Medicine, Singapore General Hospital, Singapore
- Joseph Jao-Yiu Sung: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Eric J Topol: Scripps Research Translational Institute, Scripps Research, La Jolla, CA, USA
- Nan Liu: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Institute of Data Science, National University of Singapore, Singapore
3. Hassan EA, El-Ashry AM. Leading with AI in critical care nursing: challenges, opportunities, and the human factor. BMC Nurs 2024; 23:752. [PMID: 39402609; PMCID: PMC11475860; DOI: 10.1186/s12912-024-02363-4]
Abstract
INTRODUCTION The integration of artificial intelligence (AI) in intensive care units (ICUs) presents both opportunities and challenges for critical care nurses. This study delves into the human factor, exploring how nurses with leadership roles perceive the impact of AI on their professional practice. OBJECTIVE To investigate how nurses perceive the impact of AI on their professional identity, the ethical considerations surrounding its use, and the shared meanings they attribute to trust, collaboration, and communication when working with AI systems. METHODS An interpretive phenomenological analysis was used to capture the lived experiences of critical care nurses leading with AI. Ten nurses with leadership roles in various ICU specializations were recruited through purposive sampling and interviewed. Semi-structured interviews explored nurses' experiences with AI, its challenges, and its opportunities. Thematic analysis identified recurring themes related to the human factor in leading with AI. FINDINGS Thematic analysis revealed two key themes: 'leading with AI: making sense of challenges and opportunities' and 'the human factor in leading with AI'. Their six subthemes revealed that AI offered benefits such as task automation, but concerns existed about overreliance and the need for ongoing training. New challenges emerged, including adapting to new workflows and managing potential bias. Clear communication and collaboration were crucial for successful AI integration. Building trust in AI hinged on transparency, and collaboration allowed nurses to focus on human-centered care while AI supported data analysis. Ethical considerations included maintaining patient autonomy and ensuring accountability in AI-driven decisions. CONCLUSION While AI presents opportunities for automation and data analysis, successful integration hinges on addressing concerns about overreliance, workflow adaptation, and potential bias. Building trust and fostering collaboration are fundamental to AI integration. Transparency in AI systems allows nurses to confidently delegate tasks, while collaboration empowers them to focus on human-centered care with AI support. Ultimately, addressing the ethical concerns of AI in ICU care requires prioritizing patient autonomy and ensuring accountability in AI-driven decisions.
Affiliation(s)
- Eman Arafa Hassan: Critical Care and Emergency Nursing Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt
- Ayman Mohamed El-Ashry: Psychiatric and Mental Health Nursing Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt; Psychiatric and Mental Health Nursing, Department of Nursing, College of Applied Medical Sciences, Jouf University, Al-Qurayyat, Saudi Arabia
4. Funer F, Tinnemeyer S, Liedtke W, Salloch S. Clinicians' roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. BMC Med Ethics 2024; 25:107. [PMID: 39375660; PMCID: PMC11457475; DOI: 10.1186/s12910-024-01109-w]
Abstract
BACKGROUND Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders' viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating in depth the requirements for understanding and explicability and the rationale behind them. On the other hand, it surveys medical students at the end of their studies as stakeholders on whom little data is available so far, but for whom AI-CDSS will be an important part of their medical practice. METHODS Fifteen semi-structured qualitative interviews (each lasting an average of 56 min) were conducted with German medical students to investigate their perspectives on and attitudes towards the use of AI-CDSS. The problem-centred interviews drew on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees' perceptions and convictions of their own clinical role and responsibilities in dealing with AI-CDSS were elicited, as were viewpoints on explicability and the necessary level of understanding and competencies needed on the clinicians' side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz). RESULTS In response to the central question about the necessary understanding of AI-CDSS tools and the emergence of their outputs, as well as the reasons for the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees' statements: the first type, the clinician as a systemic trustee (or "the one relying"), highlights that there needs to be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Based on proof of these requirements, the use of an AI-CDSS would be appropriate, as according to "the one relying", clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or "the one controlling"), sets higher prerequisites that go beyond ensuring empirical evidence and adequate approval processes. These higher prerequisites relate to the clinician's necessary level of competence and understanding of how a specific AI-CDSS works and how to use it properly in order to evaluate its outputs and to mitigate potential risks for the individual patient. Both types are unified in their high esteem of evidence-based clinical practice and the need to communicate with the patient on the use of medical AI. However, the interviewees' different conceptions of the clinician's role and responsibilities cause them to have different requirements regarding the clinician's understanding and explicability of an AI-CDSS beyond the proof of benefit. CONCLUSIONS The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches search for appropriate minimum requirements of understanding and competence, the differences described here between (future) clinicians in terms of their information and understanding needs could lead to more differentiated solutions.
Affiliation(s)
- F Funer: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany; Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074 Tübingen, Germany
- S Tinnemeyer: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- W Liedtke: Faculty of Theology, University of Greifswald, Am Rubenowplatz 2/3, 17489 Greifswald, Germany
- S Salloch: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625 Hannover, Germany
5. Bagaria V, Chhabra HS. The balancing act: Adopting AI and robotics in medicine with cautious optimism. J Clin Orthop Trauma 2024; 57:102550. [PMID: 39398288; PMCID: PMC11466566; DOI: 10.1016/j.jcot.2024.102550]
Affiliation(s)
- Vaibhav Bagaria (corresponding author): Department of Orthopedics, Sir HN Reliance Foundation Hospital, Mumbai, India
6. Shipton L, Vitale L. Artificial intelligence and the politics of avoidance in global health. Soc Sci Med 2024; 359:117274. [PMID: 39217716; DOI: 10.1016/j.socscimed.2024.117274]
Abstract
For decades, global health actors have centered technology in their interventions. Today, artificial intelligence (AI) is emerging as the latest technology-based solution in global health. Yet, AI, like other technological interventions, is not a comprehensive solution to the fundamental determinants of global health inequities. This article gathers and critically appraises grey and peer-reviewed literature on AI in global health to explore the question: What is avoided when global health prioritizes technological solutions to problems with deep-seated political, economic, and commercial determinants? Our literature search and selection yielded 34 documents, which we analyzed to develop seven areas where AI both continues and disrupts past legacies of technological interventions in global health, with significant implications for health equity and human rights. By focusing on the power dynamics that underpin AI's expansion in global health, we situate it as the latest in a long line of technological interventions that avoids addressing the fundamental determinants of health inequities, albeit at times differently than its technology-based predecessors. We call this phenomenon the 'politics of avoidance.' We conclude with reflections on how the literature we reviewed engages with and recognizes the politics of avoidance and with suggestions for future research, practice, and advocacy.
Affiliation(s)
- Leah Shipton: Department of Political Science, University of British Columbia, 1866 Main Mall C425, Vancouver, BC V6T 1Z1, Canada; School of Public Policy, Simon Fraser University, 515 West Hastings Street, Office 3269, Vancouver, BC V6B 5K3, Canada
- Lucia Vitale: Politics Department, University of California at Santa Cruz, 639 Merrill Rd, Santa Cruz, CA 95064, United States
7. Eguia H, Sánchez-Bocanegra CL, Vinciarelli F, Alvarez-Lopez F, Saigí-Rubió F. Clinical Decision Support and Natural Language Processing in Medicine: Systematic Literature Review. J Med Internet Res 2024; 26:e55315. [PMID: 39348889; PMCID: PMC11474138; DOI: 10.2196/55315]
Abstract
BACKGROUND Ensuring access to accurate and verified information is essential for effective patient treatment and diagnosis. Although health workers rely on the internet for clinical data, there is a need for a more streamlined approach. OBJECTIVE This systematic review aims to assess the current state of artificial intelligence (AI) and natural language processing (NLP) techniques in health care to identify their potential use in electronic health records and automated information searches. METHODS A search was conducted in the PubMed, Embase, ScienceDirect, Scopus, and Web of Science online databases for articles published between January 2000 and April 2023. The only inclusion criteria were (1) original research articles and studies on the application of AI-based medical clinical decision support using NLP techniques and (2) publications in English. A Critical Appraisal Skills Programme tool was used to assess the quality of the studies. RESULTS The search yielded 707 articles, from which 26 studies were included (24 original articles and 2 systematic reviews). Of the evaluated articles, 21 (81%) explained the use of NLP as a source of data collection, 18 (69%) used electronic health records as a data source, and a further 8 (31%) were based on clinical data. Only 5 (19%) of the articles showed the use of combined strategies for NLP to obtain clinical data. In total, 16 (62%) articles presented stand-alone data review algorithms. Other studies (n=9, 35%) showed that the clinical decision support system alternative was also a way of displaying the information obtained for immediate clinical use. CONCLUSIONS The use of NLP engines can effectively improve clinical decision systems' accuracy, while biphasic tools combining AI algorithms and human criteria may optimize clinical diagnosis and treatment flows. TRIAL REGISTRATION PROSPERO CRD42022373386; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=373386.
Affiliation(s)
- Hans Eguia: SEMERGEN New Technologies Working Group, Madrid, Spain; Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Barcelona, Spain
- Franco Vinciarelli: SEMERGEN New Technologies Working Group, Madrid, Spain; Emergency Hospital Clemente Álvarez, Rosario (Santa Fe), Argentina
- Francesc Saigí-Rubió: Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Barcelona, Spain
8. Sonmez SC, Sevgi M, Antaki F, Huemer J, Keane PA. Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges. Br J Ophthalmol 2024; 108:1335-1340. [PMID: 38925907; PMCID: PMC11503064; DOI: 10.1136/bjo-2024-325458]
Abstract
The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundational models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and there are several challenges to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.
Affiliation(s)
- Mertcan Sevgi: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
- Fares Antaki: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK; The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada
- Josef Huemer: Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK; Department of Ophthalmology and Optometry, Kepler University Hospital, Linz, Austria
- Pearse A Keane: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
9. Mooghali M, Stroud AM, Yoo DW, Barry BA, Grimshaw AA, Ross JS, Zhu X, Miller JE. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak 2024; 24:247. [PMID: 39232725; PMCID: PMC11373417; DOI: 10.1186/s12911-024-02653-6]
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, to inform practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives when using AI in cardiovascular care. METHODS In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search was completed on May 24, 2022 and was not limited by date or study design. RESULTS After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights. CONCLUSION This review revealed key ethical concerns and barriers and facilitators of trust in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight on the use of patient data and promoting transparency around the use of AI in patient care.
Affiliation(s)
- Maryam Mooghali: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA; Yale Center for Outcomes Research and Evaluation (CORE), 195 Church Street, New Haven, CT 06510, USA
- Austin M Stroud: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, USA
- Dong Whi Yoo: School of Information, Kent State University, Kent, OH, USA
- Barbara A Barry: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA; Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, USA
- Alyssa A Grimshaw: Harvey Cushing/John Hay Whitney Medical Library, Yale University, New Haven, CT, USA
- Joseph S Ross: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA; Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- Xuan Zhu: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Jennifer E Miller: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
10. Marques M, Almeida A, Pereira H. The Medicine Revolution Through Artificial Intelligence: Ethical Challenges of Machine Learning Algorithms in Decision-Making. Cureus 2024; 16:e69405. [PMID: 39411643; PMCID: PMC11473215; DOI: 10.7759/cureus.69405]
Abstract
The integration of artificial intelligence (AI) and its autonomous learning processes (or machine learning) in medicine has revolutionized the global health landscape, providing faster and more accurate diagnoses, personalization of medical treatment, and efficient management of clinical information. However, this transformation is not without ethical challenges, which require a comprehensive and responsible approach. There are many fields where AI and medicine intersect, such as health education, patient-doctor interface, data management, diagnosis, intervention, and decision-making processes. For some of these fields, there are some guidelines to regulate them. AI has numerous applications in medicine, including medical imaging analysis, diagnosis, predictive analytics for patient outcomes, drug discovery and development, virtual health assistants, and remote patient monitoring. It is also used in robotic surgery, clinical decision support systems, AI-powered chatbots for triage, administrative workflow automation, and treatment recommendations. Despite numerous applications, there are several problems related to the use of AI identified in the literature in general and in medicine in particular. These problems are data privacy and security, bias and discrimination, lack of transparency (Black Box Problem), integration with existing systems, cost and accessibility disparities, risk of overconfidence in AI, technical limitations, accountability for AI errors, algorithmic interpretability, data standardization issues, unemployment, and challenges in clinical validation. Of the various problems already identified, the most worrying are data bias, the black box phenomenon, questions about data privacy, responsibility for decision-making, security issues for the human species, and technological unemployment. There are still several ethical problems associated with the use of AI autonomous learning algorithms, namely epistemic, normative, and comprehensive ethical problems (overarching). Addressing all these issues is crucial to ensure that the use of AI in healthcare is implemented ethically and responsibly, providing benefits to populations without compromising fundamental values. Ongoing dialogue between healthcare providers and the industry, the establishment of ethical guidelines and regulations, and considering not only current ethical dilemmas but also future perspectives are fundamental points for the application of AI to medical practice. The purpose of this review is to discuss the ethical issues of AI algorithms used mainly in data management, diagnosis, intervention, and decision-making processes.
Affiliation(s)
- Marta Marques: Anesthesiology, Centro Hospitalar Universitário São João, Porto, PRT
- Ana Almeida: Anesthesiology, Centro Hospitalar Universitário São João, Porto, PRT
- Helder Pereira: Surgery and Physiology, Faculty of Medicine, Universidade do Porto, Porto, PRT
11. Sodhi R, Vatsyayan V, Panibatla V, Sayyad K, Williams J, Pattery T, Pal A. Impact of a pilot mHealth intervention on treatment outcomes of TB patients seeking care in the private sector using Propensity Scores Matching: evidence collated from New Delhi, India. PLOS Digit Health 2024; 3:e0000421. [PMID: 39259731; PMCID: PMC11389929; DOI: 10.1371/journal.pdig.0000421]
Abstract
Mobile health applications called digital adherence technologies (DATs) are increasingly used to improve treatment adherence among tuberculosis (TB) patients seeking cure, and among patients with other chronic diseases requiring long-term and complex medication regimens. DATs are found to be useful in resource-limited settings because of their cost efficiency in reaching vulnerable groups (providing pill and clinic-visit reminders, relevant health information, and motivational messages) and those staying in remote or rural areas. Despite their growing ubiquity, there is very limited evidence on how DATs improve healthcare outcomes. This study aims to understand the uptake patterns of a digital adherence technology and its impact on improving follow-ups and treatment outcomes among TB patients. We analyzed the uptake of a DAT in an urban setting (DS-DOST, powered by Connect for Life™, Johnson & Johnson) among different patient groups accessing TB services in New Delhi, India, and subsequently assessed its impact on patient engagement and treatment outcomes. Propensity score matching was used to create balanced treated and untreated patient datasets, before applying ordinary least squares and logistic regression methods to estimate the causal impact of the intervention on the number of follow-ups made with the patient and on treatment outcomes. After controlling for potential confounders, patients who installed and utilized the DS-DOST application received an average of 6.4 (95% CI 5.32 to 7.56) additional follow-ups, relative to those who did not utilize the application; this translates to a 58% increase. They also had 245% higher odds of treatment success (odds ratio: 3.458; 95% CI 1.709 to 6.996).
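The analysis pattern this abstract describes (propensity scores to balance DAT users and non-users, then ordinary least squares for the follow-up count and logistic regression for treatment success, where an odds ratio of 3.458 corresponds to (3.458 - 1) * 100 ≈ 245% higher odds) can be sketched as follows. This is a minimal illustration of the general technique under stated assumptions, not the authors' code; the DataFrame and its column names (used_dat, n_followups, treatment_success, and the covariate list) are hypothetical.

```python
# Illustrative sketch: propensity-score matching followed by OLS and logistic
# regression, mirroring the analysis pattern described in the abstract above.
# All column names are assumed for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_and_estimate(df: pd.DataFrame, covariates: list) -> None:
    # 1) Propensity scores: P(used the DAT | covariates).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["used_dat"])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

    # 2) 1:1 nearest-neighbour matching on the propensity score.
    treated = df[df["used_dat"] == 1]
    control = df[df["used_dat"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # 3) OLS on the matched sample: effect on number of follow-ups.
    X = sm.add_constant(matched["used_dat"].astype(float))
    ols = sm.OLS(matched["n_followups"], X).fit()
    print("Additional follow-ups attributable to the DAT:", ols.params["used_dat"])

    # 4) Logistic regression: exponentiate the coefficient to get the odds ratio.
    logit = sm.Logit(matched["treatment_success"], X).fit(disp=False)
    odds_ratio = np.exp(logit.params["used_dat"])
    print(f"Odds ratio {odds_ratio:.3f} -> {(odds_ratio - 1) * 100:.0f}% higher odds")
```

The same conversion explains the headline figure: exp(coefficient) gives the odds ratio, and subtracting 1 expresses it as a percentage increase in the odds of treatment success.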
Affiliation(s)
- Jason Williams: Disease Management Programs, Global Public Health at Johnson & Johnson, Germany
- Theresa Pattery: Disease Management Programs, Global Public Health at Johnson & Johnson, Germany
- Arnab Pal: William J Clinton Foundation, New Delhi, India
12. Salloch S, Eriksen A. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Am J Bioeth 2024; 24:67-78. [PMID: 38767971; DOI: 10.1080/15265161.2024.2353800]
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the "human in the loop" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
13. Welsh C, Román García S, Barnett GC, Jena R. Democratising artificial intelligence in healthcare: community-driven approaches for ethical solutions. Future Healthc J 2024; 11:100165. [PMID: 39371538; PMCID: PMC11452836; DOI: 10.1016/j.fhj.2024.100165]
Abstract
The rapid advancement and widespread adoption of artificial intelligence (AI) has ushered in a new era of possibilities in healthcare, ranging from clinical task automation to disease detection. AI algorithms have the potential to analyse medical data, enhance diagnostic accuracy, personalise treatment plans and predict patient outcomes among other possibilities. With a surge in AI's popularity, its developments are outpacing policy and regulatory frameworks, leading to concerns about ethical considerations and collaborative development. Healthcare faces its own ethical challenges, including biased datasets, under-representation and inequitable access to resources, all contributing to mistrust in medical systems. To address these issues in the context of AI healthcare solutions and prevent perpetuating existing inequities, it is crucial to involve communities and stakeholders in the AI lifecycle. This article discusses four community-driven approaches for co-developing ethical AI healthcare solutions, including understanding and prioritising needs, defining a shared language, promoting mutual learning and co-creation, and democratising AI. These approaches emphasise bottom-up decision-making to reflect and centre impacted communities' needs and values. These collaborative approaches provide actionable considerations for creating equitable AI solutions in healthcare, fostering a more just and effective healthcare system that serves patient and community needs.
Affiliation(s)
- Ceilidh Welsh: Department of Oncology, University of Cambridge, Cambridge, UK
- Susana Román García: Centre for Discovery Brain Sciences, College of Medicine & Veterinary Medicine, Biomedical Sciences, University of Edinburgh, UK
- Gillian C. Barnett: Addenbrooke's Hospital, Cambridge University Hospitals, Hills Road, Cambridge, UK
- Raj Jena: Addenbrooke's Hospital, Cambridge University Hospitals, Hills Road, Cambridge, UK
14. Kostick Quenet K, Ayaz SS. Limitations of Patient-Physician Co-Reasoning in AI-Driven Clinical Decision Support Systems. Am J Bioeth 2024; 24:97-99. [PMID: 39225988; DOI: 10.1080/15265161.2024.2377124]
15. Palaniappan K, Lin EYT, Vogel S, Lim JCW. Gaps in the Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector and Key Recommendations. Healthcare (Basel) 2024; 12:1730. [PMID: 39273754; PMCID: PMC11394803; DOI: 10.3390/healthcare12171730]
Abstract
Artificial Intelligence (AI) has shown remarkable potential to revolutionise healthcare by enhancing diagnostics, improving treatment outcomes, and streamlining administrative processes. In the global regulatory landscape, several countries are working on regulating AI in healthcare. There are five key regulatory issues that need to be addressed: (i) data security and protection-measures to cover the "digital health footprints" left unknowingly by patients when they access AI in health services; (ii) data quality-availability of safe and secure data and more open database sources for AI, algorithms, and datasets to ensure equity and prevent demographic bias; (iii) validation of algorithms-mapping of the explainability and causability of the AI system; (iv) accountability-whether this lies with the healthcare professional, healthcare organisation, or the personified AI algorithm; (v) ethics and equitable access-whether fundamental rights of people are met in an ethical manner. Policymakers may need to consider the entire life cycle of AI in healthcare services and the databases that were used for the training of the AI system, along with requirements for their risk assessments to be publicly accessible for effective regulatory oversight. AI services that enhance their functionality over time need to undergo repeated algorithmic impact assessment and must also demonstrate real-time performance. Harmonising regulatory frameworks at the international level would help to resolve cross-border issues of AI in healthcare services.
Affiliation(s)
- Kavitha Palaniappan: Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
- Elaine Yan Ting Lin: Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
- Silke Vogel: Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
- John C W Lim: Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
16. Funer F, Schneider D, Heyen NB, Aichinger H, Klausen AD, Tinnemeyer S, Liedtke W, Salloch S, Bratan T. Impacts of Clinical Decision Support Systems on the Relationship, Communication, and Shared Decision-Making Between Health Care Professionals and Patients: Multistakeholder Interview Study. J Med Internet Res 2024; 26:e55717. [PMID: 39178023; PMCID: PMC11380058; DOI: 10.2196/55717]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) are increasingly being introduced into various domains of health care. Little is known so far about the impact of such systems on the health care professional-patient relationship, and there is a lack of agreement about whether and how patients should be informed about the use of CDSSs. OBJECTIVE This study aims to explore, in an empirically informed manner, the potential implications for the health care professional-patient relationship and to underline the importance of this relationship when using CDSSs for both patients and future professionals. METHODS Using a methodological triangulation, 15 medical students and 12 trainee nurses were interviewed in semistructured interviews and 18 patients were involved in focus groups between April 2021 and April 2022. All participants came from Germany. Three examples of CDSSs covering different areas of health care (ie, surgery, nephrology, and intensive home care) were used as stimuli in the study to identify similarities and differences regarding the use of CDSSs in different fields of application. The interview and focus group transcripts were analyzed using a structured qualitative content analysis. RESULTS From the interviews and focus groups analyzed, three topics were identified that interdependently address the interactions between patients and health care professionals: (1) CDSSs and their impact on the roles of and requirements for health care professionals, (2) CDSSs and their impact on the relationship between health care professionals and patients (including communication requirements for shared decision-making), and (3) stakeholders' expectations for patient education and information about CDSSs and their use. CONCLUSIONS The results indicate that using CDSSs could restructure established power and decision-making relationships between (future) health care professionals and patients. In addition, respondents expected that the use of CDSSs would involve more communication, so they anticipated an increased time commitment. The results shed new light on the existing discourse by demonstrating that the anticipated impact of CDSSs on the health care professional-patient relationship appears to stem less from the function of a CDSS and more from its integration in the relationship. Therefore, the anticipated effects on the relationship between health care professionals and patients could be specifically addressed in patient information about the use of CDSSs.
Affiliation(s)
- Florian Funer: Institute for Ethics and History of Medicine, Eberhard Karls University Tuebingen, Tuebingen, Germany; Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider: Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Nils B Heyen: Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Heike Aichinger: Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Andrea Diana Klausen: Institute for Medical Informatics, University Medical Center - RWTH Aachen, Aachen, Germany
- Sara Tinnemeyer: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Wenke Liedtke: Department of Social Work, Protestant University of Applied Sciences Rhineland-Westphalia-Lippe, Bochum, Germany; Ethics and its Didactics, Faculty of Theology, University of Greifswald, Greifswald, Germany
- Sabine Salloch: Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Tanja Bratan: Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
17. Nair M, Svedberg P, Larsson I, Nygren JM. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLoS One 2024; 19:e0305949. [PMID: 39121051; PMCID: PMC11315296; DOI: 10.1371/journal.pone.0305949]
Abstract
Implementation of artificial intelligence systems in healthcare is challenging. Understanding the barriers and implementation strategies can impact their adoption and allows for better anticipation and planning. This study's objective was to create a detailed inventory of barriers to and strategies for AI implementation in healthcare, to support advancements in implementation methods and processes. A sequential explanatory mixed-method design was used. Firstly, scoping reviews and systematic literature reviews were identified using PubMed. Selected studies included empirical cases of AI implementation and use in clinical practice. As the reviews were deemed insufficient to fulfil the aim of the study, data collection shifted to the primary studies included in those reviews. The primary studies were screened by title and abstract, and thereafter read in full text. Data on barriers to and strategies for AI implementation were then extracted from the included articles, thematically coded by inductive analysis, and summarized. Subsequently, a directed qualitative content analysis of 69 interviews with healthcare leaders and healthcare professionals confirmed and extended the results from the literature review. Thirty-eight empirical cases from the six identified scoping and literature reviews met the inclusion and exclusion criteria. Barriers to and strategies for AI implementation were grouped under three phases of implementation (planning, implementing, and sustaining the use) and were categorized into eleven concepts: Leadership, Buy-in, Change management, Engagement, Workflow, Finance and human resources, Legal, Training, Data, Evaluation and monitoring, and Maintenance. Ethics emerged as a twelfth concept through the qualitative analysis of the interviews. This study illustrates the inherent challenges and useful strategies in implementing AI in healthcare practice. Future research should explore various aspects of leadership, collaboration and contracts among key stakeholders, legal strategies surrounding clinicians' liability, solutions to ethical dilemmas, and infrastructure for efficient integration of AI in workflows, and should define decision points in the implementation process.
Affiliation(s)
- Monika Nair: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Ingrid Larsson: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M. Nygren: School of Health and Welfare, Halmstad University, Halmstad, Sweden
18. Armitage RC. Digital health technologies: Compounding the existing ethical challenges of the 'right' not to know. J Eval Clin Pract 2024; 30:774-779. [PMID: 38493485; DOI: 10.1111/jep.13980]
Abstract
INTRODUCTION Doctors hold a prima facie duty to respect the autonomy of their patients. This manifests as the patient's 'right' not to know when patients wish to remain unaware of medical information regarding their health, and poses ethical challenges for good medical practice. This paper explores how the emergence of digital health technologies might impact upon the patient's 'right' not to know. METHOD The capabilities of digital health technologies are surveyed and ethical implications of their effects on the 'right' not to know are explored. FINDINGS Digital health technologies are increasingly collecting, processing and presenting medical data as clinically useful information, which simultaneously presents large opportunities for improved health outcomes while compounding the existing ethical challenges generated by the patient's 'right' not to know. CONCLUSION These digital tools should be designed to include functionality that mitigates these ethical challenges, and allows the preservation of their user's autonomy with regard to the medical information they wish to learn and not learn about.
Affiliation(s)
- Richard C Armitage: Academic Unit of Population and Lifespan Sciences, School of Medicine, University of Nottingham, Nottingham, UK
19. Abukhadijah HJ, Nashwan AJ. Transforming Hospital Quality Improvement Through Harnessing the Power of Artificial Intelligence. Glob J Qual Saf Healthc 2024; 7:132-139. [PMID: 39104802; PMCID: PMC11298043; DOI: 10.36401/jqsh-24-4]
Abstract
This policy analysis focuses on harnessing the power of artificial intelligence (AI) in hospital quality improvement to transform quality and patient safety. It examines the application of AI at the two following fundamental levels: (1) diagnostic and treatment and (2) clinical operations. AI applications in diagnostics directly impact patient care and safety. At the same time, AI indirectly influences patient safety at the clinical operations level by streamlining (1) operational efficiency, (2) risk assessment, (3) predictive analytics, (4) quality indicators reporting, and (5) staff training and education. The challenges and future perspectives of AI application in healthcare, encompassing technological, ethical, and other considerations, are also critically analyzed.
Affiliation(s)
- Abdulqadir J. Nashwan: Nursing & Midwifery Research Department, Hamad Medical Corporation, Doha, Qatar; Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
20. Federico CA, Trotsyuk AA. Biomedical Data Science, Artificial Intelligence, and Ethics: Navigating Challenges in the Face of Explosive Growth. Annu Rev Biomed Data Sci 2024; 7:1-14. [PMID: 38598860; DOI: 10.1146/annurev-biodatasci-102623-104553]
Abstract
Advances in biomedical data science and artificial intelligence (AI) are profoundly changing the landscape of healthcare. This article reviews the ethical issues that arise with the development of AI technologies, including threats to privacy, data security, consent, and justice, as they relate to donors of tissue and data. It also considers broader societal obligations, including the importance of assessing the unintended consequences of AI research in biomedicine. In addition, this article highlights the challenge of rapid AI development against the backdrop of disparate regulatory frameworks, calling for a global approach to address concerns around data misuse, unintended surveillance, and the equitable distribution of AI's benefits and burdens. Finally, a number of potential solutions to these ethical quandaries are offered. Namely, the merits of advocating for a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem, are discussed.
Affiliation(s)
- Carole A Federico: Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
- Artem A Trotsyuk: Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
21. Muralidharan V, Schamroth J, Youssef A, Celi LA, Daneshjou R. Applied artificial intelligence for global child health: Addressing biases and barriers. PLOS Digit Health 2024; 3:e0000583. [PMID: 39172772; PMCID: PMC11340888; DOI: 10.1371/journal.pdig.0000583]
Abstract
Given the potential benefits of artificial intelligence and machine learning (AI/ML) within healthcare, it is critical to consider how these technologies can be deployed in pediatric research and practice. Currently, healthcare AI/ML has not yet adapted to the specific technical considerations related to pediatric data, nor adequately addressed the specific vulnerabilities of children and young people (CYP) in relation to AI. While the greatest burden of disease in CYP is firmly concentrated in lower- and middle-income countries (LMICs), existing applied pediatric AI/ML efforts are concentrated in a small number of high-income countries (HICs). In LMICs, use-cases remain primarily in the proof-of-concept stage. This narrative review identifies a number of intersecting challenges that pose barriers to effective AI/ML for CYP globally and explores the shifts needed to make progress across multiple domains. Child-specific technical considerations throughout the AI/ML lifecycle have been largely overlooked thus far, yet these can be critical to model effectiveness. Governance concerns are paramount, with suitable national and international frameworks and guidance required to enable the safe and responsible deployment of advanced technologies impacting the care of CYP and using their data. An ambitious vision for child health demands that the potential benefits of AI/ML are realized universally through greater international collaboration, capacity building, strong oversight, and ultimately diffusing the AI/ML locus of power to empower researchers and clinicians globally. To ensure that AI/ML systems do not exacerbate inequalities in pediatric care, teams researching and developing these technologies in LMICs must ensure that AI/ML research is inclusive of the needs and concerns of CYP and their caregivers. A broad, interdisciplinary, and human-centered approach to AI/ML is essential for developing tools for healthcare workers delivering care, such that the creation and deployment of ML is grounded in local systems, cultures, and clinical practice. Decisions to invest in developing and testing pediatric AI/ML in resource-constrained settings must always be part of a broader evaluation of the overall needs of a healthcare system, considering the critical building blocks underpinning effective, sustainable, and cost-efficient healthcare delivery for CYP.
Affiliation(s)
- Vijaytha Muralidharan: Department of Dermatology, Stanford University, Stanford, California, United States of America
- Joel Schamroth: Faculty of Population Health Sciences, University College London, London, United Kingdom
- Alaa Youssef: Stanford Center for Artificial Intelligence in Medicine and Imaging, Department of Radiology, Stanford University, Stanford, California, United States of America
- Leo A. Celi: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Roxana Daneshjou: Department of Dermatology, Stanford University, Stanford, California, United States of America; Department of Biomedical Data Science, Stanford University, Stanford, California, United States of America
22
|
Burti S, Banzato T, Coghlan S, Wodzinski M, Bendazzoli M, Zotti A. Artificial intelligence in veterinary diagnostic imaging: Perspectives and limitations. Res Vet Sci 2024; 175:105317. [PMID: 38843690 DOI: 10.1016/j.rvsc.2024.105317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2024] [Revised: 05/22/2024] [Accepted: 05/29/2024] [Indexed: 06/17/2024]
Abstract
The field of veterinary diagnostic imaging is undergoing significant transformation with the integration of artificial intelligence (AI) tools. This manuscript provides an overview of the current state and future prospects of AI in veterinary diagnostic imaging. The manuscript delves into various applications of AI across different imaging modalities, such as radiology, ultrasound, computed tomography, and magnetic resonance imaging. Examples of AI applications in each modality are provided, ranging from orthopaedics to internal medicine, cardiology, and more. Notable studies are discussed, demonstrating AI's potential for improved accuracy in detecting and classifying various abnormalities. The ethical considerations of using AI in veterinary diagnostics are also explored, highlighting the need for transparent AI development, accurate training data, awareness of the limitations of AI models, and the importance of maintaining human expertise in the decision-making process. The manuscript underscores the significance of AI as a decision support tool rather than a replacement for human judgement. In conclusion, this comprehensive manuscript offers an assessment of the current landscape and future potential of AI in veterinary diagnostic imaging. It provides insights into the benefits and challenges of integrating AI into clinical practice while emphasizing the critical role of ethics and human expertise in ensuring the wellbeing of veterinary patients.
Collapse
Affiliation(s)
- Silvia Burti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy.
| | - Tommaso Banzato
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
| | - Simon Coghlan
- School of Computing and Information Systems, Centre for AI and Digital Ethics, Australian Research Council Centre of Excellence for Automated Decision-Making and Society, University of Melbourne, 3052 Melbourne, Australia
| | - Marek Wodzinski
- Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Krakow, 30059 Kraków, Poland; Information Systems Institute, University of Applied Sciences - Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland
| | - Margherita Bendazzoli
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
| | - Alessandro Zotti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
| |
Collapse
|
23
|
Hilbig A. Trainee Focus debate: Artificial intelligence will have a negative impact on emergency medicine. Emerg Med Australas 2024; 36:639-640. [PMID: 39013801 DOI: 10.1111/1742-6723.14460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2024] [Accepted: 06/11/2024] [Indexed: 07/18/2024]
Affiliation(s)
- Adelene Hilbig
- Emergency Department, Alice Springs Hospital, The Gap, Northern Territory, Australia
| |
Collapse
|
24
|
Chatwal MS, Camacho P, Symington B, Rosenberg A, Hinyard L, Chavez Mac Gregor M, Gallagher C, El-Jawahri A, McGinnis M, Lee RT. Ethics of Patient-Clinician Boundaries in Oncology: Communication Strategies for Promoting Clinician Well-Being and Quality Patient Care. JCO Oncol Pract 2024; 20:1016-1020. [PMID: 38484207 DOI: 10.1200/op.23.00650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 02/07/2024] [Accepted: 02/15/2024] [Indexed: 07/18/2024] Open
Affiliation(s)
| | - Polo Camacho
- American Society of Clinical Oncology, Alexandria, VA
| | | | | | | | | | | | | | | | | |
Collapse
|
25
|
Lukkien DRM, Ipakchian Askari S, Stolwijk NE, Hofstede BM, Nap HH, Boon WPC, Peine A, Moors EHM, Minkman MMN. Making Co-Design More Responsible: Case Study on the Development of an AI-Based Decision Support System in Dementia Care. JMIR Hum Factors 2024; 11:e55961. [PMID: 39083768 PMCID: PMC11325107 DOI: 10.2196/55961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2023] [Accepted: 06/02/2024] [Indexed: 08/02/2024] Open
Abstract
BACKGROUND Emerging technologies such as artificial intelligence (AI) require an early-stage assessment of potential societal and ethical implications to increase their acceptability, desirability, and sustainability. This paper explores and compares 2 of these assessment approaches: the responsible innovation (RI) framework originating from technology studies and the co-design approach originating from design studies. While the RI framework has been introduced to guide early-stage technology assessment through anticipation, inclusion, reflexivity, and responsiveness, co-design is a commonly accepted approach in the development of technologies to support the care for older adults with frailty. However, there is limited understanding about how co-design contributes to the anticipation of implications. OBJECTIVE This paper empirically explores how the co-design process of an AI-based decision support system (DSS) for dementia caregivers is complemented by explicit anticipation of implications. METHODS This case study investigated an international collaborative project that focused on the co-design, development, testing, and commercialization of a DSS that is intended to provide actionable information to formal caregivers of people with dementia. In parallel to the co-design process, an RI exploration took place, which involved examining project members' viewpoints on both positive and negative implications of using the DSS, along with strategies to address these implications. Results from the co-design process and RI exploration were analyzed and compared. In addition, retrospective interviews were held with project members to reflect on the co-design process and RI exploration. RESULTS Our results indicate that, when involved in exploring requirements for the DSS, co-design participants naturally raised various implications and conditions for responsible design and deployment: protecting privacy, preventing cognitive overload, providing transparency, empowering caregivers to be in control, safeguarding accuracy, and training users. However, when comparing the co-design results with insights from the RI exploration, we found limitations to the co-design results, for instance, regarding the specification, interrelatedness, and context dependency of implications and strategies to address implications. CONCLUSIONS This case study shows that a co-design process that focuses on opportunities for innovation rather than balancing attention for both positive and negative implications may result in knowledge gaps related to social and ethical implications and how they can be addressed. In the pursuit of responsible outcomes, co-design facilitators could broaden their scope and reconsider the specific implementation of the process-oriented RI principles of anticipation and inclusion.
Collapse
Affiliation(s)
- Dirk R M Lukkien
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Sima Ipakchian Askari
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | | | - Bob M Hofstede
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Henk Herman Nap
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Wouter P C Boon
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Alexander Peine
- Faculty of Humanities, Open University of The Netherlands, Heerlen, Netherlands
| | - Ellen H M Moors
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Mirella M N Minkman
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Tilburg Institute for Advanced Studies School for Business and Society, Tilburg University, Tilburg, Netherlands
| |
Collapse
|
26
|
Lukkien DRM, Stolwijk NE, Ipakchian Askari S, Hofstede BM, Nap HH, Boon WPC, Peine A, Moors EHM, Minkman MMN. AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation. JMIR Nurs 2024; 7:e55962. [PMID: 39052315 PMCID: PMC11310645 DOI: 10.2196/55962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2024] [Revised: 04/16/2024] [Accepted: 05/24/2024] [Indexed: 07/27/2024] Open
Abstract
BACKGROUND Although the use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), can help sustain and improve the quality and efficiency of care, their deployment creates ethical and social challenges. In recent years, a growing prevalence of high-level guidelines and frameworks for responsible AI innovation has been observed. However, few studies have specified the responsible embedding of AI-based technologies, such as AI-DSSs, in specific contexts, such as the nursing process in long-term care (LTC) for older adults. OBJECTIVE Prerequisites for responsible AI-assisted decision-making in nursing practice were explored from the perspectives of nurses and other professional stakeholders in LTC. METHODS Semistructured interviews were conducted with 24 care professionals in Dutch LTC, including nurses, care coordinators, data specialists, and care centralists. A total of 2 imaginary scenarios about AI-DSSs were developed beforehand and used to enable participants to articulate their expectations regarding the opportunities and risks of AI-assisted decision-making. In addition, 6 high-level principles for responsible AI were used as probing themes to evoke further consideration of the risks associated with using AI-DSSs in LTC. Furthermore, the participants were asked to brainstorm possible strategies and actions in the design, implementation, and use of AI-DSSs to address or mitigate these risks. A thematic analysis was performed to identify the opportunities and risks of AI-assisted decision-making in nursing practice and the associated prerequisites for responsible innovation in this area. RESULTS The stance of care professionals on the use of AI-DSSs is not a matter of purely positive or negative expectations but rather a nuanced interplay of positive and negative elements that lead to a weighted perception of the prerequisites for responsible AI-assisted decision-making. Both opportunities and risks were identified in relation to the early identification of care needs, guidance in devising care strategies, shared decision-making, and the workload and work experience of caregivers. To optimally balance the opportunities and risks of AI-assisted decision-making, seven categories of prerequisites for responsible AI-assisted decision-making in nursing practice were identified: (1) regular deliberation on data collection; (2) a balanced proactive nature of AI-DSSs; (3) incremental advancements aligned with trust and experience; (4) customization for all user groups, including clients and caregivers; (5) measures to counteract bias and narrow perspectives; (6) human-centric learning loops; and (7) the routinization of using AI-DSSs. CONCLUSIONS The opportunities of AI-assisted decision-making in nursing practice could turn into drawbacks depending on the specific shaping of the design and deployment of AI-DSSs. Therefore, we recommend considering the responsible use of AI-DSSs as a balancing act. Moreover, considering the interrelatedness of the identified prerequisites, we call for various actors, including developers and users of AI-DSSs, to cohesively address the different factors important to the responsible embedding of AI-DSSs in practice.
Collapse
Affiliation(s)
- Dirk R M Lukkien
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | | | - Sima Ipakchian Askari
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Bob M Hofstede
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Henk Herman Nap
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Wouter P C Boon
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Alexander Peine
- Faculty of Humanities, Open University of The Netherlands, Heerlen, Netherlands
| | - Ellen H M Moors
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
| | - Mirella M N Minkman
- Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands
- TIAS School for Business and Society, Tilburg University, Tilburg, Netherlands
| |
Collapse
|
27
|
Hesjedal MB, Lysø EH, Solbjør M, Skolbekken JA. Valuing good health care: How medical doctors, scientists and patients relate ethical challenges with artificial intelligence decision-making support tools in prostate cancer diagnostics to good health care. SOCIOLOGY OF HEALTH & ILLNESS 2024. [PMID: 39037701 DOI: 10.1111/1467-9566.13818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Accepted: 06/24/2024] [Indexed: 07/23/2024]
Abstract
Artificial intelligence (AI) is increasingly used in health care to improve diagnostics and treatment. Decision-making tools intended to help professionals in diagnostic processes are developed in a variety of medical fields. Despite the imagined benefits, AI in health care is contested. Scholars point to ethical and social issues related to the development, implementation, and use of AI in diagnostics. Here, we investigate how three relevant groups construct ethical challenges with AI decision-making tools in prostate cancer (PCa) diagnostics: scientists developing AI decision support tools for interpreting MRI scans for PCa, medical doctors working with PCa and PCa patients. This qualitative study is based on participant observation and interviews with the abovementioned actors. The analysis focuses on how each group draws on their understanding of 'good health care' when discussing ethical challenges, and how they mobilise different registers of valuing in this process. Our theoretical approach is inspired by scholarship on evaluation and justification. We demonstrate how ethical challenges in this area are conceptualised, weighted and negotiated among these participants as processes of valuing good health care and compare their perspectives.
Collapse
Affiliation(s)
- Maria Bårdsen Hesjedal
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
| | - Emilie Hybertsen Lysø
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
| | - Marit Solbjør
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
| | - John-Arne Skolbekken
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
28
|
Haltaufderheide J, Ranisch R. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). NPJ Digit Med 2024; 7:183. [PMID: 38977771 PMCID: PMC11231310 DOI: 10.1038/s41746-024-01157-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Accepted: 05/29/2024] [Indexed: 07/10/2024] Open
Abstract
With the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare. Despite potential benefits, researchers have underscored various ethical implications. While individual instances have garnered attention, a systematic and comprehensive overview of practical applications currently researched and the ethical issues connected to them is lacking. Against this background, this work maps the ethical landscape surrounding the current deployment of LLMs in medicine and healthcare through a systematic review. Electronic databases and preprint servers were queried using a comprehensive search strategy which generated 796 records. Studies were screened and extracted following a modified rapid review approach. Methodological quality was assessed using a hybrid approach. For 53 records, a meta-aggregative synthesis was performed. Four general fields of applications emerged, showcasing a dynamic exploration phase. Advantages of using LLMs are attributed to their capacity for data analysis, information provisioning, support in decision-making, mitigating information loss, and enhancing information accessibility. However, our study also identifies recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy. A distinctive concern is the tendency to produce harmful or convincing but inaccurate content. Calls for ethical guidance and human oversight are recurrent. We suggest that the ethical guidance debate should be reframed to focus on defining what constitutes acceptable human oversight across the spectrum of applications. This involves considering the diversity of settings, varying potentials for harm, and different acceptable thresholds for performance and certainty in healthcare. Additionally, critical inquiry is needed to evaluate the necessity and justification of LLMs' current experimental use.
Collapse
Affiliation(s)
- Joschka Haltaufderheide
- Faculty of Health Sciences Brandenburg, University of Potsdam, Am Mühlenberg 9, Potsdam, 14476, Germany
| | - Robert Ranisch
- Faculty of Health Sciences Brandenburg, University of Potsdam, Am Mühlenberg 9, Potsdam, 14476, Germany.
| |
Collapse
|
29
|
Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back. Pflugers Arch 2024:10.1007/s00424-024-02984-3. [PMID: 38969841 DOI: 10.1007/s00424-024-02984-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Revised: 04/30/2024] [Accepted: 06/24/2024] [Indexed: 07/07/2024]
Abstract
Artificial intelligence systems (ai-systems) (e.g. machine learning, generative artificial intelligence), in healthcare and medicine, have been received with hopes of better care quality, more efficiency, lower care costs, etc. Simultaneously, these systems have been met with reservations regarding their impacts on stakeholders' privacy, on changing power dynamics, on systemic biases, etc. Fortunately, healthcare and medicine have been guided by a multitude of ethical principles, frameworks, or approaches, which also guide the use of ai-systems in healthcare and medicine, in one form or another. Nevertheless, in this article, I argue that most of these approaches are inspired by a local isolationist view on ai-systems, here exemplified by the principlist approach. Despite positive contributions to laying out the ethical landscape of ai-systems in healthcare and medicine, such ethics approaches are too focused on a specific local healthcare and medical setting, be it a particular care relationship, a particular care organisation, or a particular society or region. By doing so, they lose sight of the global impacts ai-systems have, especially environmental impacts and related social impacts, such as increased health risks. To meet this gap, this article presents a global approach to the ethics of ai-systems in healthcare and medicine which consists of five levels of ethical impacts and analysis: individual-relational, organisational, societal, global, and historical. As such, this global approach incorporates the local isolationist view by integrating it in a wider landscape of ethical consideration so as to ensure ai-systems meet the needs of everyone everywhere.
Collapse
Affiliation(s)
- Tijs Vandemeulebroucke
- Bonn Sustainable AI Lab, Institut für Wissenschaft und Ethik, Universität Bonn-University of Bonn, Bonner Talweg 57, 53113, Bonn, Germany.
| |
Collapse
|
30
|
Awuah WA, Adebusoye FT, Wellington J, David L, Salam A, Weng Yee AL, Lansiaux E, Yarlagadda R, Garg T, Abdul-Rahman T, Kalmanovich J, Miteu GD, Kundu M, Mykolaivna NI. Recent Outcomes and Challenges of Artificial Intelligence, Machine Learning, and Deep Learning in Neurosurgery. World Neurosurg X 2024; 23:100301. [PMID: 38577317 PMCID: PMC10992893 DOI: 10.1016/j.wnsx.2024.100301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 07/23/2023] [Accepted: 02/21/2024] [Indexed: 04/06/2024] Open
Abstract
Neurosurgeons receive extensive technical training, which equips them with the knowledge and skills to specialise in various fields and manage the massive amounts of information and decision-making required throughout the various stages of neurosurgery, including preoperative, intraoperative, and postoperative care and recovery. Over the past few years, artificial intelligence (AI) has become more useful in neurosurgery. AI has the potential to improve patient outcomes by augmenting the capabilities of neurosurgeons and ultimately improving diagnostic and prognostic outcomes as well as decision-making during surgical procedures. By incorporating AI into both interventional and non-interventional therapies, neurosurgeons may provide the best care for their patients. AI, machine learning (ML), and deep learning (DL) have made significant progress in the field of neurosurgery. These cutting-edge methods have enhanced patient outcomes, reduced complications, and improved surgical planning.
Collapse
Affiliation(s)
| | | | - Jack Wellington
- Cardiff University School of Medicine, Cardiff University, Wales, United Kingdom
| | - Lian David
- Norwich Medical School, University of East Anglia, United Kingdom
| | - Abdus Salam
- Department of Surgery, Khyber Teaching Hospital, Peshawar, Pakistan
| | | | | | - Rohan Yarlagadda
- Rowan University School of Osteopathic Medicine, Stratford, NJ, USA
| | - Tulika Garg
- Government Medical College and Hospital Chandigarh, India
| | | | | | | | - Mrinmoy Kundu
- Institute of Medical Sciences and SUM Hospital, Bhubaneswar, India
| | | |
Collapse
|
31
|
Shlobin NA, Ward M, Shah HA, Brown EDL, Sciubba DM, Langer D, D'Amico RS. Ethical Incorporation of Artificial Intelligence into Neurosurgery: A Generative Pretrained Transformer Chatbot-Based, Human-Modified Approach. World Neurosurg 2024; 187:e769-e791. [PMID: 38723944 DOI: 10.1016/j.wneu.2024.04.165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Revised: 04/25/2024] [Accepted: 04/26/2024] [Indexed: 05/31/2024]
Abstract
INTRODUCTION Artificial intelligence (AI) is increasingly used in neurosurgery. Generative pretrained transformers (GPTs) have been of particular interest. However, ethical concerns regarding the incorporation of AI into the field remain underexplored. We delineate key ethical considerations using a novel GPT-based, human-modified approach, synthesize the most common considerations, and present an ethical framework for the involvement of AI in neurosurgery. METHODS GPT-4, ChatGPT, Bing Chat/Copilot, You, Perplexity.ai, and Google Bard were queried with the prompt "How can artificial intelligence be ethically incorporated into neurosurgery?". Then, a layered GPT-based thematic analysis was performed. The authors synthesized the results into considerations for the ethical incorporation of AI into neurosurgery. Separate Pareto analyses with 20% and 10% thresholds were conducted to determine salient themes. The authors refined these salient themes. RESULTS Twelve key ethical considerations focusing on stakeholders, clinical implementation, and governance were identified. Refinement of the Pareto analysis of the top 20% most salient themes in the aggregated GPT outputs yielded 10 key considerations. Additionally, from the top 10% most salient themes, 5 considerations were retrieved. An ethical framework for the use of AI in neurosurgery was developed. CONCLUSIONS It is critical to address the ethical considerations associated with the use of AI in neurosurgery. The framework described in this manuscript may facilitate the integration of AI into neurosurgery, benefitting both patients and neurosurgeons alike. We urge neurosurgeons to use AI only for validated purposes and caution against automatic adoption of its outputs without neurosurgeon interpretation.
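As an illustration of the Pareto-style salience analysis described in this abstract, here is a minimal sketch in Python. The theme names and counts are invented, and the threshold is given one plausible reading (keep the most frequent themes until they jointly account for the stated share of all mentions); the authors' actual procedure may differ.

```python
from collections import Counter

def pareto_themes(counts: Counter, share: float):
    """Keep the most frequent themes until they jointly account for
    `share` of all mentions, one reading of a Pareto threshold."""
    total = sum(counts.values())
    kept, cumulative = [], 0
    for theme, n in counts.most_common():
        kept.append(theme)
        cumulative += n
        if cumulative / total >= share:
            break
    return kept

# Hypothetical theme counts aggregated across chatbot outputs
counts = Counter({"patient privacy": 3, "informed consent": 3,
                  "algorithmic bias": 2, "accountability": 2,
                  "transparency": 2, "human oversight": 2,
                  "equity of access": 2, "cost": 2,
                  "liability": 1, "trust": 1})
print(pareto_themes(counts, share=0.20))  # broader 20% cut
print(pareto_themes(counts, share=0.10))  # stricter 10% cut
```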
Collapse
Affiliation(s)
- Nathan A Shlobin
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA.
| | - Max Ward
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| | - Harshal A Shah
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| | - Ethan D L Brown
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| | - Daniel M Sciubba
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| | - David Langer
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| | - Randy S D'Amico
- Department of Neurological Surgery, Lenox Hill Hospital/Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, New York, New York, USA
| |
Collapse
|
32
|
Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. [PMID: 39022380 PMCID: PMC11250714 DOI: 10.1007/s41649-024-00292-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 03/06/2024] [Accepted: 03/07/2024] [Indexed: 07/20/2024] Open
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.
Collapse
Affiliation(s)
| | - Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec Canada
| |
Collapse
|
33
|
Graham Y, Spencer AE, Velez GE, Herbell K. Engaging Youth Voice and Family Partnerships to Improve Children's Mental Health Outcomes. Child Adolesc Psychiatr Clin N Am 2024; 33:343-354. [PMID: 38823808 DOI: 10.1016/j.chc.2024.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 06/03/2024]
Abstract
Promoting active participation of families and youth in mental health systems of care is the cornerstone of creating a more inclusive, effective, and responsive care network. This article focuses on the inclusion of parent and youth voice in transforming our mental health care system to promote increased engagement at all levels of service delivery. Youth and parent peer support delivery models, digital innovation, and technology not only empower the individuals involved, but also have the potential to enhance the overall efficacy of the mental health care system.
Collapse
Affiliation(s)
- Yolanda Graham
- Morehouse School of Medicine, Devereux Advanced Behavioral Health, 444 Devereux Drive, Villanova, PA 19085, USA.
| | - Andrea E Spencer
- Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, 225 East Chicago Avenue, Chicago, IL 60611, USA
| | - German E Velez
- New York-Presbyterian Hospital, Weill Cornell Medical College/ Columbia University College of Physicians and Surgeons, 525 E. 68th Street, Box 140, New York, NY 10065, USA
| | - Kayla Herbell
- Martha S. Pitzer Center for Women, Children, and Youth, The Ohio State University, 1577 Neil Avenue, Columbus, OH 43210, USA
| |
Collapse
|
34
|
Freeman S, Stewart J, Kaard R, Ouliel E, Goudie A, Dwivedi G, Akhlaghi H. Health consumers' ethical concerns towards artificial intelligence in Australian emergency departments. Emerg Med Australas 2024. [PMID: 38890798 DOI: 10.1111/1742-6723.14449] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2023] [Revised: 04/10/2024] [Accepted: 05/15/2024] [Indexed: 06/20/2024]
Abstract
OBJECTIVES To investigate health consumers' ethical concerns towards the use of artificial intelligence (AI) in EDs. METHODS Qualitative semi-structured interviews were conducted with health consumers recruited via health consumer networks and community groups between January and August 2022. RESULTS We interviewed 28 health consumers about their perceptions of the ethical use of AI in EDs. The results discussed in this paper highlight the challenges and barriers to the effective and ethical implementation of AI from the perspective of Australian health consumers. Most health consumers are more likely to support AI health tools in EDs if they continue to be involved in the decision-making process. There is considerably more approval of AI tools that support clinical decision-making than of those that replace it. There is mixed sentiment about the acceptability of AI tools influencing clinical decision-making and judgement. Health consumers are mostly supportive of the use of their data to train and develop AI tools but are concerned about who has access. Addressing bias and discrimination in AI is an important consideration for some health consumers. Robust regulation and governance are critical for health consumers to trust and accept the use of AI. CONCLUSION Health consumers view AI as an emerging technology that they want to see comprehensively regulated to ensure it functions safely and securely within EDs. Without consideration given to the ethical design, implementation and use of AI technologies, health consumers' trust in and acceptance of these tools will be limited.
Collapse
Affiliation(s)
- Sam Freeman
- Department of Emergency Medicine, St Vincent's Hospital, Melbourne, Victoria, Australia
- Centre for Digital Transformation of Health, The University of Melbourne, Melbourne, Victoria, Australia
| | - Jonathon Stewart
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Cardiovascular Disease and Diabetes Program, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
| | - Rebecca Kaard
- School of Medicine, The University of Notre Dame, Fremantle, Western Australia, Australia
| | - Eden Ouliel
- School of Medicine, The University of Notre Dame, Fremantle, Western Australia, Australia
| | - Adrian Goudie
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Department of Emergency Medicine, Fiona Stanley Hospital, Perth, Western Australia, Australia
| | - Girish Dwivedi
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Cardiovascular Disease and Diabetes Program, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Department of Cardiology, Fiona Stanley Hospital, Perth, Western Australia, Australia
| | - Hamed Akhlaghi
- Department of Emergency Medicine, St Vincent's Hospital, Melbourne, Victoria, Australia
| |
Collapse
|
35
|
Mahesh N, Devishamani CS, Raghu K, Mahalingam M, Bysani P, Chakravarthy AV, Raman R. Advancing healthcare: the role and impact of AI and foundation models. Am J Transl Res 2024; 16:2166-2179. [PMID: 39006256 PMCID: PMC11236664 DOI: 10.62347/wqwv9220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Accepted: 05/06/2024] [Indexed: 07/16/2024]
Abstract
BACKGROUND The integration of artificial intelligence (AI) into the healthcare domain is a monumental shift with profound implications for diagnostics, medical interventions, and the overall structure of healthcare systems. PURPOSE This study explores the transformative journey of foundation AI models in healthcare, shedding light on the challenges, ethical considerations, and vast potential they hold for improving patient outcomes and system efficiency. Notably, in this investigation we observe a relatively slow adoption of AI within the public sector of healthcare. The evolution of AI in healthcare is unparalleled, especially its prowess in revolutionizing diagnostic processes. RESULTS This research showcases how these foundational models can unravel hidden patterns within complex medical datasets. The impact of AI reverberates through medical interventions, encompassing pathology, imaging, genomics, and personalized healthcare, positioning AI as a cornerstone in the quest for precision medicine. The paper delves into the applications of generative AI models in critical facets of healthcare, including decision support, medical imaging, and the prediction of protein structures. The study meticulously evaluates various AI models, such as transfer learning, RNNs, and autoencoders, and their roles in the healthcare landscape. A pioneering concept introduced in this exploration is that of General Medical AI (GMAI), advocating for the development of reusable and flexible AI models. CONCLUSION The review article discusses how AI can revolutionize healthcare by stressing the significance of transparency, fairness, and accountability in AI applications, particularly regarding patient data privacy and biases. By tackling these issues and suggesting a governance structure, the article adds to the conversation about AI integration in healthcare environments.
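Because the abstract surveys model families such as autoencoders without defining them, a compact sketch may help. This is a generic illustration, not the paper's method: the feature width, latent size, and random stand-in data are all assumptions.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal dense autoencoder: compresses inputs to a small latent
    code and reconstructs them (illustrative dimensions only)."""
    def __init__(self, n_features=32, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(64, 32)  # stand-in for a batch of patient feature vectors
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):     # train to reconstruct the inputs
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()
```

The learned latent code can then serve downstream tasks (anomaly detection, denoising, dimensionality reduction), which is why autoencoders recur in reviews of medical AI.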
Collapse
Affiliation(s)
- Nandhini Mahesh
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| | - Chitralekha S Devishamani
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| | - Keerthana Raghu
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| | - Maanasi Mahalingam
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| | - Pragathi Bysani
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| | | | - Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
| |
Collapse
|
36
|
Movahed M, Bilderback S. Evaluating the readiness of healthcare administration students to utilize AI for sustainable leadership: a survey study. J Health Organ Manag 2024; ahead-of-print. [PMID: 38858220 DOI: 10.1108/jhom-12-2023-0385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2024]
Abstract
PURPOSE This paper explores how healthcare administration students perceive the integration of Artificial Intelligence (AI) in healthcare leadership, mainly focusing on the sustainability aspects involved. It aims to identify gaps in current educational curricula and suggests enhancements to better prepare future healthcare professionals for the evolving demands of AI-driven healthcare environments. DESIGN/METHODOLOGY/APPROACH This study utilized a cross-sectional survey design to understand healthcare administration students' perceptions regarding integrating AI in healthcare leadership. An online questionnaire, developed from an extensive literature review covering fundamental AI knowledge and its role in sustainable leadership, was distributed to students majoring and minoring in healthcare administration. This methodological approach garnered participation from 62 students, providing insights and perspectives crucial for the study's objectives. FINDINGS The research revealed that while a significant majority of healthcare administration students (70%) recognize the potential of AI in fostering sustainable leadership in healthcare, only 30% feel adequately prepared to work in AI-integrated environments. Additionally, students were interested in learning more about AI applications in healthcare and the role of AI in sustainable leadership, underscoring the need for comprehensive AI-focused education in their curriculum. RESEARCH LIMITATIONS/IMPLICATIONS The research is limited by its focus on a single academic institution, which may not fully represent the diversity of perspectives in healthcare administration. PRACTICAL IMPLICATIONS This study highlights the need for healthcare administration curricula to incorporate AI education, aligning theoretical knowledge with practical applications, to effectively prepare future professionals for the evolving demands of AI-integrated healthcare environments. ORIGINALITY/VALUE This research paper presents insights into healthcare administration students' readiness and perspectives toward AI integration in healthcare leadership, filling a critical gap in understanding the educational needs in the evolving landscape of AI-driven healthcare.
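The headline figures above (roughly 70% of 62 respondents recognizing AI's potential, 30% feeling prepared) are point estimates from a small sample, so a quick sketch of interval estimates can convey their precision. The counts below are back-calculated from the reported percentages and are therefore approximations; statsmodels' Wilson interval is just one standard choice.

```python
from statsmodels.stats.proportion import proportion_confint

n = 62                       # respondents reported in the abstract
recognize = round(0.70 * n)  # ~43 students recognizing AI's potential
prepared = round(0.30 * n)   # ~19 students feeling adequately prepared

for label, k in [("recognize potential", recognize),
                 ("feel prepared", prepared)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k}/{n} = {k/n:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```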
Collapse
Affiliation(s)
- Mohammad Movahed
- Department of Economics, Finance, and Healthcare Administration, Valdosta State University, Valdosta, Georgia, USA
| | | |
Collapse
|
37
|
Olver IN. Ethics of artificial intelligence in supportive care in cancer. Med J Aust 2024; 220:499-501. [PMID: 38714360 DOI: 10.5694/mja2.52297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 12/22/2023] [Indexed: 05/09/2024]
|
38
|
Sulaieva O, Dudin O, Koshyk O, Panko M, Kobyliak N. Digital pathology implementation in cancer diagnostics: towards informed decision-making. Front Digit Health 2024; 6:1358305. [PMID: 38873358 PMCID: PMC11169727 DOI: 10.3389/fdgth.2024.1358305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2023] [Accepted: 05/16/2024] [Indexed: 06/15/2024] Open
Abstract
Digital pathology (DP) has become a part of the cancer healthcare system, creating additional value for cancer patients. DP implementation in clinical practice provides plenty of benefits but also harbors hidden ethical challenges affecting physician-patient relationships. This paper addresses the ethical obligation to transform the physician-patient relationship for informed and responsible decision-making when using artificial intelligence (AI)-based tools for cancer diagnostics. DP application makes it possible to improve the performance of the Human-AI Team, shifting focus from AI challenges towards the benefits of Augmented Human Intelligence (AHI). AHI enhances analytical sensitivity and empowers pathologists to deliver accurate diagnoses and assess predictive biomarkers for further personalized treatment of cancer patients. At the same time, patients' right to know about the use of AI tools, including their accuracy, strengths and limitations, measures for privacy protection, acceptance of privacy concerns, and legal protection, defines the duty of physicians to provide the relevant information about AHI-based solutions to patients and the community, building transparency, understanding and trust, respecting patients' autonomy, and empowering informed decision-making in oncology.
Collapse
Affiliation(s)
- Oksana Sulaieva
- Medical Laboratory CSD, Kyiv, Ukraine
- Endocrinology Department, Bogomolets National Medical University, Kyiv, Ukraine
| | | | | | | | - Nazarii Kobyliak
- Medical Laboratory CSD, Kyiv, Ukraine
- Endocrinology Department, Bogomolets National Medical University, Kyiv, Ukraine
| |
Collapse
|
39
|
Zondag AGM, Rozestraten R, Grimmelikhuijsen SG, Jongsma KR, van Solinge WW, Bots ML, Vernooij RWM, Haitjema S. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study. J Med Internet Res 2024; 26:e50853. [PMID: 38805702 PMCID: PMC11167322 DOI: 10.2196/50853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 03/21/2024] [Accepted: 04/16/2024] [Indexed: 05/30/2024] Open
Abstract
BACKGROUND Clinical decision support systems (CDSSs) based on routine care data, using artificial intelligence (AI), are increasingly being developed. Previous studies focused largely on the technical aspects of using AI, but the acceptability of these technologies by patients remains unclear. OBJECTIVE We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS. METHODS We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned to 1 of 4 groups: the intervention or control group of the high-risk or low-risk case. In both the high-risk and low-risk case groups, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale, with 1 indicating "strongly disagree" and 7 indicating "strongly agree," we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests and potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance. RESULTS In total, 398 patients participated. In the high-risk case, median perceived competence and integrity were lower in the intervention group than in the control group, but the differences were not statistically significant (5.8 vs 5.6; P=.16 and 6.3 vs 6.0; P=.06, respectively). However, the effect of a CDSS application on the perceived competence of the physician depended on the participant's sex (P=.03). Although no between-group differences were found in men, in women, the perception of the physician's competence and integrity was significantly lower in the intervention group than in the control group (P=.009 and P=.01, respectively). In the low-risk case, no differences in trust between the groups were found. However, increased trust in technology positively influenced the perceived benevolence and integrity in the low-risk case (P=.009 and P=.04, respectively). CONCLUSIONS We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. Trust in technology, in general, might increase the likelihood of embracing the use of CDSSs by treating professionals.
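As a sketch of the between-group comparison reported above, the snippet below runs a Mann-Whitney U test on simulated 7-point Likert ratings. The group sizes and rating distributions are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical 7-point Likert ratings of perceived physician competence
control = rng.integers(4, 8, size=100)       # physician decides alone
intervention = rng.integers(3, 8, size=100)  # physician supported by a CDSS

u, p = mannwhitneyu(control, intervention, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
print(f"medians: control {np.median(control)}, "
      f"intervention {np.median(intervention)}")
```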
Collapse
Affiliation(s)
- Anna G M Zondag
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
| | - Raoul Rozestraten
- Utrecht University School of Governance, Utrecht University, Utrecht, Netherlands
| | | | - Karin R Jongsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
| | - Wouter W van Solinge
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
| | - Michiel L Bots
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
| | - Robin W M Vernooij
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Department of Nephrology and Hypertension, University Medical Center Utrecht, Utrecht, Netherlands
| | - Saskia Haitjema
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
| |
Collapse
|
40
|
Morales-García WC, Sairitupa-Sanchez LZ, Morales-García SB, Morales-García M. Adaptation and Psychometric Properties of an Attitude toward Artificial Intelligence Scale (AIAS-4) among Peruvian Nurses. Behav Sci (Basel) 2024; 14:437. [PMID: 38920769 PMCID: PMC11200830 DOI: 10.3390/bs14060437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2024] [Revised: 05/11/2024] [Accepted: 05/14/2024] [Indexed: 06/27/2024] Open
Abstract
BACKGROUND The integration of Artificial Intelligence (AI) into various aspects of daily life has sparked growing interest in understanding public attitudes toward this technology. Despite advancements in tools to assess these perceptions, there remains a need for culturally adapted instruments, particularly in specific contexts like that of Peruvian nurses. OBJECTIVE To evaluate the psychometric properties of the AIAS-4 in a sample of Peruvian nurses. METHODS An instrumental design was employed, recruiting 200 Peruvian nurses. The AIAS-4 was culturally and linguistically adapted into Spanish as the Attitude toward Artificial Intelligence in Spanish (AIAS-S); data were analyzed using descriptive statistics, confirmatory factor analysis (CFA), and invariance tests. RESULTS The CFA confirmed a unidimensional factor structure with an excellent model fit (χ2 = 0.410, df = 1, p = 0.522, CFI = 1.00, TLI = 1.00, RMSEA = 0.00, SRMR = 0.00). The scale demonstrated high internal consistency (α = 0.94, ω = 0.91). Tests of invariance from configural to strict confirmed that the scale is stable across different demographic subgroups. CONCLUSIONS The AIAS-S proved to be a psychometrically solid tool for assessing attitudes toward AI in the context of Peruvian nurses, providing evidence of validity, reliability, and gender invariance. This study highlights the importance of having culturally adapted instruments to explore attitudes toward emerging technologies in specific groups.
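For readers unfamiliar with the statistics reported here, the following is a hedged sketch of a unidimensional CFA with an internal-consistency estimate. The item names and simulated responses are hypothetical, and semopy is only one Python option for structural equation modeling; its fit output includes the chi-square, CFI, TLI, and RMSEA indices quoted in the abstract.

```python
import numpy as np
import pandas as pd
import semopy

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated responses to four items loading on one latent attitude factor
rng = np.random.default_rng(1)
latent = rng.normal(size=200)
df = pd.DataFrame({f"aias{i}": latent + rng.normal(scale=0.5, size=200)
                   for i in range(1, 5)})

model = semopy.Model("attitude =~ aias1 + aias2 + aias3 + aias4")
model.fit(df)
print(semopy.calc_stats(model).T)  # fit indices: chi2, CFI, TLI, RMSEA, ...
print(f"Cronbach's alpha = {cronbach_alpha(df):.2f}")
```

On simulated data like this, the fit is near-perfect by construction; with real responses, the same calls report the indices against which conventional cutoffs are judged.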
Collapse
Affiliation(s)
- Wilter C. Morales-García
- Escuela de Posgrado, Universidad Peruana Unión, Lima 15457, Peru;
- Facultad de Teología, Universidad Peruana Unión, Lima 15457, Peru
- Sociedad Científica de Investigadores Adventistas, SOCIA, Universidad Peruana Unión, Lima 15457, Peru
- Club de Conquistadores, Orión, Universidad Peruana Unión, Lima 15457, Peru
| | - Liset Z. Sairitupa-Sanchez
- Escuela Profesional de Psicología, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima 15457, Peru;
| | - Sandra B. Morales-García
- Escuela Profesional de Medicina Humana, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima 15457, Peru;
| | - Mardel Morales-García
- Unidad de Salud, Escuela de posgrado, Universidad Peruana Unión, Km 19, Carretera Central, Lima 15033, Peru
| |
Collapse
|
41
|
Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024; 151:102861. [PMID: 38555850 DOI: 10.1016/j.artmed.2024.102861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Revised: 03/19/2024] [Accepted: 03/25/2024] [Indexed: 04/02/2024]
Abstract
Healthcare organizations have realized that artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and wrong workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption. This study also explores the need for a cultural shift in viewing AI not as a threat but as an enabler that can enhance healthcare delivery and create new employment opportunities while emphasizing the importance of addressing underlying operational issues. The necessity of investments beyond finance is discussed, emphasizing the importance of human capital, continuous learning, and a supportive environment for AI integration. The paper also highlights the crucial role of clear regulations in building trust, ensuring safety, and guiding the ethical use of AI, calling for coherent frameworks addressing transparency, model accuracy, data quality control, liability, and ethics. Furthermore, this paper underscores the importance of advancing AI literacy within academia to prepare future healthcare professionals for an AI-driven landscape. Through careful navigation and proactive measures addressing these challenges, the healthcare community can harness AI's transformative power responsibly and effectively, revolutionizing healthcare delivery and patient care. The paper concludes with a vision and strategic suggestions for the future of healthcare with AI, emphasizing thoughtful, responsible, and innovative engagement as the pathway to realizing its full potential to unlock immense benefits for healthcare organizations, physicians, nurses, and patients while proactively mitigating risks.
Collapse
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University (FIU), Modesto A. Maidique Campus, 11200 S.W. 8th St, RB 261B, Miami, FL 33199, United States.
| |
Collapse
|
42
|
Khalil H, Campbell F, Danial K, Pollock D, Munn Z, Welsh V, Saran A, Hoppe D, Tricco AC. Advancing the methodology of mapping reviews: A scoping review. Res Synth Methods 2024; 15:384-397. [PMID: 38169156 DOI: 10.1002/jrsm.1694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Revised: 10/30/2023] [Accepted: 11/18/2023] [Indexed: 01/05/2024]
Abstract
This scoping review aims to identify and systematically review published mapping reviews to assess their commonality and heterogeneity and determine whether additional efforts should be made to standardise methodology and reporting. The following databases were searched: Ovid MEDLINE, Embase, CINAHL, PsycINFO, the Campbell Collaboration database, Social Science Abstracts, and Library and Information Science Abstracts (LISA). Following a pilot test on a random sample of 20 citations, two team members independently completed all title and abstract screening. Ten articles were piloted at full-text screening, and then each citation was reviewed independently by two team members. Discrepancies at both stages were resolved through discussion. Following a pilot test on a random sample of five relevant full-text articles, one team member abstracted all the relevant data. Uncertainties in the data abstraction were resolved by another team member. A total of 335 articles were eligible for this scoping review and subsequently included. The number of published mapping reviews grew over the years, from 5 in 2010 to 73 in 2021. Moreover, there was significant variability in how the included mapping reviews were reported, including their research questions, a priori protocols, methodology, data synthesis, and reporting. This work has further highlighted the gaps in evidence synthesis methodologies. Further guidance developed by evidence synthesis organisations, such as JBI and Campbell, has the potential to clarify challenges experienced by researchers, given the magnitude of mapping reviews published every year.
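The dual independent screening workflow described above (two reviewers per record, with discrepancies resolved through discussion) reduces to a small reconciliation routine. The record IDs and decisions below are invented; dedicated screening platforms implement essentially the same bookkeeping.

```python
def reconcile(screener_a: dict, screener_b: dict):
    """Merge two reviewers' include/exclude decisions keyed by
    citation ID (both dicts assumed to cover the same records):
    agreements become final, discrepancies are flagged for the
    discussion step described in the review's methods."""
    agreed, discrepancies = {}, []
    for cid, decision in screener_a.items():
        if decision == screener_b[cid]:
            agreed[cid] = decision
        else:
            discrepancies.append(cid)
    return agreed, discrepancies

# Invented decisions for three hypothetical records
a = {"rec1": "include", "rec2": "exclude", "rec3": "include"}
b = {"rec1": "include", "rec2": "include", "rec3": "include"}
final, to_discuss = reconcile(a, b)
print(final)       # {'rec1': 'include', 'rec3': 'include'}
print(to_discuss)  # ['rec2'] -> resolved through discussion
```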
Collapse
Affiliation(s)
- Hanan Khalil
- La Trobe University, School of Psychology and Public Health, Department of Public Health, Melbourne, Australia
| | - Fiona Campbell
- Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
| | - Katrina Danial
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| | - Danielle Pollock
- Health Evidence Synthesis Recommendations and Impact, School of Public Health, Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, Australia
| | - Zachary Munn
- Health Evidence Synthesis Recommendations and Impact, School of Public Health, Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, Australia
| | - Vivian Welsh
- Bruyère Research Institute, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
| | | | - Dimi Hoppe
- La Trobe University, School of Psychology and Public Health, Department of Public Health, Melbourne, Australia
| | - Andrea C Tricco
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada
- Epidemiology Division and Institute for Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
| |
Collapse
|
43
|
Rajaram S, Gupta S, Medhi B. Digital medicine, intelligent medicine, and smart medication system. Indian J Pharmacol 2024; 56:159-161. [PMID: 39078177 PMCID: PMC11286089 DOI: 10.4103/ijp.ijp_501_24] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/12/2024] [Revised: 06/13/2024] [Accepted: 06/14/2024] [Indexed: 07/31/2024] Open
Affiliation(s)
| | - Shreya Gupta
- Department of Pharmacology, PGIMER, Chandigarh, India
| | - Bikash Medhi
- Department of Pharmacology, PGIMER, Chandigarh, India
| |
Collapse
|
44
|
Privitera AJ, Ng SHS, Kong APH, Weekes BS. AI and Aphasia in the Digital Age: A Critical Review. Brain Sci 2024; 14:383. [PMID: 38672032 PMCID: PMC11047933 DOI: 10.3390/brainsci14040383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Revised: 04/11/2024] [Accepted: 04/14/2024] [Indexed: 04/28/2024] Open
Abstract
Aphasiology has a long and rich tradition of contributing to our understanding of how culture, language, and social environment shape brain development and function. Recent breakthroughs in AI can transform the role of aphasiology in the digital age by leveraging speech data in all languages to model how damage to specific brain regions impacts linguistic universals such as grammar. These tools, including generative AI (ChatGPT) and natural language processing (NLP) models, could also inform practitioners working with clinical populations in the assessment and treatment of aphasia through AI-based interventions such as personalized therapy and adaptive platforms. Although these possibilities have generated enthusiasm in aphasiology, a rigorous interrogation of their limitations is necessary before AI is integrated into practice. We explain the history and first principles of reciprocity between AI and aphasiology, highlighting how lesioning neural networks opened the black box of cognitive neurolinguistic processing. We then argue that as more data from aphasia across languages become digitized and available online, deep learning will reveal hitherto unreported patterns of language processing of theoretical interest to aphasiologists. We also anticipate some problems with using AI, including language biases; cultural, ethical, and scientific limitations; misrepresentation of marginalized languages; and a lack of rigorous validation of tools. However, as these challenges are met with better governance, AI could have an equitable impact.
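To make the "lesioning" idea concrete, here is a minimal sketch (not code from the reviewed paper; the task, network size, and data are all invented): a toy network is fit on a synthetic classification task, then each hidden unit is ablated in turn and the performance drop is measured, mirroring how connectionist aphasiology probes which components carry which function.

```python
# Illustrative sketch: "lesion" a trained network by zeroing hidden units
# one at a time and measuring the performance drop. Entirely synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 200 samples, 10 input features, binary label.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

# Tiny two-layer network: fixed random hidden layer + logistic readout.
W1 = rng.normal(size=(10, 16))           # input -> 16 hidden units
hidden = np.tanh(X @ W1)

# Fit the readout with a few steps of gradient descent.
w2 = np.zeros(16)
for _ in range(500):
    p = 1 / (1 + np.exp(-(hidden @ w2)))
    w2 -= 0.1 * hidden.T @ (p - y) / len(y)

def accuracy(mask):
    """Accuracy when hidden units where mask == 0 are 'lesioned'."""
    p = 1 / (1 + np.exp(-((hidden * mask) @ w2)))
    return ((p > 0.5) == y).mean()

baseline = accuracy(np.ones(16))
for unit in range(16):
    mask = np.ones(16)
    mask[unit] = 0.0                     # ablate one unit at a time
    print(f"unit {unit:2d}: performance drop {baseline - accuracy(mask):+.3f}")
```

Units whose ablation causes the largest drop are, by this logic, the ones most implicated in the modelled function, which is the inference pattern lesion studies of neural networks borrowed from neuropsychology.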
Collapse
Affiliation(s)
- Adam John Privitera
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore;
| | - Siew Hiang Sally Ng
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore;
- Institute for Pedagogical Innovation, Research, and Excellence, Nanyang Technological University, Singapore 637335, Singapore
| | - Anthony Pak-Hin Kong
- Academic Unit of Human Communication, Learning, and Development, The University of Hong Kong, Pokfulam, Hong Kong;
- Aphasia Research and Therapy (ART) Laboratory, The University of Hong Kong, Pokfulam, Hong Kong
| | - Brendan Stuart Weekes
- Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville 3010, Australia
| |
Collapse
|
45
|
Perets O, Stagno E, Yehuda EB, McNichol M, Anthony Celi L, Rappoport N, Dorotic M. Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2024:2024.04.09.24305594. [PMID: 38680842 PMCID: PMC11046491 DOI: 10.1101/2024.04.09.24305594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 05/01/2024]
Abstract
OBJECTIVES: Biases inherent in electronic health records (EHRs), and therefore in medical artificial intelligence (AI) models, may significantly exacerbate health inequities and challenge the adoption of ethical and responsible AI in healthcare. Biases arise from multiple sources, some of which are not as well documented in the literature as others. Biases are encoded in how the data have been collected and labeled, by the implicit and unconscious biases of clinicians, or by the tools used for data processing. These biases and their encoding in healthcare records undermine the reliability of such data and bias clinical judgments and medical outcomes. Moreover, when healthcare records are used to build data-driven solutions, the biases are further exacerbated, resulting in systems that perpetuate biases and induce healthcare disparities. This literature scoping review aims to categorize the main sources of bias inherent in EHRs. METHODS: We queried PubMed and Web of Science on January 19th, 2023, for peer-reviewed sources in English, published between 2016 and 2023, using the PRISMA approach to stepwise scoping of the literature. To select the papers that empirically analyze bias in EHRs, 27 duplicates were removed from the initial yield of 430 papers and 403 studies were screened for eligibility; 196 articles were removed after title and abstract screening, and 96 articles were excluded after full-text review, resulting in a final selection of 116 articles. RESULTS: Systematic categorizations of diverse sources of bias are scarce in the literature, while the effects reported in separate studies are often convoluted and methodologically contestable. Our categorization of the published empirical evidence identified six main sources of bias: (a) bias arising from past clinical trials; (b) data-related biases arising from missing or incomplete information or poor labeling of data; human-related bias induced by (c) implicit clinician bias and (d) referral and admission bias; (e) diagnosis or risk disparity bias; and (f) biases in machinery and algorithms. CONCLUSIONS: Machine learning and data-driven solutions can potentially transform healthcare delivery, but not without limitations. The core inputs to these systems (data and human factors) currently contain several sources of bias that are poorly documented and rarely analyzed for remedies. The current evidence focuses heavily on data-related biases, while other sources are less often analyzed or remain anecdotal. However, these different sources of bias compound one another, so understanding the issues holistically requires exploring them together. While racial biases in EHRs have often been documented, other sources have been investigated less frequently (e.g., gender-related biases, sexual orientation discrimination, socially induced biases, and implicit, often unconscious, human-related cognitive biases). Moreover, some existing studies lack causal evidence: illustrating different prevalences of disease across groups does not per se prove causality. Our review shows that data-, human-, and machine-related biases are prevalent in healthcare, significantly impact healthcare outcomes and judgments, and exacerbate disparities and differential treatment. Understanding how diverse biases affect AI systems and recommendations is critical. We suggest that researchers and medical personnel develop safeguards and adopt data-driven solutions with a "bias-in-mind" approach. More empirical evidence is needed to tease out the effects of different sources of bias on health outcomes.
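As a concrete illustration of the "bias-in-mind" approach the review recommends, the sketch below audits one of the data-related bias sources it categorizes: differential missingness across patient groups. The dataset, column names, and missingness pattern are all hypothetical.

```python
# Hypothetical sketch of a "bias-in-mind" data audit: compare missing-data
# rates across patient groups. A large gap flags candidate documentation
# bias that a model trained on this data would silently inherit.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1_000
ehr = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),
    "pain_score": rng.normal(5, 2, size=n),
})
# Simulate differential documentation: group B's pain scores are
# recorded less often than group A's.
drop = (ehr["group"] == "B") & (rng.random(n) < 0.4)
ehr.loc[drop, "pain_score"] = np.nan

# Per-group missingness rates.
print(ehr.groupby("group")["pain_score"].apply(lambda s: s.isna().mean()))
```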
Collapse
|
46
|
Wang W, Wang Y, Chen L, Ma R, Zhang M. Justice at the Forefront: Cultivating felt accountability towards Artificial Intelligence among healthcare professionals. Soc Sci Med 2024; 347:116717. [PMID: 38518481 DOI: 10.1016/j.socscimed.2024.116717] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Revised: 02/10/2024] [Accepted: 02/20/2024] [Indexed: 03/24/2024]
Abstract
The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on the implementation of this framework. Drawing on a fuzzy-set qualitative comparative analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that (a) focusing on justice and autonomy in AI ethics enactment, along with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; (b) fostering felt accountability towards AI necessitates establishing trust in AI functionality in high-complexity hospitals; and (c) prioritizing justice in AI ethics enactment and trust in AI reliability is essential in low-complexity hospitals.
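For readers unfamiliar with fsQCA, here is a minimal sketch of its core set-theoretic measures, consistency and coverage, computed on simulated membership scores. The variable names echo the study's constructs, but the data are invented, and the full fsQCA procedure (calibration, truth-table analysis, minimization) is not reproduced.

```python
# Minimal fsQCA sketch: how strongly membership in a configuration
# (e.g., "justice-focused ethics enactment AND trust in AI") implies
# membership in the outcome ("felt accountability"). Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 401  # same sample size as the study; the scores below are invented

justice = rng.random(n)              # fuzzy membership scores in [0, 1]
trust = rng.random(n)
accountability = 0.8 * np.minimum(justice, trust) + 0.2 * rng.random(n)

# Fuzzy-set AND is the minimum of the condition memberships.
condition = np.minimum(justice, trust)

# Consistency of "condition is sufficient for outcome":
# sum(min(X, Y)) / sum(X); values near 1 indicate a consistent recipe.
consistency = np.minimum(condition, accountability).sum() / condition.sum()

# Coverage: how much of the outcome this recipe accounts for.
coverage = np.minimum(condition, accountability).sum() / accountability.sum()

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```

In fsQCA, configurations passing a consistency threshold (commonly around 0.8) are interpreted as sufficient "recipes" for the outcome, which is how the study can report distinct recipes for high- and low-complexity hospitals.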
Collapse
Affiliation(s)
- Weisha Wang
- Research Center for Smarter Supply Chain, Business School, Soochow University, 50 Donghuan Road, Suzhou, 215006, China.
| | - Yichuan Wang
- Sheffield University Management School, University of Sheffield, Conduit Rd, Sheffield, S10 1FL, United Kingdom.
| | - Long Chen
- Brunel University London, United Kingdom.
| | - Rui Ma
- Greenwich Business School, University of Greenwich, United Kingdom.
| | - Minhao Zhang
- University of Bristol School of Management, University of Bristol, United Kingdom.
| |
Collapse
|
47
|
Pesapane F, Giambersio E, Capetti B, Monzani D, Grasso R, Nicosia L, Rotili A, Sorce A, Meneghetti L, Carriero S, Santicchia S, Carrafiello G, Pravettoni G, Cassano E. Patients' Perceptions and Attitudes to the Use of Artificial Intelligence in Breast Cancer Diagnosis: A Narrative Review. Life (Basel) 2024; 14:454. [PMID: 38672725 PMCID: PMC11051490 DOI: 10.3390/life14040454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Revised: 03/26/2024] [Accepted: 03/27/2024] [Indexed: 04/28/2024] Open
Abstract
Breast cancer remains the most prevalent cancer among women worldwide, necessitating advancements in diagnostic methods. The integration of artificial intelligence (AI) into mammography has shown promise in enhancing diagnostic accuracy. However, understanding patient perspectives, particularly considering the psychological impact of breast cancer diagnoses, is crucial. This narrative review synthesizes literature from 2000 to 2023 to examine breast cancer patients' attitudes towards AI in breast imaging, focusing on trust, acceptance, and demographic influences on these views. Methodologically, we employed a systematic literature search across databases such as PubMed, Embase, Medline, and Scopus, selecting studies that provided insights into patients' perceptions of AI in diagnostics. After rigorous screening, our review included a sample of seven key studies reflecting varied levels of patient trust in and acceptance of AI. Overall, we found a clear preference among patients for AI to augment rather than replace the diagnostic process, emphasizing the necessity of radiologists' expertise in conjunction with AI to enhance decision-making accuracy. This paper highlights the importance of aligning AI implementation in clinical settings with patient needs and expectations, including the need for human interaction in healthcare. Our findings advocate for a model in which AI augments the diagnostic process and underline the need for educational efforts to mitigate concerns and enhance patient trust in AI-enhanced diagnostics.
Collapse
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.R.); (L.M.); (E.C.)
| | - Emilia Giambersio
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (E.G.); (A.S.)
| | - Benedetta Capetti
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy; (B.C.); (D.M.); (R.G.); (G.P.)
| | - Dario Monzani
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy; (B.C.); (D.M.); (R.G.); (G.P.)
- Department of Psychology, Educational Science and Human Movement (SPPEFF), University of Palermo, 90133 Palermo, Italy
| | - Roberto Grasso
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy; (B.C.); (D.M.); (R.G.); (G.P.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy;
| | - Luca Nicosia
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.R.); (L.M.); (E.C.)
| | - Anna Rotili
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.R.); (L.M.); (E.C.)
| | - Adriana Sorce
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (E.G.); (A.S.)
| | - Lorenza Meneghetti
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.R.); (L.M.); (E.C.)
| | - Serena Carriero
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; (S.C.); (S.S.)
| | - Sonia Santicchia
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; (S.C.); (S.S.)
| | - Gianpaolo Carrafiello
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy;
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; (S.C.); (S.S.)
| | - Gabriella Pravettoni
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology, IRCCS, 20141 Milan, Italy; (B.C.); (D.M.); (R.G.); (G.P.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy;
| | - Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.R.); (L.M.); (E.C.)
| |
Collapse
|
48
|
Rivard L, Lehoux P, Rocha de Oliveira R, Alami H. Thematic analysis of tools for health innovators and organisation leaders to develop digital health solutions fit for climate change. BMJ LEADER 2024; 8:32-38. [PMID: 37407065 DOI: 10.1136/leader-2022-000697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 06/23/2023] [Indexed: 07/07/2023]
Abstract
OBJECTIVES While ethicists have largely underscored the risks raised by digital health solutions that operate with or without artificial intelligence (AI), limited research has addressed the need to also mitigate their environmental footprint and to equip health innovators and organisation leaders to meet responsibility requirements that go beyond clinical safety, efficacy, and ethics. Drawing on the Responsible Innovation in Health framework, this qualitative study asks: (1) what practice-oriented tools are available for innovators to develop environmentally sustainable digital solutions, and (2) how are organisation leaders supposed to support them in this endeavour? METHODS Focusing on a subset of 34 tools identified through a comprehensive scoping review (health sciences, computer sciences, engineering, and social sciences), our qualitative thematic analysis identifies and illustrates how two responsibility principles, environmental sustainability and organisational responsibility, are meant to be put into practice. RESULTS Guidance on making environmentally sustainable digital solutions is found in 11 tools, whereas organisational responsibility is described in 33 tools. The former focus on reducing energy and materials consumption as well as pollution and waste production. The latter highlight executive roles for data risk management, data ethics, and AI ethics. Only four tools translate environmental sustainability issues into tangible organisational responsibilities. CONCLUSIONS Recognising that key design and development decisions in the digital health industry are largely shaped by market considerations, this study indicates that significant work lies ahead for medical and organisation leaders to support the development of solutions fit for climate change.
Collapse
Affiliation(s)
- Lysanne Rivard
- Center for Public Health Research, Universite de Montreal, Montreal, Quebec, Canada
| | - Pascale Lehoux
- Center for Public Health Research, Universite de Montreal, Montreal, Quebec, Canada
- Department of Health Management, Evaluation and Policy, Universite de Montreal, Montreal, Quebec, Canada
| | | | - Hassane Alami
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
| |
Collapse
|
49
|
Cai G, Huang F, Gao Y, Li X, Chi J, Xie J, Zhou L, Feng Y, Huang H, Deng T, Zhou Y, Zhang C, Luo X, Xie X, Gao Q, Zhen X, Liu J. Artificial intelligence-based models enabling accurate diagnosis of ovarian cancer using laboratory tests in China: a multicentre, retrospective cohort study. Lancet Digit Health 2024; 6:e176-e186. [PMID: 38212232 DOI: 10.1016/s2589-7500(23)00245-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 10/26/2023] [Accepted: 11/22/2023] [Indexed: 01/13/2024]
Abstract
BACKGROUND Ovarian cancer is the most lethal gynecological malignancy. Timely diagnosis of ovarian cancer is difficult due to the lack of effective biomarkers. Laboratory tests are widely applied in clinical practice, and some have shown diagnostic and prognostic relevance to ovarian cancer. We aimed to systematically evaluate the value of routine laboratory tests for the prediction of ovarian cancer, and to develop a robust and generalisable ensemble artificial intelligence (AI) model to assist in identifying patients with ovarian cancer. METHODS In this multicentre, retrospective cohort study, we collected 98 laboratory tests and clinical features of women with or without ovarian cancer admitted to three hospitals in China between Jan 1, 2012, and April 4, 2021. A multi-criteria decision making-based classification fusion (MCF) risk prediction framework combined estimations from 20 AI classification models into an integrated prediction tool for ovarian cancer diagnosis. It was evaluated on an internal validation set (3007 individuals) and two external validation sets (5641 and 2344 individuals). The primary outcome was the prediction accuracy of the model in identifying ovarian cancer. FINDINGS Based on 52 features (51 laboratory tests and age), the MCF achieved an area under the receiver-operating characteristic curve (AUC) of 0·949 (95% CI 0·948-0·950) in the internal validation set, and AUCs of 0·882 (0·880-0·885) and 0·884 (0·882-0·887) in the two external validation sets. The model showed higher AUC and sensitivity than CA125 and HE4 in identifying ovarian cancer, especially in patients with early-stage disease. The MCF also yielded acceptable prediction accuracy when highly ranked laboratory tests indicative of ovarian cancer, such as CA125 and other tumour markers, were excluded, and it outperformed state-of-the-art models in ovarian cancer prediction. The MCF was wrapped as an ovarian cancer prediction tool and made publicly available to provide an estimated probability of ovarian cancer from input laboratory test values. INTERPRETATION The MCF model consistently achieved satisfactory performance in ovarian cancer prediction when using laboratory tests from the three validation sets. This model offers a low-cost, easily accessible, and accurate diagnostic tool for ovarian cancer. The included laboratory tests, not only CA125 (the highest-ranked test in importance for diagnostic assistance), contributed to the characterisation of patients with ovarian cancer. FUNDING Ministry of Science and Technology of China; National Natural Science Foundation of China; Natural Science Foundation of Guangdong Province, China; and Science and Technology Project of Guangzhou, China. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
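The general pattern of classifier fusion that the MCF framework builds on can be sketched as follows. This is not the authors' code: synthetic data stands in for the 52 clinical features, three base models stand in for the study's 20, and AUC-based weighting is an illustrative stand-in for the multi-criteria decision making step.

```python
# Simplified classifier-fusion sketch: combine predicted probabilities
# from several base models and score the fused output by ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for 52 features (51 laboratory tests + age).
X, y = make_classification(n_samples=3000, n_features=52, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GaussianNB()]

probas, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    p = m.predict_proba(X_va)[:, 1]      # probability of the positive class
    probas.append(p)
    weights.append(roc_auc_score(y_va, p))  # weight each model by its AUC

fused = np.average(probas, axis=0, weights=weights)
print(f"fused validation AUC = {roc_auc_score(y_va, fused):.3f}")
```

A production pipeline would derive the fusion weights on data held out from the final evaluation set; fusing probabilities rather than hard labels preserves each base model's confidence information, which is what makes the ensemble more robust than any single classifier.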
Collapse
Affiliation(s)
- Guangyao Cai
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Fangjun Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Yue Gao
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Xiao Li
- Department of Gynecologic Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Jianhua Chi
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Jincheng Xie
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Linghong Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Yanling Feng
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - He Huang
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Ting Deng
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Yun Zhou
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Chuyao Zhang
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Xiaolin Luo
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Xing Xie
- Department of Gynecologic Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Qinglei Gao
- Cancer Biology Research Centre (Key Laboratory of the Ministry of Education), Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
| | - Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China.
| | - Jihong Liu
- Department of Gynecologic Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China.
| |
Collapse
|
50
|
Gianola S, Bargeri S, Castellini G, Cook C, Palese A, Pillastrini P, Salvalaggio S, Turolla A, Rossettini G. Performance of ChatGPT Compared to Clinical Practice Guidelines in Making Informed Decisions for Lumbosacral Radicular Pain: A Cross-sectional Study. J Orthop Sports Phys Ther 2024; 54:222-228. [PMID: 38284363 DOI: 10.2519/jospt.2024.12151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/30/2024]
Abstract
OBJECTIVE: To compare the accuracy of an artificial intelligence chatbot against clinical practice guideline (CPG) recommendations in answering complex clinical questions on lumbosacral radicular pain. DESIGN: Cross-sectional study. METHODS: We extracted recommendations from recent CPGs for diagnosing and treating lumbosacral radicular pain. Corresponding clinical questions were developed and posed to OpenAI's ChatGPT (GPT-3.5). We compared ChatGPT answers to CPG recommendations by assessing (1) the internal consistency of ChatGPT answers, measured as the percentage of text wording similarity when a clinical question was posed 3 times; (2) the reliability between 2 independent reviewers in grading ChatGPT answers; and (3) the accuracy of ChatGPT answers compared with CPG recommendations. Reliability was estimated using Fleiss' kappa (κ) coefficients, and accuracy by interobserver agreement as the frequency of agreements among all judgments. RESULTS: We tested 9 clinical questions. The internal consistency of ChatGPT's text answers was unacceptable across all 3 trials for all clinical questions (mean 49%, standard deviation 15%). Intrareliability (reviewer 1: κ = 0.90, standard error [SE] = 0.09; reviewer 2: κ = 0.90, SE = 0.10) and interreliability (κ = 0.85, SE = 0.15) between the 2 reviewers were "almost perfect." Agreement between ChatGPT answers and CPG recommendations was slight (33% of recommendations). CONCLUSION: ChatGPT performed poorly in internal consistency and in the accuracy of the indications it generated compared with clinical practice guideline recommendations for lumbosacral radicular pain. J Orthop Sports Phys Ther 2024;54(3):1-7. Epub 29 January 2024. doi:10.2519/jospt.2024.12151.
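The two quantitative measures the abstract names can be sketched as follows, with invented answers and ratings: pairwise text similarity across repeated answers for internal consistency, and Fleiss' kappa for rater agreement. The paper does not publish its code; difflib's sequence-matching ratio is one plausible way to operationalise "percentage of text wording similarity".

```python
# Sketch of the two measurements described in the abstract, on made-up data.
from difflib import SequenceMatcher
from itertools import combinations
import numpy as np

# (1) Internal consistency: mean pairwise similarity of three answers
# generated for the same clinical question (answers are invented).
answers = [
    "Imaging is not routinely recommended for lumbosacral radicular pain.",
    "Routine imaging is not recommended for radicular pain.",
    "MRI should be ordered for every patient with radicular pain.",
]
sims = [SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(answers, 2)]
print(f"mean pairwise similarity = {np.mean(sims):.0%}")

# (2) Fleiss' kappa: chance-corrected agreement among raters.
def fleiss_kappa(counts):
    """counts: items x categories matrix of rating counts per item."""
    n = counts.sum(axis=1)[0]                      # raters per item
    p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()                             # observed agreement
    p_e = ((counts.sum(axis=0) / counts.sum()) ** 2).sum()  # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# 9 questions rated by 2 reviewers as accurate/inaccurate (invented counts).
ratings = np.array([[2, 0], [2, 0], [0, 2], [2, 0], [1, 1],
                    [2, 0], [0, 2], [2, 0], [2, 0]])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")
```

On the conventional Landis and Koch scale, κ above 0.80 is "almost perfect" and values between 0.00 and 0.20 are "slight", which is the vocabulary the abstract uses to grade its reliability and accuracy results.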
Collapse
|